Fastest way to read part of 300 Gigabyte binary file

    SGaist (Lifetime Qt Champion)
    wrote on 7 Jan 2020, 17:07
    #38
    You can't just replace a parameter of one type with an array of that type. That's not how it works. And in any case, the value returned by map() is the address you'll have to pass to the unmap() function.

    You won't avoid using a loop of one form or another.

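    For illustration, a minimal sketch of such a loop (the file name and offsets are made up, not from the thread; each map() call is paired with an unmap() on the address that map() returned):

        QFile file("data.bin");                          // illustrative name
        if (!file.open(QIODevice::ReadOnly))
            return;                                      // handle the error properly in real code

        const qint64 offsets[] = {100, 200, 300};
        const qint64 sizes[]   = {4, 4, 4};
        QVector<qint32> values;

        for (int i = 0; i < 3; ++i) {
            uchar *p = file.map(offsets[i], sizes[i]);   // one map() call per region
            if (!p)
                continue;                                // map() returns nullptr on failure
            values.append(*reinterpret_cast<qint32*>(p));
            file.unmap(p);                               // pass back exactly what map() returned
        }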

    Please_Help_me_D wrote on 7 Jan 2020, 16:11:

      @SGaist I understand that I have offset and size parameters, and currently I use them as single values. If I want to map several regions of a file then I should use multiple offsets and multiple sizes, but the example below doesn't work:

          qint64 offset[] = {100, 200, 300};
          qint64 size[] = {4, 4, 4};
          qint32* memory = reinterpret_cast<qint32*>(file.map(offset, size));
      

      The error I get is:
      main.cpp:19:57: error: cannot initialize a parameter of type 'qint64' (aka 'long long') with an lvalue of type 'qint64 [3]'
      qfiledevice.h:127:23: note: passing argument to parameter 'offset' here

      JonB
      wrote on 7 Jan 2020, 17:24
      #39

      @Please_Help_me_D
      As @SGaist has said, you will need to make multiple calls to QFileDevice::map(), one for each of the distinct regions you want mapped.

        Please_Help_me_D
        wrote on 7 Jan 2020, 21:46
        #40

        @SGaist when I heard the word "loop" I finally got the idea :)
        Here is my code:

        #include <QCoreApplication>
        #include <QFile>
        #include <QVector>
        //#include <QIODevice>
        #include <armadillo>
        using namespace arma;
        
        int main()
        {
            char segyFile[]{"D:/STACK1_PRESTM.sgy"};
            QFile file(segyFile);
            qint64 fSize = file.size();
            qint64 N = 1734480;
            qint64 Nb = 2059*4;
            if (!file.open(QIODevice::ReadOnly)) {
                 //handle error
            }
            //qint32* memory = reinterpret_cast<qint32*>(file.map(3608, file.size()-3608));
            qint32* memory = new qint32;
            QVector<qint32> FFID(N);
            std::cout << "started..." << std::endl;
            wall_clock timer;
            timer.tic();
            for (int i = 0; i < N; i++){
                memory = reinterpret_cast<qint32*>(file.map(3600+i*Nb, 1));
                FFID[i] = *memory;
                //std::cout << *memory << std::endl;
            }
            double n0 = timer.toc();
            std::cout << n0 << std::endl;
            std::cout << "finished!" << std::endl;
        }
        
        

        Is it possible to store in memory all the values that I need? Right now I only have a pointer to a single value in the variable memory; then I could avoid assigning values to FFID.
        The timing results are almost the same:
        SSD internal

        • QFile::map 97 Seconds (previously it was 86)

        HDD internal

        • QFile::map 223 Seconds (previously it was 216)

        To check the reliability of the results I also repeated the experiment with whole-file mapping, as I did before, and the timings are the same. So there is no big difference between mapping the whole file and mapping many of its regions.

          SGaist (Lifetime Qt Champion)
          wrote on 8 Jan 2020, 07:25
          #41

          Is it possible to store in memory all the values that I need? Right now I only have a pointer to a single value in the variable memory; then I could avoid assigning values to FFID.

          I am not sure I understand that question. memory is a pointer to the start of the region you mapped. In any case, you are still not unmapping anything in your code, which is a bad idea.

          In order to be able to answer your question, please explain what you are going to do with the values you want to retrieve from that file.



            Please_Help_me_D
            wrote on 8 Jan 2020, 11:12
            #42

            @SGaist I forgot to unmap...
            Here, to get an array (or vector) of values in FFID, I use:

                for (int i = 0; i < N; i++){
                    memory = reinterpret_cast<qint32*>(file.map(3600+i*Nb, 1));
                    FFID[i] = *memory;
                    //std::cout << *memory << std::endl;
                }
            

            I do that because memory is a pointer to a single value. If I could write something like:

                for (int i = 0; i < N; i++){
                    memory[i] = reinterpret_cast<qint32*>(file.map(3600+i*Nb, 1));
                }
            

            then I could avoid using FFID.
            It doesn't make much sense in this case, but I'm just interested, and maybe this information will be useful in other situations in the future.


              jsulm (Lifetime Qt Champion)
              wrote on 8 Jan 2020, 11:26
              #43

              @Please_Help_me_D said in Fastest way to read part of 300 Gigabyte binary file:

              memory is a pointer to a single value

              No. It is a pointer to a chunk of memory (bytes); you can interpret that memory however you like. You can use memory as an array (in C/C++ an array is simply a pointer to its first element):

              FFID[i] = memory[i];
              

              So, there is really no need to map inside the loop.
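
              A minimal sketch of that idea, under the assumption that the N values really are contiguous qint32s starting at offset 3600 (the offset and variable names follow the earlier posts):

                  // Map the region once, read it as an array, unmap once.
                  uchar *base = file.map(3600, N * qint64(sizeof(qint32)));
                  if (base) {
                      const qint32 *memory = reinterpret_cast<const qint32*>(base);
                      for (qint64 i = 0; i < N; ++i)
                          FFID[i] = memory[i];          // plain array indexing, no map() inside the loop
                      file.unmap(base);
                  }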



                JonB
                wrote on 8 Jan 2020, 11:35
                #44

                @jsulm
                I'm afraid this is not what he means/how he is using memory. There are quite distinct, separate, non-contiguous areas of his memory-mapped file he wishes to access. He wishes to call QFile::map() many times, each one mapping a separate area of memory. He will need to retain those mapped addresses so that he can later unmap() them.

                He should change to an array/list of memoryMappings. I'm not a C++ expert, but his code should be more like:

                QVector<qint32*> memoryMappings(N);
                for (int i = 0; i < N; i++){
                    memoryMappings[i] = reinterpret_cast<qint32*>(file.map(3600+i*Nb, 1));
                    FFID[i] = *memoryMappings[i];
                }
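
                As a follow-up sketch (not part of the original post), the retained addresses could later be released, once the values are no longer needed, by handing each one back to unmap():

                    for (int i = 0; i < N; i++) {
                        if (memoryMappings[i])
                            file.unmap(reinterpret_cast<uchar*>(memoryMappings[i]));
                    }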
                
                  Please_Help_me_D
                  wrote on 8 Jan 2020, 12:16
                  #45

                  @JonB yes, that was exactly what I wanted!
                  Despite my humble knowledge of C/C++ programming, I got an idea :)
                  If I map the address of the first value that I want to read (bytes 3600-3604), then calling:

                  memory
                  

                  would show me the address of that value. My file is stored contiguously on disk, so the second qint32 value has to be at the (memory+4) address. So if I call:

                  first_value = *memory;
                  second_value = *(memory+4);
                  third_value = *(memory+8);
                  

                  Should this work? Would it be faster? I'm going to try


                    JonB
                    wrote on 8 Jan 2020, 12:31
                    #46

                    @Please_Help_me_D
                    Huh? Do you mean you are intending to change the physical file content/format to move the data points you want to retrieve so that they are contiguous? That seems pretty surprising to me; one would assume the format is dictated by something external to your program. But then, you have never explained what this data/file is all about....


                      Please_Help_me_D
                      wrote on 8 Jan 2020, 12:54
                      #47

                      @JonB no, I don't want to change the content of the file. My file is structured like this:

                      • the first 3600 bytes describe the rest of the file. Here I get the information on how many rows Nb and columns N I have

                      • the rest of the file is N repetitions of Nb bytes. We can represent this part as a matrix with Nb rows (or bytes, if we multiply by 4) and N columns, and my task is to read a single row of this matrix; in other words I need to read every Nb-th byte starting from some starting byte (say 3600 or 3604 or so).
                        Actually it is a little more complicated: some rows of this "matrix" are qint16, others qint32 and single (float).
                        Here is what I do, and I get the correct values for the first few qint32 rows:

                          qint64 N = 44861;
                          qint64 Nb = 100;
                          memory = reinterpret_cast<qint32*>(file.map(3600, 4));
                          for (int i = 0; i < N; i++){
                              std::cout << memory+i << std::endl; //  adress
                              std::cout << *(memory+i) << std::endl; // value
                          }
                      

                      But my program breaks when I try:

                          qint64 N = 44861;
                          qint64 Nb = 100;
                          memory = reinterpret_cast<qint32*>(file.map(3600, 4));
                          for (int i = 0; i < N; i++){
                              std::cout << memory+i*Nb << std::endl;
                              std::cout << *(memory+i*Nb) << std::endl;
                          }
                      

                      Application output:
                      15:54:06: C:\Users\tasik\Documents\Qt_Projects\build-untitled1-Desktop_Qt_5_12_6_MSVC2017_64_bit-Release\release\untitled1.exe starts...
                      15:54:09: C:\Users\tasik\Documents\Qt_Projects\build-untitled1-Desktop_Qt_5_12_6_MSVC2017_64_bit-Release\release\untitled1.exe completed with code -1073741819

                        Please_Help_me_D
                        wrote on 8 Jan 2020, 13:04
                        #48

                        It seems that this works only for 124*4 bytes.
                        I just tested how many iterations complete before the program breaks, for different Nb:

                            for (int i = 0; i < N; i++){
                                std::cout << *(memory+i*Nb) << std::endl;
                            }
                        
                        • Nb = 1, max_iterator_i = 124
                        • Nb = 2, max_iterator_i = 62
                        • Nb = 4, max_iterator_i = 31

                        So I think my idea is not as good as I thought :)
                          SGaist (Lifetime Qt Champion)
                          wrote on 8 Jan 2020, 13:10
                          #49

                          @Please_Help_me_D said in Fastest way to read part of 300 Gigabyte binary file:

                          memory = reinterpret_cast<qint32*>(file.map(3600, 4));

                          You are mapping a region of 4 bytes yet trying to read way past that.
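
                          A minimal sketch of the corresponding fix, assuming the layout described above (one 4-byte value every Nb bytes, starting at offset 3600, with Nb given in bytes): map a single region large enough to cover everything you intend to read, then stride through it.

                              // Map one region covering all N values, then read with a byte stride.
                              const qint64 start  = 3600;
                              const qint64 length = (N - 1) * Nb + 4;      // up to and including the last 4-byte value
                              uchar *base = file.map(start, length);
                              if (base) {
                                  QVector<qint32> FFID(N);
                                  for (qint64 i = 0; i < N; ++i)
                                      FFID[i] = *reinterpret_cast<qint32*>(base + i * Nb);
                                  file.unmap(base);
                              }

                          On a 64-bit build, mapping one large region mainly reserves address space; the RAM actually consumed as the pages are touched is what the later posts in this thread discuss.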



                            JonB
                            wrote on 8 Jan 2020, 13:30
                            #50

                            @Please_Help_me_D
                            I give up, I really don't understand what you think you are trying to achieve.

                            If the data you want to fetch is physically scattered all over the file, as you originally said (if that hasn't changed), you are wasting your time trying to miraculously "coalesce/adjacentise" the data access in memory via mapping. It is a vain attempt. Whichever way you look at it, if you have a physical hard disk it is going to have to seek/move the head to reach discontinuous areas. That is what will "take time", and there is nothing you can do about it...


                              Please_Help_me_D
                              wrote on 8 Jan 2020, 13:44
                              #51

                              @SGaist yes, thank you.
                              @JonB I was wrong. Thank you for the explanation.

                                Please_Help_me_D
                                wrote on 14 Aug 2020, 01:35
                                #52

                                Hi all again,

                                 I just noticed one thing:
                                 if I iterate through a mapped file of 14 gigabytes, I can watch memory consumption eat up 4 GB of RAM in about 10 seconds. After that I have to stop the execution because of my RAM limit, but there is no sign that it is going to stop growing.

                                 For example, this code reproduces what I describe on Windows 10 x64, Qt 5.14.0, MSVC 2017 64-bit:

                                     QFile *qFile = new QFile("myBigFile");
                                     qFile->open(QIODevice::ReadOnly);        // map() needs an open file
                                     uchar* memFile_uchar = qFile->map(0, qFile->size());
                                     int val;
                                     size_t I = qFile->size();
                                     for(size_t i = 0; i < I; i++){
                                         val = memFile_uchar[i];
                                     }
                                

                                 Hope somebody is able to explain that...

                                 PS: When I was using Matlab and its memory-mapping facility, I saw similar behaviour there.


                                  JonB
                                  wrote on 14 Aug 2020, 09:21
                                  #53

                                  @Please_Help_me_D
                                  I'm not sure what you're asking here. You are mapping the whole of the file. As you begin to access data in the mapped area it gets brought into memory, and that takes up space. If you have limited memory, this is not a good idea.

                                  I haven't used memory mapping myself, but presumably if you want to keep memory usage down you have to do something like only map partial areas of the file at a time (arguments to map()), and release previously mapped areas (unmap()). You'd have to test whether that actually results in less memory usage.

                                  If you are limited in memory compared to the size of the file, perhaps you shouldn't be using memory mapping at all. File seeking to desired data won't have a memory overhead. In the code you show you are reading the data just once, so there may not be much difference. Have you actually measured performance with file versus memory-map access?
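
                                  A minimal sketch of that chunked approach (the chunk size and file name are illustrative, not from the thread):

                                      const qint64 chunkSize = 256 * 1024 * 1024;          // 256 MB per mapping, illustrative
                                      QFile file("myBigFile");
                                      if (file.open(QIODevice::ReadOnly)) {
                                          const qint64 total = file.size();
                                          for (qint64 offset = 0; offset < total; offset += chunkSize) {
                                              const qint64 len = qMin(chunkSize, total - offset);
                                              uchar *chunk = file.map(offset, len);
                                              if (!chunk)
                                                  break;                                   // handle the error properly in real code
                                              // ... process chunk[0 .. len-1] here ...
                                              file.unmap(chunk);                           // release before mapping the next chunk
                                          }
                                      }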


                                    Please_Help_me_D
                                    wrote on 14 Aug 2020, 16:38
                                    #54

                                    @JonB said in Fastest way to read part of 300 Gigabyte binary file:

                                    I'm not sure what you're asking here. You are mapping the whole of the file. As you begin to access data in the mapped area it gets brought into memory, and that takes up space. If you have limited memory, this is not a good idea.

                                    Well, this helped me. So I divide my file into portions and unmap() each portion when it is no longer needed. In this case there is no such memory consumption.
                                    Thank you!


                                      Natural_Bugger
                                      wrote on 15 Aug 2020, 17:54
                                      #55

                                      @Please_Help_me_D

                                      http://www.c-jump.com/bcc/c155c/MemAccess/MemAccess.html
                                      https://stackoverflow.com/questions/14324709/c-how-to-directly-access-memory

                                      Not sure if my input helps, but I'm going to try anyway...
                                      You could try to dynamically create very large arrays in memory,
                                      using try/catch and catching the exception to see how much space you can reserve in your RAM,
                                      and automatically scaling down until no exception is thrown and your array fits (a sketch of this follows at the end of this post).
                                      Then read the file in chunks according to the space reserved in your RAM.

                                      Store the data in a text file or so, in case your program crashes halfway through the file and you have to start all over again;
                                      a sort of snapshot method.

                                      https://stackoverflow.com/questions/2513505/how-to-get-available-memory-c-g

                                      What does the data look like?

                                      ChiliTomatoNoodle taught me about structs.
                                      https://www.youtube.com/user/ChiliTomatoNoodle
                                      So you could make an array of structs that fits your data.
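
                                      A minimal sketch of the scale-down idea above (the element type and starting size are illustrative, not from the thread):

                                          #include <new>        // std::bad_alloc
                                          #include <vector>

                                          // Try progressively smaller buffers until one fits in available memory.
                                          std::vector<char> buffer;
                                          size_t want = size_t(8) * 1024 * 1024 * 1024;   // start at 8 GB, illustrative
                                          while (want > 0) {
                                              try {
                                                  buffer.resize(want);
                                                  break;                                  // allocation succeeded
                                              } catch (const std::bad_alloc &) {
                                                  want /= 2;                              // scale down and retry
                                              }
                                          }
                                          // buffer.size() is now the chunk size with which to read the file.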


                                        Natural_Bugger
                                        wrote on 16 Aug 2020, 16:14
                                        #56

                                        @Please_Help_me_D

                                        This might help you out even more:

                                        https://stackoverflow.com/questions/7749066/how-to-catch-out-of-memory-exception-in-c

                                        https://stackoverflow.com/questions/23587837/c-allocating-large-array-on-heap-gives-out-of-memory-exception


                                          Please_Help_me_D
                                          wrote on 18 Aug 2020, 22:58
                                          #57

                                          @Natural_Bugger thank you for the help!
                                          My data is binary files ranging from a few GB to hundreds of GB. I read and write them in portions, and I also use OpenMP to read them in parallel.
                                          I have now solved my task and I'm busy with another one at the moment, but I think a little later I will try your solution.
