Fastest way to read part of 300 Gigabyte binary file
-
@SGaist if it is possible then I would try it. Could you please give me some hints on how to do that?
Also, do you know if it is possible to define an array (or vector) of indices that I want to read, and instead of writing a loop just write something like FFID[ind0] = memory[ind1];, where ind0 is an array (vector) = {0, 1, 2, 3, ...} and ind1 is an array (vector) = {0, 8000, 16000, 24000, ...}? -
Well, the first parameter is an offset and the second is a size so you could jump from point to point.
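For reference, a minimal sketch of that idea (the file name and offsets here are made up; error handling is omitted):

#include <QFile>

int main()
{
    QFile file("data.bin");                  // hypothetical file name
    if (!file.open(QIODevice::ReadOnly))
        return 1;
    qint64 offset = 8000;                    // made-up jump target
    qint64 size = 4;                         // only the bytes actually needed
    uchar *window = file.map(offset, size);  // map a small window, not the whole file
    if (window) {
        // ... use window[0] .. window[size-1] here ...
        file.unmap(window);                  // release the window before the next jump
    }
}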
-
@SGaist but as far as I know the offset and the size are single values. If I need to get the 10th, 20th, 30th elements then I need multiple offsets, because the offset is a number of bytes from the beginning of the file. Or am I misunderstanding something?
-
@SGaist @JonB I've tried a few ways to read a 115 MB file in the way I described above (read every n-th byte). The results:
fread/fseek = 0.28 seconds
QFile::map = 0.06 seconds
std::ifstream/seekg = 0.35 seconds
_read/_lseek = 0.29 seconds
So the fastest is the memory-mapping technique, and it seems I'm going to use it. Since I don't fully understand how to optimize the code with QFile::map, could you please explain how to change it? My data consists of qint16, qint32 and float values: something like first 10 bytes of qint16, then 12 bytes of qint32 and then 1000 bytes of single-precision float, and this triplet (10 bytes -> 12 bytes -> 1000 bytes) repeats until the end of the file. Is it possible to map the whole file in this complex format?
If not, how do I map it as qint32 rather than uchar? Unfortunately, in my example below I could only map it as uchar.
I use Armadillo only for timings here.

#include <QCoreApplication>
#include <QFile>
#include <QVector>
#include <iostream>
#include <armadillo>

using namespace arma;

int main()
{
    char segyFile[]{"C:/Users/tasik/Documents/Qt_Projects/raw_le.sgy"};
    QFile file(segyFile);
    if (!file.open(QIODevice::ReadOnly)) {
        // handle error
    }
    uchar *memory = file.map(3608, file.size() - 3608);
    if (memory) {
        std::cout << "started..." << std::endl;
        wall_clock timer;          // Armadillo wall-clock timer
        qint64 N = 44861;          // number of 4-byte values to extract
        qint64 Nb = 2640;          // stride between values, in bytes
        QVector<uchar> FFID(N * 4);
        timer.tic();
        for (qint64 i = 0; i < N; i++) {
            // copy the 4 bytes of value i into consecutive slots of FFID
            FFID[4 * i]     = memory[i * Nb];
            FFID[4 * i + 1] = memory[i * Nb + 1];
            FFID[4 * i + 2] = memory[i * Nb + 2];
            FFID[4 * i + 3] = memory[i * Nb + 3];
        }
        double n0 = timer.toc();
        std::cout << n0 << std::endl;
        std::cout << "finished!" << std::endl;
    }
}
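On the "complex format" question: map() only ever hands back bytes, but those bytes can be overlaid with a packed struct. A sketch under the 10/12/1000-byte layout described above (the field names are invented, and it assumes the file's byte order matches the host's and that memory is suitably aligned):

#pragma pack(push, 1)
struct Record {              // 1022 bytes per record
    qint16 h16[5];           // 10 bytes of qint16
    qint32 h32[3];           // 12 bytes of qint32
    float  samples[250];     // 1000 bytes of single-precision float
};
#pragma pack(pop)

const Record *records = reinterpret_cast<const Record *>(memory);
qint32 v = records[0].h32[0];   // first qint32 of the first record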
-
I forgot to mention that in the previous result the file C:/Users/tasik/Documents/Qt_Projects/raw_le.sgy was on the SSD. But when I put the file on the internal (local) HDD there was no difference in timing.
-
@Please_Help_me_D
Yes, that would figure then! Meanwhile, I thought earlier on you were saying the file was on the network; that's a very different situation from a local SSD... -
@JonB maybe I'm confusing things :)
My computer (laptop) has two devices to store data: an SSD and an HDD. Windows is installed on the SSD. But neither of the two has enough free space to store a 300 Gigabyte file. So when I do some manipulation with this file I use an external HDD (a third device) :)
Now I got the idea to check the speed of reading this small file (115 MB) after copying it to the external HDD as G:/raw_le.sgy. Here is the result:
fread/fseek = 0.5 seconds
QFile::map = 0.06 seconds (the only one that didn't change)
std::ifstream/seekg = 0.6 seconds
_read/_lseek = 0.4 seconds
I have to note that when the external HDD is plugged in, the timings are less stable. My laptop starts to work a little harder from time to time...
But the interesting thing is that the external HDD increases the time of all the methods except memory mapping. Of course I only read 0.18 MB of a 115 MB file, so the fact that the external HDD is attached via USB barely affects (is negligible in) the resulting timings, and for such small reads it doesn't matter whether the data is on an internal or an external device. But when dealing with a big file (300 GB) I suppose it plays the dominant role in the timings. I can't check that now because I don't have enough space on the laptop, but I'm going to try with 13 or 27 GB of data right now :D that should be interesting, I need to prepare the space)) -
@Please_Help_me_D
Especially with memory mapping, I would think caching could easily affect your test timings. You'd better be timing only from a clean OS boot! I would also guess that memory mapping might suffer from the size of the file, as caching may be a factor. Testing with a 100MB file (which can easily be memory-cached) may not be representative of performance when the real file is 300GB.
-
@JonB said in Fastest way to read part of 300 Gigabyte binary file:
I would also guess that memory mapping might suffer from the size of the file, as caching may be a factor. Testing with a 100MB file (which can easily be memory-cached) may not be representative of performance when the real file is 300GB.
Yes, that may well be true... I need to test it.
I will try to stop most programs (the anti-virus first of all) before launching my app. -
@JonB I got the results. My file is 13.957 Gigabytes (about 14 GB). I read 1734480 int values, which is 6.9 MB. The results:
SSD internal
- fread/fseek 213 seconds
- QFile::map 86 seconds
HDD internal
- fread/fseek 350 seconds
- QFile::map 216 seconds
HDD external
- fread/fseek 1058 seconds
- QFile::map 655 seconds
So the fastest way is to use memory mapping. And the most crucial factor when working with big data is whether I use the external HDD or an internal SSD/HDD.
But I still need to optimize my QFile::map code, as I said a few messages above. Does anybody know how to do that? For fread/fseek I used this code:
#include <iostream>
#include <stdio.h>
#include <QtEndian>
#include <QVector>
#include <boost/endian/buffers.hpp>
#include <boost/static_assert.hpp>
#include <armadillo>

using namespace arma;
using namespace boost::endian;

int main()
{
    char segyFile[]{"G:/DATA/STACK1_PRESTM.sgy"};
    FILE *pFile;
    // from byte 3600 on, segyFile can be viewed as a matrix with nRow rows and nCol columns
    unsigned short int nRow = 2060;
    unsigned long int nCol = 1734480;
    QVector<quint32_le> FFID(nCol);
    pFile = fopen(segyFile, "rb");
    if (pFile == nullptr) {
        std::cout << "Error opening segy-file!" << std::endl;
        return 0;
    }
    // read 4 bytes, then skip (nRow-1)*4 bytes, starting from byte 3608;
    // in other words, read only the 3rd row
    wall_clock timer;
    timer.tic();
    fseek(pFile, 3608, SEEK_SET);
    long int offset = (nRow - 1) * 4;
    for (unsigned long int i = 0; i < nCol; i++) {
        fread(&FFID[i], 4, 1, pFile);
        fseek(pFile, offset, SEEK_CUR);
    }
    double n0 = timer.toc();
    std::cout << n0 << std::endl;
}
And for QFile::map I used:
#include <QCoreApplication>
#include <QFile>
#include <QVector>
#include <iostream>
#include <armadillo>

using namespace arma;

int main()
{
    char segyFile[]{"G:/DATA/STACK1_PRESTM.sgy"};
    QFile file(segyFile);
    if (!file.open(QIODevice::ReadOnly)) {
        // handle error
    }
    uchar *memory = file.map(3608, file.size() - 3608);
    if (memory) {
        std::cout << "started..." << std::endl;
        wall_clock timer;           // Armadillo wall-clock timer
        qint64 N = 1734480;         // number of 4-byte values to extract
        qint64 Nb = 2060 * 4;       // stride between values: nRow * 4 bytes
        QVector<uchar> FFID(N * 4);
        timer.tic();
        for (qint64 i = 0; i < N; i++) {
            // copy the 4 bytes of value i into consecutive slots of FFID
            FFID[4 * i]     = memory[i * Nb];
            FFID[4 * i + 1] = memory[i * Nb + 1];
            FFID[4 * i + 2] = memory[i * Nb + 2];
            FFID[4 * i + 3] = memory[i * Nb + 3];
        }
        double n0 = timer.toc();
        std::cout << n0 << std::endl;
        std::cout << "finished!" << std::endl;
    }
}
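One possible tightening of that loop (a sketch, not a tested drop-in: it types the destination as qint32 and copies each value with a single memcpy, which also sidesteps alignment issues; it needs #include <cstring> and assumes the file's byte order matches the host's):

QVector<qint32> FFID(N);   // one qint32 per value instead of 4 separate uchars
timer.tic();
for (qint64 i = 0; i < N; i++) {
    // one 4-byte copy per stride; memcpy makes no alignment assumptions
    std::memcpy(&FFID[i], memory + i * Nb, sizeof(qint32));
}
double n0 = timer.toc();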
-
@Please_Help_me_D
out of curiosity, do you build and run your tests in release mode? Compiler optimizations could go a long way toward improving the speed if so far you have only run debug builds.
-
@J-Hilk
Out of interest: I hope you are right, but I don't see much, in code which spends its time seeking and reading a few bytes out of an enormous file, that will benefit from any code optimization. Presumably all the time is being taken in the OS calls themselves... -
@JonB said in Fastest way to read part of 300 Gigabyte binary file:
Presumably all the time is being taken in the OS calls themselves....
You mean most of the time is lost during the network access calls? Possibly. But I would expect at least a couple of seconds of improvement anyway :)
-
@J-Hilk
I would not; I can't see how it would save anything here. But that aside, the OP wrote earlier:
@SGaist 15155 seconds (4 hours 12 min) it took to read these data.
Your "couple of seconds" is not going to be ground-breaking on that timing, is it? ;-)
OK, the OP has since shown a newer, quicker timing. By all means try release optimization; worth a go :)
-
@J-Hilk Yes, I did all the experiments in release mode.
-
@Please_Help_me_D said in Fastest way to read part of 300 Gigabyte binary file:
uchar *memory = file.map(3608, file.size()-3608);
Is it possible to represent *memory as an array of qint32 rather than uchar?
-
@Please_Help_me_D said in Fastest way to read part of 300 Gigabyte binary file:
Is it possible to represent *memory as an array of qint32 rather than uchar?
Sure, cast the pointer to qint32*
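For example (reusing the OP's memory pointer; whether this is safe to do is discussed just below):

qint32 *values = reinterpret_cast<qint32 *>(memory);
qint32 first = values[0];   // interprets bytes 0..3 of the mapped region as one qint32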
-
@jsulm
Your answer is in principle correct. However, should we warn the OP? I'm thinking this will only "work" if the return result from the QFile::map() he calls (given his offsets) is suitably aligned on a 32-bit boundary for a qint32 * to address without faulting. I don't see the Qt docs mentioning whether this is the case for the normal uchar * return result?
-
@JonB
well, if you take a look at the loop so far:

for (qint64 i = 0; i < N; i++) {
    FFID[4 * i]     = memory[i * Nb];
    FFID[4 * i + 1] = memory[i * Nb + 1];
    FFID[4 * i + 2] = memory[i * Nb + 2];
    FFID[4 * i + 3] = memory[i * Nb + 3];
}
there are no checks inside the loop, nor before it, so it's going to hard-crash anyway when the mapping is not int32_t-aligned.
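For what it's worth, a sketch of such a check with a memcpy fallback that works at any alignment (FFID32 here is a hypothetical QVector<qint32> of size N; needs #include <cstring>):

bool aligned = reinterpret_cast<quintptr>(memory) % alignof(qint32) == 0
            && Nb % alignof(qint32) == 0;
if (aligned) {
    for (qint64 i = 0; i < N; i++)
        FFID32[i] = *reinterpret_cast<const qint32 *>(memory + i * Nb);  // direct 32-bit reads
} else {
    for (qint64 i = 0; i < N; i++)
        std::memcpy(&FFID32[i], memory + i * Nb, sizeof(qint32));        // alignment-safe copies
}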