Fastest way to read part of 300 Gigabyte binary file
-
I forgot to mention that in the previous result the file C:/Users/tasik/Documents/Qt_Projects/raw_le.sgy was on the SSD. But when I put the file on the internal (local) HDD there was no difference in timing.
@Please_Help_me_D
Yes, that would figure then! Meanwhile, I thought earlier on you were saying the file was on the network, that's a very different situation from a local SSD.... -
@JonB maybe I do some confusing things :)
My computer (laptop) has two devices to store data: an SSD and an HDD. Windows is installed on the SSD. But neither of the two has enough free space to store a 300 gigabyte file, so when I do some manipulation with this file I use an external HDD (a third device) :)
Now I got the idea to check the read speed for this small data (115 megabytes) if I copy it to the external HDD as G:/raw_le.sgy. Here is the result:
fread/fseek = 0.5 seconds
QFile::map = 0.06 seconds (the only one that didn't change)
std::ifstream/seekg = 0.6 seconds
_read/_lseek = 0.4 seconds
I have to note that when the external HDD is plugged in the timings are less stable. My laptop starts to work a little harder from time to time...
But the interesting thing is that the external HDD increases the time of all the methods except memory mapping. Of course I only read 0.18 megabytes of a 115 megabyte file, so the fact that the external HDD is attached via USB has a negligible effect on the resulting timings, and we can see that for such small data it doesn't matter whether the file is on an internal or an external device. But when dealing with a big file (300 gigabytes) I suppose that will have the dominant role in the timings. I can't check it now because I don't have enough space on the laptop, but I'm going to try with 13 or 27 gigabytes of data right now :D that should be interesting, I need to prepare the space)) -
@Please_Help_me_D
Especially with memory mapping, I would think caching could easily affect your test timings. You'd better time only from a clean OS boot! I would also guess that memory mapping might suffer from the size of the file, as caching may be a factor. Testing it with a 100MB file (which can easily be memory cached) may not be representative of performance when the real file is 300GB.
-
@JonB said in Fastest way to read part of 300 Gigabyte binary file:
I would also guess that memory mapping might suffer from size of file, as caching may be a factor. Testing it with a 100MB file (which can be easily memory cached) may not be representative of performance when the real file will be 300GB.
Yes, that may well be true... I need to test it.
I will try to stop most programs (the anti-virus first of all) before launching my app. -
@JonB I got the result. My file is 13.957 gigabytes (about 14 gigabytes). I read 1734480 int values, which is equal to 6.9 megabytes. The result:
SSD internal
- fread/fseek 213 seconds
- QFile::map 86 seconds
HDD internal
- fread/fseek 350 seconds
- QFile::map 216 seconds
HDD external
- fread/fseek 1058 seconds
- QFile::map 655 seconds
So the fastest way is to use memory mapping. And the most crucial factor when working with big data is whether I use the external HDD or an internal SSD/HDD.
But I still need to optimize my QFile::map code that I posted a few messages above. Does anybody know how to do that?
For fread/fseek I used this code:
#include <iostream>
#include <stdio.h>
#include <QtEndian>
#include <QVector>
#include <armadillo>
using namespace arma;

int main()
{
    char segyFile[]{"G:/DATA/STACK1_PRESTM.sgy"};
    // from byte 3600 on we can view segyFile as a matrix
    // with nRow rows and nCol columns
    unsigned short int nRow = 2060;
    unsigned long int nCol = 1734480;
    QVector<quint32_le> FFID(nCol);

    FILE *pFile = fopen(segyFile, "rb");
    if (pFile == nullptr) {
        std::cout << "Error opening segy-file!" << std::endl;
        return 0;
    }

    // read 4 bytes, then skip (nRow-1)*4 bytes, starting from byte 3608;
    // in other words, read only the 3rd row of the matrix
    wall_clock timer;
    timer.tic();
    fseek(pFile, 3608, SEEK_SET);
    long int offset = (nRow - 1) * 4;
    for (unsigned long int i = 0; i < nCol; i++) {
        fread(&FFID[i], 4, 1, pFile);
        fseek(pFile, offset, SEEK_CUR);
    }
    double n0 = timer.toc();
    std::cout << n0 << std::endl;
    fclose(pFile);
}
And for QFile::map I used:
#include <QCoreApplication>
#include <QFile>
#include <QVector>
#include <iostream>
#include <armadillo>
using namespace arma;

int main()
{
    char segyFile[]{"G:/DATA/STACK1_PRESTM.sgy"};
    QFile file(segyFile);
    if (!file.open(QIODevice::ReadOnly)) {
        std::cout << "Error opening segy-file!" << std::endl;
        return 0;
    }

    uchar *memory = file.map(3608, file.size() - 3608);
    if (memory) {
        std::cout << "started..." << std::endl;
        wall_clock timer;
        qint64 N = 1734480;   // number of 4-byte values to read
        qint64 Nb = 2060 * 4; // byte stride between values: 4 bytes read + (2060-1)*4 skipped,
                              // matching the fread/fseek version
        QVector<uchar> FFID(N * 4);
        timer.tic();
        for (qint64 i = 0; i < N; i++) {
            // copy the 4 bytes of the i-th value into positions 4*i .. 4*i+3
            FFID[4 * i]     = memory[i * Nb];
            FFID[4 * i + 1] = memory[i * Nb + 1];
            FFID[4 * i + 2] = memory[i * Nb + 2];
            FFID[4 * i + 3] = memory[i * Nb + 3];
        }
        double n0 = timer.toc();
        std::cout << n0 << std::endl;
        std::cout << "finished!" << std::endl;
    }
}
-
@Please_Help_me_D
out of curiosity, do you build and run your tests in release mode? Compiler optimizations could go a long way in improving the speed if you have so far only run debug builds.
-
@J-Hilk
Out of interest: I hope you are right, but I don't see how code which spends its time seeking and reading a few bytes out of an enormous file will benefit much from any code optimization. Presumably all the time is being taken in the OS calls themselves.... -
@JonB said in Fastest way to read part of 300 Gigabyte binary file:
Presumably all the time is being taken in the OS calls themselves....
you mean, most time is lost during the network access calls? Possibly. But I would expect at least a couple of seconds' improvement anyway :)
-
@J-Hilk
I would not; I can't see how it would save anything here. But that aside, the OP wrote earlier: "@SGaist 15155 seconds (4 hours 12 min) it took to read these data."
Your "couple of seconds" is not going to be ground-breaking on that timing, is it? ;-)
OK, the OP has since shown a newer, quicker timing. By all means try release optimization, worth a go :)
-
@J-Hilk Yes, I did all the experiments in release mode.
-
@Please_Help_me_D said in Fastest way to read part of 300 Gigabyte binary file:
uchar *memory = file.map(3608, file.size()-3608);
is it possible to represent *memory as a heap of type qint32 rather than uchar?
-
@Please_Help_me_D said in Fastest way to read part of 300 Gigabyte binary file:
is it possible to represent *memory as a heap of type qint32 rather than uchar?
Sure, cast the pointer to qint32*
-
@jsulm
Your answer is in principle correct. However, should we warn the OP that I think this will only "work" if the return result from the QFile::map() call he makes (given his offsets) is suitably aligned on a 32-bit boundary for a qint32 * to dereference without faulting? I don't see the Qt docs mentioning whether this is the case for the usual uchar * return result? -
@JonB
well, if you take a look at the loop so far:
for(qint64 i = 0; i < N; i++){
    FFID[i] = memory[i*Nb];
    FFID[i+1] = memory[i*Nb+1];
    FFID[i+2] = memory[i*Nb+2];
    FFID[i+3] = memory[i*Nb+3];
}
there are no checks inside the loop nor before it, so it's going to hard-crash anyway when the file is not int32_t-aligned.
-
@J-Hilk
Umm, no, I don't see that. His current uchar *memory means it's only picking up bytes from there, and he made his FFID a QVector<uchar>. So he is copying one byte at a time (which is what I think he wants to get rid of), and the current code won't have an odd-boundary memory-alignment issue. But new code with qint32* in place of uchar* could have a problem....
If his offset is always like the example 7996, i.e. always divisible by 4, then I would guess the return result from QFile::map() will not show any problem. This is an issue which does not arise when reading numbers from a file, only when mapping, so just be aware of it. -
@JonB
really? And what guarantees that memory[i*Nb+3] will be part of the valid memory?
I assume this is what the OP wants to do:
QVector<uchar> FFID(N*4); -> QVector<qint32> FFID(N);
uchar *memory -> qint32 *memory
and
for(qint64 i = 0; i < N; i++){
    FFID[i] = memory[i*Nb];
}
-
@jsulm thank you, that works!
@JonB @J-Hilk I think I see what you are discussing and I'll keep that in mind.
If I map a part of the file whose size is not a multiple of 4 (like in the code below), my program doesn't output any error. The compiler says it was successfully built, the application output shows that it started, and one second later it is terminated.
#include <QCoreApplication>
#include <QFile>
#include <QVector>
#include <iostream>
#include <armadillo>
using namespace arma;

int main()
{
    char segyFile[]{"C:/Users/tasik/Documents/Qt_Projects/raw_le.sgy"};
    QFile file(segyFile);
    if (!file.open(QIODevice::ReadOnly)) {
        //handle error
    }
    // here the mapped length file.size()-3607 has a remainder on division by 4
    uchar* memory = file.map(3608, file.size() - 3607);
    (qint32*) memory;
    if (memory) {
        std::cout << "started..." << std::endl;
        wall_clock timer;
        qint64 N = 44861;
        qint64 Nb = 661 * 4;
        QVector<qint32> FFID(N);
        (uchar *)&FFID;
        timer.tic();
        for (qint64 i = 0; i < N; i++) {
            FFID[i] = memory[i * Nb];
            std::cout << FFID[i] << std::endl;
        }
        double n0 = timer.toc();
        std::cout << n0 << std::endl;
        std::cout << "finished!" << std::endl;
    }
}
-
@Please_Help_me_D said in Fastest way to read part of 300 Gigabyte binary file:
the application output shows that it started and one second later it is terminated.
Yes, that was my point. You won't get a compilation error; you would get a run-time "crash" on a line like FFID[i] = memory[i*Nb];. Under Linux you'd get a core dump (if enabled); under Windoze I don't know, but I would have thought it would bring up a message box of some kind.
However, I haven't got much time, and I don't think the code you've written reflects this. For a start, the statements
(qint32*) memory;
and
(uchar *)&FFID;
are no-ops (turn your compiler warning level up and you might get a "no effect" warning for these lines; you should always develop with the highest warning level you can). You haven't changed memory over to qint32*; what you seem to think is how to do casts is wrong. This is C/C++ stuff. You'll want something more like
qint32* memory = static_cast<qint32*>(file.map(3608, file.size()-3607));
or
qint32* memory = reinterpret_cast<qint32*>(file.map(3608, file.size()-3607));
but I haven't got time to sort you out. And if you do that, you need to understand how to index it then: it won't be the same offsets as you used when it was uchar*. Don't try to change to qint32* for your accesses if you don't know what you're doing cast-wise in C/C++! :) -
@JonB said in Fastest way to read part of 300 Gigabyte binary file:
qint32* memory = static_cast<qint32*>(file.map(3608, file.size()-3607));
thank you, but this gives me an error:
main.cpp:17:22: error: static_cast from 'uchar *' (aka 'unsigned char *') to 'qint32 *' (aka 'int *') is not allowed