Fastest way to read part of 300 Gigabyte binary file
-
Hi,
I have a binary file of about 300 gigabytes. To get general information about it I need to read every n-th byte. In my case I need to read every 8000-th byte (a 4-byte integer) of the file. So I wrote the code below to try it, and it has now been running for about 2 hours.
As far as I know it is slow because each fread call is expensive while fseek is pretty fast. So I thought that maybe if I could call fread only once and pass the offsets to all the bytes as a vector, I could improve the performance. Does Qt have something like that? Or what should I try?
By the way, my 300 gigabyte data file is on an external NTFS drive. I use Windows 10 x64, MSVC x64.

#include <iostream>
#include <stdio.h>
#include <QtEndian>
#include <QVector>
#include <armadillo>   // for wall_clock (missing in the original listing)

using namespace arma;

int main()
{
    char segyFile[]{"G:/DATA/CDP_FOR_REGLO.sgy"};
    FILE *pFile;
    unsigned long int N = 300000000000 / 8000;
    QVector<quint32_le> FFID(N); // FFID is a vector of size N, one number takes 4 bytes
    pFile = fopen(segyFile, "rb");
    if (pFile == nullptr) {
        std::cout << "Error opening segy-file!" << std::endl;
        return 0;
    }
    // read every 8000-th byte in a loop
    wall_clock timer;
    timer.tic();
    long int offset = 7996;
    for (unsigned long int i = 0; i < N; i++) {
        fread(&FFID[i], 4, 1, pFile);
        fseek(pFile, offset, SEEK_CUR); // seek forward from the current position
    }
    double n0 = timer.toc();
    std::cout << n0 << std::endl;
    fclose(pFile);
}
-
Hi,
I haven't used it but it looks like you could benefit from the map function.
Note that depending on what external storage your file is on, that could also be a bottleneck.
Hope it helps
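Something along these lines, perhaps (untested; the file name and the 8000-byte stride are just taken from your post):

// Minimal QFile::map() sketch: map the whole file and pick one 4-byte value
// every 8000 bytes. Error handling is kept to a minimum.
#include <QFile>
#include <QVector>
#include <QtEndian>
#include <iostream>

int main()
{
    QFile file("G:/DATA/CDP_FOR_REGLO.sgy");
    if (!file.open(QIODevice::ReadOnly)) {
        std::cout << "Error opening file" << std::endl;
        return 1;
    }
    uchar *memory = file.map(0, file.size());   // map the whole file read-only
    if (!memory) {
        std::cout << "map() failed" << std::endl;
        return 1;
    }
    const qint64 stride = 8000;                 // one 4-byte value every 8000 bytes
    const int count = int(file.size() / stride);
    QVector<quint32> values(count);
    for (int i = 0; i < count; ++i)
        values[i] = qFromLittleEndian<quint32>(memory + qint64(i) * stride);
    file.unmap(memory);
    std::cout << values.size() << " values read" << std::endl;
}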
@SGaist I will try it today, thank you! I will report here whether map is faster.
But if this is based on the memory mapping technique then it has some restrictions that I'm trying to avoid. For example, memory mapping only lets you map files that are located on your own computer. If you have two computers connected by a network (a local network, say) and the file is on the 2nd computer, then you can't map the file from the 1st computer. Something like that.
I encountered that problem when I chained two computers into a "cluster": using Matlab I tried memory mapping and got an error. -
@Please_Help_me_D
Assuming you are talking about memmap() et al.: no, it's likely not to work on a remote file!
At the risk of being shot down: you can't really do much better/faster than "seek-and-read". At 8,000 bytes apart, it won't help to read everything instead of seeking. You might try an unbuffered level like read()/lseek() instead of fread()/fseek() for what you want; it's worth a try, since you don't need the buffered reading that comes with the latter.
Reading from a 300GB file across a network is indeed going to take some time; 2 hours may not be long! The only way to really speed this up against a network file is to run the code on the server which has the file system local, and have a client request just what it needs the server to send remotely.
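Roughly what I mean with the MSVC low-level calls, for what it's worth (just a sketch, untested; the path and the stride are taken from your post):

// Unbuffered seek-and-read with _open/_lseeki64/_read on Windows.
#include <io.h>
#include <fcntl.h>
#include <stdio.h>      // SEEK_CUR
#include <vector>
#include <cstdint>
#include <iostream>

int main()
{
    const int fd = _open("G:/DATA/CDP_FOR_REGLO.sgy", _O_RDONLY | _O_BINARY);
    if (fd == -1) {
        std::cout << "Error opening file" << std::endl;
        return 1;
    }
    const long long stride = 8000;          // distance between the 4-byte values
    std::vector<uint32_t> values;
    uint32_t v = 0;
    while (_read(fd, &v, sizeof(v)) == sizeof(v)) {
        values.push_back(v);                // value is stored little-endian in the file
        if (_lseeki64(fd, stride - sizeof(v), SEEK_CUR) == -1)
            break;                          // stop on seek error
    }
    _close(fd);
    std::cout << values.size() << " values read" << std::endl;
}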
-
@SGaist It took 15155 seconds (4 hours 12 min) to read these data.
@JonB I'm going to try read()/lseek().

#include <QCoreApplication>
#include <QFile>
#include <QVector>
#include <armadillo>   // wall_clock timer
#include <iostream>

using namespace arma;

int main()
{
    char segyFile[]{"G:/DATA/CDP_FOR_REGLO.sgy"};
    QFile file(segyFile);
    if (!file.open(QIODevice::ReadOnly)) {
        // handle error
    }
    uchar *memory = file.map(0, file.size());
    if (memory) {
        std::cout << "started..." << std::endl;
        wall_clock timer;
        qint64 fSize = file.size();
        qint64 N = 43933814;
        qint64 Nb = 8000;
        QVector<uchar> FFID(N);
        timer.tic();
        for (qint64 i = 0; i < N; i++) {
            FFID[i] = memory[i * Nb];   // take one byte every Nb bytes
        }
        double n0 = timer.toc();
        std::cout << n0 << std::endl;
        std::cout << "finished!" << std::endl;
    }
}
-
Did you consider mapping only the parts that are pertinent to what you want to read?
-
@SGaist if that is possible then I would like to try it. Could you please give me some hints on how to do that?
Also, do you know if it is possible to define an array (or vector) of indexes that I want to read and, instead of a loop, just write something like FFID[ind0] = memory[ind1];, where ind0 is an array (vector) = {0, 1, 2, 3, ...} and ind1 is an array (vector) = {0, 8000, 16000, 24000, ...}? -
Well, the first parameter is an offset and the second is a size so you could jump from point to point.
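Something like this, perhaps (a sketch, untested; the window size is just an example, chosen as a multiple of the stride, and whether many smaller mappings beat one big mapping is something you would have to measure):

// Map a window at a time instead of the whole file, and take the strided
// values from each window before unmapping it.
#include <QFile>
#include <QVector>
#include <QtEndian>
#include <iostream>

int main()
{
    QFile file("G:/DATA/CDP_FOR_REGLO.sgy");
    if (!file.open(QIODevice::ReadOnly))
        return 1;
    const qint64 stride     = 8000;            // one value every 8000 bytes
    const qint64 windowSize = 8000 * stride;   // 64 MB, a multiple of the stride
    QVector<quint32> values;
    for (qint64 offset = 0; offset < file.size(); offset += windowSize) {
        const qint64 size = qMin(windowSize, file.size() - offset);
        uchar *window = file.map(offset, size);
        if (!window)
            break;
        for (qint64 pos = 0; pos + 4 <= size; pos += stride)
            values.append(qFromLittleEndian<quint32>(window + pos));
        file.unmap(window);
    }
    std::cout << values.size() << " values read" << std::endl;
}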
-
@SGaist but as far as I know the offset and the size are single values. If I need to get the 10th, 20th and 30th elements then I need multiple offsets, because the offset is the number of bytes from the beginning of the file. Or do I misunderstand something?
-
@SGaist @JonB I've tried a few ways to read a 115 megabyte file in the way I described above (read every n-th byte). The results:
fread/fseek = 0.28 seconds
QFile::map = 0.06 seconds
std::ifstream/seekg = 0.35 seconds
_read/_lseek = 0.29 seconds
So the fastest is the memory mapping technique, and it seems I'm going to use it. Since I don't fully understand how to optimize the code with QFile::map, could you please explain how to change it? My data consists of qint16, qint32 and float values: something like first 10 bytes of qint16, then 12 bytes of qint32 and then 1000 bytes of float, and this triplet (10 bytes -> 12 bytes -> 1000 bytes) repeats until the end of the file. Is it possible to map the whole file in this mixed format?
If not, how do I map it as qint32 rather than uchar? Unfortunately in my example below I could only map it as uchar.
I use Armadillo only for the timings here.

#include <QCoreApplication>
#include <QFile>
#include <QVector>
#include <armadillo>   // wall_clock timer
#include <iostream>

using namespace arma;

int main()
{
    char segyFile[]{"C:/Users/tasik/Documents/Qt_Projects/raw_le.sgy"};
    QFile file(segyFile);
    if (!file.open(QIODevice::ReadOnly)) {
        // handle error
    }
    uchar *memory = file.map(3608, file.size() - 3608);
    if (memory) {
        std::cout << "started..." << std::endl;
        wall_clock timer;
        qint64 fSize = file.size();
        qint64 N = 44861;
        qint64 Nb = 2640;
        QVector<uchar> FFID(N * 4);
        timer.tic();
        for (qint64 i = 0; i < N; i++) {
            FFID[4 * i]     = memory[i * Nb];     // copy the 4 bytes of the i-th value
            FFID[4 * i + 1] = memory[i * Nb + 1];
            FFID[4 * i + 2] = memory[i * Nb + 2];
            FFID[4 * i + 3] = memory[i * Nb + 3];
        }
        double n0 = timer.toc();
        std::cout << n0 << std::endl;
        std::cout << "finished!" << std::endl;
    }
}
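What I imagine is something like this (just a sketch; the number of values per record is guessed from the byte sizes above, the 3600-byte header is ignored, and little-endian data is assumed):

// map() always hands back raw bytes; the types are chosen when the bytes are
// converted. One record = 10 bytes of qint16 + 12 bytes of qint32 + 1000 bytes of float.
#include <QFile>
#include <QVector>
#include <QtEndian>
#include <cstring>
#include <iostream>

struct Record {
    QVector<qint16> shorts;   //   5 x 2 bytes =   10 bytes
    QVector<qint32> ints;     //   3 x 4 bytes =   12 bytes
    QVector<float>  samples;  // 250 x 4 bytes = 1000 bytes
};

int main()
{
    QFile file("C:/Users/tasik/Documents/Qt_Projects/raw_le.sgy");
    if (!file.open(QIODevice::ReadOnly))
        return 1;
    uchar *memory = file.map(0, file.size());
    if (!memory)
        return 1;
    const qint64 recordSize = 10 + 12 + 1000;
    const int nRecords = int(file.size() / recordSize);
    QVector<Record> records(nRecords);
    for (int r = 0; r < nRecords; ++r) {
        const uchar *p = memory + qint64(r) * recordSize;
        Record &rec = records[r];
        for (int i = 0; i < 5; ++i)
            rec.shorts.append(qFromLittleEndian<qint16>(p + 2 * i));
        for (int i = 0; i < 3; ++i)
            rec.ints.append(qFromLittleEndian<qint32>(p + 10 + 4 * i));
        for (int i = 0; i < 250; ++i) {
            float f;                                   // copy the 4 bytes with memcpy to
            std::memcpy(&f, p + 22 + 4 * i, sizeof f); // avoid a misaligned float access
            rec.samples.append(f);
        }
    }
    file.unmap(memory);
    std::cout << records.size() << " records read" << std::endl;
}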
-
I forgot to mention that in the previous result the file C:/Users/tasik/Documents/Qt_Projects/raw_le.sgy was on the SSD. But when I put the file on the internal (local) HDD there was no difference in timing.
-
@Please_Help_me_D
Yes, that would figure then! Meanwhile, I thought earlier on you were saying the file was on the network, that's a very different situation from a local SSD.... -
@JonB maybe I'm confusing things :)
My computer (laptop) has two storage devices: an SSD and an HDD. Windows is installed on the SSD. But neither of the two has enough free space to store a 300 gigabyte file. So when I work with that file I use an external HDD (a third device) :)
Now I had the idea to check the read speed for this small data set (115 megabytes) when I copy it to the external HDD as G:/raw_le.sgy. Here is the result:
fread/fseek = 0.5 seconds
QFile::map = 0.06 seconds (the only one that didn't change)
std::ifstream/seekg = 0.6 seconds
_read/_lseek = 0.4 seconds
I have to note that when the external HDD is plugged in, the timings are less stable. My laptop starts to work a little harder from time to time...
But the interesting thing is that the external HDD increases the time of all the methods except memory mapping. Of course I only read 0.18 megabytes of the 115 megabyte file, so the fact that the external HDD is attached via USB hardly hurts the resulting timings (it is negligible), and we can see that for such small data it doesn't matter whether it is on an internal or an external device. But when dealing with a big data file (300 gigabytes) I suppose it will play the dominant role in the timings. I can't check that now because I don't have enough space on the laptop, but I'm going to try with 13 or 27 gigabytes of data right now :D that should be interesting, I need to prepare the space)) -
@Please_Help_me_D
Especially with memory mapping, I would think caching could easily affect your test timings. You'd better be timing only from a clean OS boot!
I would also guess that memory mapping might suffer from the size of the file, as caching may be a factor. Testing it with a 100MB file (which can easily be memory cached) may not be representative of the performance when the real file is 300GB.
-
@JonB said in Fastest way to read part of 300 Gigabyte binary file:
I would also guess that memory mapping might suffer from size of file, as caching may be a factor. Testing it with a 100MB file (which can be easily memory cached) may not be representative of performance when the real file will be 300GB.
Yes, that may be true... I need to test it.
I will try to stop most programs (the anti-virus first of all) before launching my app. -
@JonB I got the result. My file is 13.957 gigabytes (about 14 gigabytes). I read 1734480 int values, which is equal to 6.9 megabytes. The result:
SSD internal
- fread/fseek 213 seconds
- QFile::map 86 seconds
HDD internal
- fread/fseek 350 seconds
- QFile::map 216 seconds
HDD external
- fread/fseek 1058 seconds
- QFile::map 655 seconds
So the fastest way is to use memory mapping. And the most crucial factor when working with big data is whether I use the external HDD or the internal SSD/HDD.
But I still need to optimize my QFile::map code that I posted a few messages above. Does anybody know how to do that?
For fread/fseek I used this code:
#include <iostream>
#include <stdio.h>
#include <QtEndian>
#include <QVector>
#include <boost/endian/buffers.hpp>
#include <boost/static_assert.hpp>
#include <armadillo>

using namespace arma;
using namespace boost::endian;

int main()
{
    char segyFile[]{"G:/DATA/STACK1_PRESTM.sgy"};
    FILE *pFile;
    unsigned long int segySize, nCol;
    unsigned short int dataFormatCode, nRow;
    // from byte 3600 on we can represent segyFile as a matrix with nRow rows and nCol columns
    nRow = 2060;
    nCol = 1734480;
    QVector<quint32_le> FFID(nCol);
    pFile = fopen(segyFile, "rb");
    if (pFile == nullptr) {
        std::cout << "Error opening segy-file!" << std::endl;
        return 0;
    }
    // read every (nRow-1)*4-th byte starting from byte 3608; in other words we read only the 3rd row
    wall_clock timer;
    timer.tic();
    fseek(pFile, 3608, SEEK_SET);
    long int offset = (nRow - 1) * 4;
    for (unsigned long int i = 0; i < nCol; i++) {
        fread(&FFID[i], 4, 1, pFile);
        fseek(pFile, offset, SEEK_CUR);
        //std::cout << FFID[i] << std::endl;
    }
    double n0 = timer.toc();
    std::cout << n0 << std::endl;
}
And for QFile::map I used:
#include <QCoreApplication>
#include <QFile>
#include <QVector>
#include <armadillo>   // wall_clock timer
#include <iostream>

using namespace arma;

int main()
{
    char segyFile[]{"G:/DATA/STACK1_PRESTM.sgy"};
    QFile file(segyFile);
    if (!file.open(QIODevice::ReadOnly)) {
        // handle error
    }
    uchar *memory = file.map(3608, file.size() - 3608);
    if (memory) {
        std::cout << "started..." << std::endl;
        wall_clock timer;
        qint64 fSize = file.size();
        qint64 N = 1734480;
        qint64 Nb = 2059 * 4;
        QVector<uchar> FFID(N * 4);
        timer.tic();
        for (qint64 i = 0; i < N; i++) {
            FFID[4 * i]     = memory[i * Nb];     // copy the 4 bytes of the i-th value
            FFID[4 * i + 1] = memory[i * Nb + 1];
            FFID[4 * i + 2] = memory[i * Nb + 2];
            FFID[4 * i + 3] = memory[i * Nb + 3];
        }
        double n0 = timer.toc();
        std::cout << n0 << std::endl;
        std::cout << "finished!" << std::endl;
    }
}
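One thing I might try is reading each value directly as a 4-byte integer from the mapped memory instead of copying single bytes (just a sketch, untested; I assume the values are little-endian as in my other files):

// Same offset and stride as in the code above, but each FFID is converted
// to a quint32 in one step.
#include <QFile>
#include <QVector>
#include <QtEndian>
#include <iostream>

int main()
{
    QFile file("G:/DATA/STACK1_PRESTM.sgy");
    if (!file.open(QIODevice::ReadOnly))
        return 1;
    uchar *memory = file.map(3608, file.size() - 3608);
    if (!memory)
        return 1;
    const int    N  = 1734480;    // number of values to read
    const qint64 Nb = 2059 * 4;   // stride between values, as in the code above
    QVector<quint32> FFID(N);
    for (int i = 0; i < N; ++i)
        FFID[i] = qFromLittleEndian<quint32>(memory + qint64(i) * Nb);
    file.unmap(memory);
    std::cout << FFID.first() << " ... " << FFID.last() << std::endl;
}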
-
@Please_Help_me_D
Out of curiosity, do you build and run your tests in release mode? Compiler optimizations could go a long way in improving the speed, if so far you have only run debug builds.
-
@J-Hilk
Out of interest: I hope you are right, but I don't see how code which spends its time seeking and reading a few bytes out of an enormous file will benefit much from any code optimization. Presumably all the time is being taken in the OS calls themselves.... -
@JonB said in Fastest way to read part of 300 Gigabyte binary file:
Presumably all the time is being taken in the OS calls themselves....
You mean most of the time is lost during the network access calls? Possibly. But I would expect at least a couple of seconds of improvement anyway :)