How to write entire QVector to a binary file?
-
@CJha said in How to write entire QVector to a binary file?:
The only other way is to iterate through a QVector<double>, which is a time-consuming process, especially with millions of data points
Yep. That's what has to be done, and it's what the << serializer does, as @J-Hilk showed you. The only other way would be if you can get the address of contiguous QVector memory and save from there, which I'm guessing can be done. However... if @VRonin's latest post is correct and you're supposed to produce text instead to export, then you cannot help but do it one-by-one....
P.S.
BTW, you'd have to test, but my guess is that code to output data points one-by-one instead of in a contiguous clump is not what would be slow over 1,000,000 points; rather, the size of the output written to file will be what is significant....
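For reference, a minimal sketch of what the << route looks like (assuming an already-open QFile named file and a QVector<double> named vec):

QDataStream stream(&file);
stream.setByteOrder(QDataStream::LittleEndian);  // QDataStream defaults to big-endian
stream << vec;  // writes a quint32 element count, then each double in turn

Note that under the hood this still visits every element, and it prepends a quint32 length, so the resulting file is not a plain array of doubles.
-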
You're correct about the contiguous memory, but QVector can also hold non-trivial objects, so memcpy'ing it out will not work in that case. You can use memcpy in your case if you want, but QDataStream is generic and has no optimizations for such things.
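To make that caveat concrete, here is a minimal sketch (the helper name writeRaw is illustrative) that refuses to compile for element types where a raw dump would be wrong:

#include <QFile>
#include <QVector>
#include <type_traits>

template <typename T>
qint64 writeRaw(QFile &file, const QVector<T> &vec)
{
    // A raw byte dump is only meaningful for trivially copyable element types;
    // anything owning resources must be serialized element by element instead.
    static_assert(std::is_trivially_copyable<T>::value,
                  "raw write is unsafe for this element type");
    return file.write(reinterpret_cast<const char*>(vec.constData()),
                      qint64(sizeof(T)) * vec.size());
}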
-
@VRonin Yes, but writing a .csv file takes much longer than writing a binary file (almost 10 times longer for large data sets); I have tried and tested it. I am gathering data at a much faster rate, up to 1 million doubles per second, and I have to write it to a file continuously for hours, and this file will be analysed in Matlab by researchers. If I write 1 million data points to a .csv file it takes around 4 seconds, while doing the same in a binary file takes around 400 milliseconds.
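For context, the two write paths being compared look roughly like this (a sketch, inside some save function; the file names are illustrative and vec is a QVector<double>):

#include <QFile>
#include <QTextStream>
#include <QVector>

// Text (.csv): every double is converted to characters, one by one
QFile csv("data.csv");
if (csv.open(QIODevice::WriteOnly | QIODevice::Text)) {
    QTextStream out(&csv);
    for (double d : vec)
        out << d << '\n';
}

// Binary: a single bulk write of the vector's contiguous storage
QFile bin("data.bin");
if (bin.open(QIODevice::WriteOnly))
    bin.write(reinterpret_cast<const char*>(vec.constData()),
              qint64(sizeof(double)) * vec.size());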
-
@CJha said in How to write entire QVector to a binary file?:
I am gathering data at a much faster rate, up to 1 million doubles per second and I have to write it to a file continuously for hours
actually stop right here!
if this program is used more than once, you're going to destroy your HD/SSD very quickly!
I'm sure there's another (in-memory) way to hand over those data points
-
@CJha said in How to write entire QVector to a binary file?:
Matlab supports the binary format
Do you have its specification?
-
@J-Hilk I am not sure what you mean by
if this program is used more than once, you're going to destroy your HD/SSD very quickly!
Given that 1 million doubles are 8 million bytes, I think modern processors and disk drives can handle such speed easily.
-
@CJha
You may (well) know more than I, but can Matlab read and process 8MB of new data per second, at the same time as something else is producing it? And, separately, do you really generate 1 million new data points per second?
Also, as @J-Hilk said, wouldn't sending a pipe stream (e.g. a socket?) be better than writing to file and reading back? Does Matlab accept incoming data elsewhere than in a file?
-
@jsulm It is highly compatible. Here are the links for fopen, fread, fseek. In all of these I can specify the format, byte order, size of data (such as int, double, etc.), and quite a few other things.
I don't think Matlab is the restrictive thing here; I can read any type of binary file in Matlab as long as I know how it is written.
-
@CJha
The code to write a QVector to file in the way you want, as fast as possible in one blob instead of one-by-one, is given in e.g. https://www.qtcentre.org/threads/65713-Output-a-QVector-of-doubles-to-a-binary-file-(without-using-QDatastream)?p=289540#post289540 :

qint64 bytesWritten = file.write(reinterpret_cast<const char*>(vec.constData()), sizeof(double) * vec.size());

EDIT: I think you will want reinterpret_cast<> rather than static_cast<> here, as shown in that post, so I have altered the code line to use that.
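For completeness, a self-contained version of that call might look like this (a sketch; the function and file names are illustrative):

#include <QFile>
#include <QVector>

bool saveRaw(const QVector<double> &vec, const QString &path)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    const qint64 expected = qint64(sizeof(double)) * vec.size();
    // One bulk write of the vector's contiguous storage; no per-element loop.
    return file.write(reinterpret_cast<const char*>(vec.constData()), expected) == expected;
}
-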
@JonB No, Matlab is going to read it at a later time. When the data is being generated it is just stored in a binary file for later use by Matlab. And yes, Matlab is slower, but it doesn't matter if it takes 1 second or 1 day to read the file, as the researchers can just start loading the file at night and come back in the morning to work on it (many researchers wait 24 to 36 hours for files to get processed).
And yes, I am generating data at 1 million doubles per second. I am using National Instruments and Measurement Computing DAQ boards, controlling both through Qt and C++, and these boards are capable of generating 1 million doubles per second.
-
@CJha said in How to write entire QVector to a binary file?:
@J-Hilk I am not sure what you mean by
if this program is used more than once, you're going to destroy your HD/SSD very quickly!
Given that 1 million doubles are 8 million bytes, I think modern processors and disk drives can handle such speed easily.
It's not about the speed, it's about the number of times each cell is written. Samsung, for example, says their SSDs are "built to handle 150 terabytes written". At, let's say, 1 million doubles (8 bytes each) per second, that is roughly 0.7 TB per day, so your SSD would be done for in roughly 200 days instead of the approximated 10 years.
Also, you have to coordinate read and write access to the file, so that Matlab and your Qt program do not try to access the file at the same time, with potential data loss etc.
-
@J-Hilk That's a good point, but it's not the case for me, as the data writing and data reading happen at different times. Also, SSD lifetime doesn't matter, as these researchers have lots of funding and an SSD is a cheap item for them. My job is to give them what they ask for, and if they ruin their SSD in 200 days that is up to them (of course I will tell them that it can ruin their SSD fast, but that's all I can do).
-
@CJha
BTW, when you have gotten it working with that file.write(), that is going to be as good as it gets. Since speed seems to be such an issue, you're going to be doing ~1,000,000 points, and your goal is to access the data array and write it out raw, my thought would be: why use a Qt QVector<> at all? For best efficiency/memory usage, would this be a case where simply creating a C++ array of doubles of sufficient size, storing into it directly and writing it out to file would be simpler than wrapping it in QVector<> overheads? Even if those are small, what's the point?
P.S.
If you stick with QVector<>, do make sure you use QVector::resize/reserve(int size) appropriately early (once if possible), I think. What you do not want is to have the QVector keep reallocating/moving existing data as your million points keep arriving....
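A minimal sketch of that reserve-early pattern (the size and acquireSample() are illustrative assumptions):

QVector<double> vec;
vec.reserve(1000000);               // one allocation up front...
for (int i = 0; i < 1000000; ++i)
    vec.append(acquireSample());    // ...so no reallocation while the points arrive
-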
@JonB I agree that a simple C++ array would be faster and easier, as that is the format in which the data arrives in the buffer from the acquisition device.
However, if I write the data to a file in the same thread (in the same callback function where the data is deposited into the buffer from the acquisition device, or in a different function), then since writing takes a long time it blocks the entire thread. This (once in a while) blocks the callback function, which is called each time the required number of data samples has been generated by the acquisition device, resulting in an error.
To solve this problem, I write data to a binary file in a different thread. Now, if I pass the address of the same buffer in which data is deposited, it defeats the purpose of having multiple threads, as I am accessing the buffer that the acquisition device fills, just from a different thread instead of the main one. To overcome this I copy the incoming data from the acquisition device's buffer into a QVector<double>, then send this vector over a Qt::QueuedConnection to my "Writer" thread and write it there. I am not so good with C++ arrays, so I am not quite confident how to achieve this without involving QVector in the process. If you have any idea how I can simplify this process I will be very grateful :)
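For what it's worth, a minimal sketch of that hand-off (the class, signal and member names are illustrative assumptions, not my actual code):

#include <QFile>
#include <QObject>
#include <QThread>
#include <QVector>

// Lives in its own thread; chunks arrive through a queued connection.
class Writer : public QObject
{
    Q_OBJECT
public slots:
    void writeChunk(const QVector<double> &chunk)
    {
        // m_file is assumed to be open for writing already
        m_file.write(reinterpret_cast<const char*>(chunk.constData()),
                     qint64(sizeof(double)) * chunk.size());
    }
private:
    QFile m_file;
};

// Setup, e.g. in the acquisition controller:
qRegisterMetaType<QVector<double>>("QVector<double>"); // required for queued delivery of this type
auto *writer = new Writer;
auto *thread = new QThread;
writer->moveToThread(thread);
thread->start();
QObject::connect(&acquirer, &Acquirer::chunkReady,     // hypothetical acquisition signal
                 writer, &Writer::writeChunk, Qt::QueuedConnection);

The copy the queued connection makes is cheap, because QVector is implicitly shared: only a pointer and a reference count cross the thread boundary, not the million doubles themselves.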