QString Question
-
Hi, I will be using a QString to record program activity, which will be written to a log after program activity is completed. Sometimes the QString may contain thousands of lines of program activity. What's the best way to handle this situation without consuming too many resources? (I don't want too many expensive memory reallocations unless their resource consumption is negligible.)
Thanks :) -
In this situation, instead of keeping all of this in memory, you can use streams in Qt or standard C++ to write your log file as you go, rather than saving everything until the program terminates.
Something like this:
    std::ofstream mylog("logFile.txt");
    mylog << "your string you want to save here\n";
    mylog << "..etc";
    mylog.close();
And save as many lines as you want. If there is any chance your app will close before it reaches mylog.close(), you can call mylog.flush() from time to time to make sure your strings are written out without waiting for the close function to close the stream.
Don't forget #include <fstream>. -
@Crag_Hack
QStrings are stored in a shared lookup area. Somewhere it says they're good at reallocation, in that they reserve extra memory to allow for the possibility. Bit strange to store thousands of lines in a QString, but if that's what you want to do I guess it would be OK. Of course if you can do what @AmrCoder says and write it out as you go along, that will be a lot cheaper, but I presume you have some reason for wanting to keep it all together in memory...
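(A minimal sketch, not from this post: if you do keep it all in one QString and can guess an upper bound on its size, QString::reserve() pre-allocates the buffer so repeated appends don't keep reallocating. The 4 MB figure and the wrapper class are just placeholders.)
    #include <QString>

    // Sketch: reserve capacity once so repeated appends don't reallocate.
    class ActivityLog
    {
    public:
        ActivityLog() { m_text.reserve(4 * 1024 * 1024); } // room for ~4 million characters

        void add(const QString &line)
        {
            m_text += line;
            m_text += QLatin1Char('\n');
        }

        const QString &text() const { return m_text; }

    private:
        QString m_text;
    };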
-
@Crag_Hack No, that is very good. Using QFile with QTextStream follows the same logic as the standard fstream.
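(A minimal sketch of that combination; the helper function and file name are just placeholders.)
    #include <QFile>
    #include <QString>
    #include <QTextStream>

    // Sketch: same idea as the std::ofstream example, using Qt classes.
    void writeLogLine(const QString &line)
    {
        QFile log("logFile.txt");
        if (!log.open(QIODevice::Append | QIODevice::Text))
            return;

        QTextStream out(&log);
        out << line << '\n';
        // QTextStream flushes and QFile closes when they go out of scope.
    }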
-
How come the write happens immediately though? Here's the code:
    //setting up the stream:
    log.setFileName("log.txt");
    log.open(QIODevice::Append | QIODevice::Text);
    logStream.setDevice(&log);
    //...some code later as needed...
    logStream << statusString + "<br>" << endl;
-
@Crag_Hack said in QString Question:
How come the write happens immediately though?
...logStream << statusString + "<br>" << endl;
First, std::endl flushes the buffer, which causes all buffered data to be written to the file. https://stackoverflow.com/questions/4751972/endl-and-flushing-the-buffer
May I ask why you don't want it written immediately?
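(A small sketch, reusing the logStream and statusString from the earlier snippet: a plain "\n" skips the explicit flush, while endl forces one. The stream will still write to disk whenever its internal buffer fills and when the stream is destroyed.)
    logStream << statusString + "<br>" << "\n";   // stays in the buffer, no explicit flush
    logStream << statusString + "<br>" << endl;   // writes '\n' and then flushes immediately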
-
-
@Crag_Hack said in QString Question:
Is this a justified conclusion?
The only way to know is to benchmark. Do not write complicated "optimizations" based on what you think will happen. Instead, base your optimizations on solid data. Tell us: How much faster will your backup be if you defer the log write? (run a test and give us some actual numbers)
If you backup and log to the same spinning disk drive, then there could be a noticeable slowdown. If you backup and log to the same M.2 SSD, or if you backup and log to different drives, you might not notice any slowdown at all. (Note: This is a general principle; this is not guaranteed fact)
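(A rough sketch of how to get such numbers with QElapsedTimer; runBackup() is a stand-in for whatever your backup routine is actually called.)
    #include <QDebug>
    #include <QElapsedTimer>

    // Sketch: time one run with deferred logging and one with immediate
    // logging, then compare the two numbers.
    void timeBackup()
    {
        QElapsedTimer timer;
        timer.start();
        runBackup();                                 // hypothetical backup routine
        qDebug() << "Backup took" << timer.elapsed() << "ms";
    }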
@Crag_Hack said in QString Question:
I don't want to write immediately because this is a data backup program and I don't want logging to happen concurrently with the data backup operations since it will most likely slow them down.
If you find that you do need to wait till the end of the backup process before you write your log to disk, you can append each log line in memory in a QStringList, instead of a single, ever-growing QString.
One last thing to think about: What do you want to happen if the backup gets interrupted halfway? (If the user aborts it, or if power is lost.) Do you want a partial log plus a partial backup? Do you want to discard everything and start from scratch?
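(A minimal sketch of the QStringList approach; the file name and helper functions are placeholders.)
    #include <QFile>
    #include <QStringList>
    #include <QTextStream>

    QStringList logLines;                       // filled while the backup runs

    void logLine(const QString &line)
    {
        logLines.append(line);                  // memory only, no disk I/O yet
    }

    void writeLogToDisk()                       // called once the backup finishes
    {
        QFile file("log.txt");
        if (!file.open(QIODevice::WriteOnly | QIODevice::Text))
            return;

        QTextStream out(&file);
        out << logLines.join(QLatin1Char('\n')) << '\n';
    }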
-
@JKSH I just tried deferred log writing, and even without timing it I can see a significant improvement for backups of 128 and 512 small files.
I'll test more tomorrow.
A less-than-ideal detail for this scenario: it appears that QTextStream automatically flushes after it accumulates a certain amount of data to write. Any way to disable this behavior?
Perhaps I should switch to std::ofstream?
-
I log all of my application's inputs and some engine activity. I can also log performance metrics.
I have a central buffer / queue which has multiple threads calling it with <time>, <command>, <data>.
Every time it's called, once the entry has been added to the log container, it calls another method to process the queue.
This process method has:
    std::unique_lock<std::mutex> lock(mutex, std::defer_lock);
    if (!lock.try_lock())
        return;
This way only one thread at a time writes the messages to disk, and it just keeps writing as fast as it can. It might complete, and then the next message that comes in starts it up again.
I just use QByteArray and QFile to write.
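(This is not the poster's actual code, just a sketch of the pattern described: any thread can append a <time>, <command>, <data> entry, and whichever caller wins the try_lock drains the queue to disk with QFile/QByteArray. All names and the file name are placeholders.)
    #include <QByteArray>
    #include <QFile>
    #include <QMutex>
    #include <QQueue>
    #include <mutex>

    std::mutex writeMutex;                 // guards the "single writer" section
    QMutex queueMutex;                     // guards the entry queue itself
    QQueue<QByteArray> entries;

    void processQueue()
    {
        std::unique_lock<std::mutex> lock(writeMutex, std::defer_lock);
        if (!lock.try_lock())
            return;                        // someone else is already writing

        QFile file("activity.log");
        if (!file.open(QIODevice::Append))
            return;

        for (;;) {
            queueMutex.lock();
            if (entries.isEmpty()) {
                queueMutex.unlock();
                break;
            }
            const QByteArray entry = entries.dequeue();
            queueMutex.unlock();

            file.write(entry);
            file.write("\n");
        }
    }

    void addLogEntry(const QByteArray &time, const QByteArray &command,
                     const QByteArray &data)
    {
        queueMutex.lock();
        entries.enqueue(time + ' ' + command + ' ' + data);
        queueMutex.unlock();

        processQueue();                    // whoever gets the lock does the writing
    }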
-
@Crag_Hack said in QString Question:
Perhaps I should switch to std::ofstream?
As @JKSH already suggested, use a QStringList.
-
@Crag_Hack said in QString Question:
it appears as if the QTextStream is automatically flushing after it reaches a certain amount of information to write.
Yes, this is how stream buffers work. QTextStream and std::ofstream both do it.
@Crag_Hack said in QString Question:
Any way to disable this behavior?
No. You can set the buffer size for std::ofstream, but you cannot ask it to refrain from writing to disk until the backup completes. If you want this guarantee, implement your own in-memory buffer (see my QStringList suggestion).
Further reading: https://stackoverflow.com/questions/10449772/does-c-ofstream-file-writing-use-a-buffer
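(If you do experiment with std::ofstream, the buffer is set through the underlying filebuf. A sketch follows, with an arbitrary 1 MB size; note that pubsetbuf() is only a hint, its effect is implementation-defined, and it must be called before the file is opened.)
    #include <fstream>
    #include <string>
    #include <vector>

    // Sketch: give the ofstream a larger buffer. Data is still flushed
    // to disk whenever that buffer fills up; there is no way to turn
    // the flushing off entirely.
    void writeWithBigBuffer(const std::string &text)
    {
        std::vector<char> buffer(1 << 20);      // 1 MB, arbitrary choice
        std::ofstream mylog;
        mylog.rdbuf()->pubsetbuf(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        mylog.open("logFile.txt", std::ios::app);
        mylog << text << '\n';
    }   // the stream is destroyed before the buffer, so this ordering is safe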