Data Append In File (Speed Optimization)

  • Hello friends and Qt experts.

    I am currently working on writing data into a file, and I am trying to optimize the speed.

    Here is my code:

    void Cls_StoreData::AddData(std::string Data)
    {
        auto begin = std::chrono::high_resolution_clock::now();

        std::fstream LogFile;
        LogFile.open("Log.txt", std::ios::app);
        if (LogFile)
            LogFile << Data << '\r';  // char(13) is '\r'; '\n' is the usual line terminator

        auto end = std::chrono::high_resolution_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
        printf("Time measured: %.3f seconds.\n", elapsed.count() * 1e-9);
    }

    This function takes 6-7 milliseconds to append a single line of data to the file.
    I noticed that closing the file accounts for 5-6 milliseconds of that time.

    I want to optimize this time.

    If you have any solution or idea, please drop it here.

  • Moderators

    This looks totally unrelated to Qt... but here are some hints:

    • keep the file open between writes (don't call LogFile.open() and LogFile.close() all the time)
    • make sure the clock is actually giving you a nanosecond resolution... most often operating systems don't offer such high precision. Which means your 6-7 milliseconds might be incorrect
    • buffer the data and write more of it in one go, instead of doing it in small chunks
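    The first and third hints can be sketched together: open the file once and let the stream's internal buffer batch the small writes. This is a minimal illustration, not code from the original post; the class and member names (`BufferedLogger`, `AddData`, `Flush`) are made up for the example.

```cpp
#include <fstream>
#include <string>

// Sketch of the hints above: the file is opened once in the constructor
// and the std::ofstream stays open, so each AddData() call only appends
// to the stream's internal buffer instead of paying open/close costs.
class BufferedLogger {
public:
    explicit BufferedLogger(const std::string& path)
        : file_(path, std::ios::app) {}   // opened once, kept open

    void AddData(const std::string& data) {
        file_ << data << '\n';            // buffered; no per-call open/close
    }

    void Flush() { file_.flush(); }       // push the buffer to the OS when needed

private:
    std::ofstream file_;                  // closed (and flushed) in the destructor
};
```

    With this shape, the expensive close happens once at shutdown rather than once per line.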

  • @sierdzio said in Data Append In File (Speed Optimization):

    Thanks for the reply.

    Which means your 6-7 milliseconds might be incorrect

    I have already tried with QElapsedTimer as well, and it gives me the same time.

  • Moderators

    Well, the documentation warns:

    On platforms that do not provide nanosecond resolution, the value returned will be the best estimate available.

    Anyway, no worries. Regardless of what the real value is, the hints should help you reduce it.
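    As an aside, the resolution that `std::chrono::high_resolution_clock` advertises on a given platform can be inspected directly. This is only a sketch of that check (the helper name `clock_tick_ns` is made up); note that `period` is the *advertised* tick, and the actual granularity the OS delivers can still be coarser, as the documentation quote above warns.

```cpp
#include <chrono>

// Returns the advertised tick of high_resolution_clock in nanoseconds.
// clock::period is a std::ratio giving seconds per tick, so
// ns_per_tick = 1e9 * num / den. A tick much coarser than a microsecond
// would make sub-millisecond measurements unreliable.
double clock_tick_ns() {
    using clock = std::chrono::high_resolution_clock;
    return 1e9 * clock::period::num /
           static_cast<double>(clock::period::den);
}
```

    Printing `clock_tick_ns()` alongside `clock::is_steady` gives a quick idea of how much to trust a 6-7 ms reading.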

  • The way you are measuring write times is meaningless in modern systems. All you are really measuring is how long it takes to queue the intended operations to the OS. It is not the actual media write time. Optimizing media write operations involves intimate knowledge of the underlying OS capabilities and the specifics of what the media supports, as well as being very dependent upon the "type" of data you are handling: when/how to disable buffered IO, optimal block sizes, command queuing, effects of cache, IO channel bandwidth limitations, etc.

    Append operations are also more costly because if data is not block aligned the write operation may have to read/patch/update the last block of the previous file contents.
