
stream inserter order/aggregation...



  • consider this done by one thread:

    cerr << a << b << c << d << endl;
    

    and this done by another:

    cerr << x << y << z << endl;
    

    Now the question becomes how cerr receives the stream objects. If the list is built as tmp = (a << b << c << endl), followed by cerr << tmp, then the expected output is preserved. If however the breakdown is cerr << a, then cerr << b, then cerr << c, you cannot make any assumptions about the output strings; the behaviour can become platform dependent.

    so we are left with the use-case question:

    ostringstream out;
    out << a << b << c << endl;
    cerr << out.str();
    

    Prudent or overkill?

    In my embedded environment cerr is the default sink for all multiprocessing application debug info, and will be redirected to the syslog() facility, where formatted messages will be prioritized and forwarded.



  • @Kent-Dorfman
    You are using multiple threads to write to cerr at the same time. I am not as expert as some on this, but I suggest you might like to read through the following:


  • Lifetime Qt Champion

    cerr is not thread-safe - so your two cerr outputs can be interleaved.



  • @Christian-Ehrlicher said in stream inserter order/aggregation...:

    cerr is not thread-safe - so your two cerr outputs can be interleaved.

    Actually the reference at cppreference.com says it is. But the question is more about whether stream inserters are evaluated left to right, right to left, or one at a time, as noted in the OP: whether it is necessary to build a temporary line object and send the whole line to cerr, or whether the thread's use of cerr keeps a local line buffer until endl or flush.

    Anecdotal evidence and the mere existence of the stackoverflow article referenced above would seem to indicate that the building of a complete output line via a temp is necessary to avoid intermixing of output.


  • Moderators

    @Kent-Dorfman said in stream inserter order/aggregation...:

    Actually the reference at cppreference.com says it is

    Yes, but each singular operation, not the chain of them. And getting a jumbled mess ain't cool for the one who needs to disentangle it. But anyway, I suppose this was the crux of the question.

    Anecdotal evidence and the mere existence of the stackoverflow article referenced above would seem to indicate that the building of a complete output line via a temp is necessary to avoid intermixing of output.

    Correct. Or, better still: wrap the stream access and serialize the line writing by hand.

    Note: For multiple processes this isn't an issue, because the stream is flushed at the endl. The problem's relevant when you're doing it from the same address space.

    In my embedded environment cerr is the default sink for all multiprocessing application debug info, and will be redirected to the syslog() facility, where formatted messages will be prioritized and forwarded.

    Then, if you don't run this in different threads, you don't need to do anything.

    @Christian-Ehrlicher said in stream inserter order/aggregation...:

    cerr is not thread-safe - so your two cerr outputs can be interleaved.

    It is thread-safe, and the output can still be interleaved. The two things aren't mutually exclusive.

    @Kent-Dorfman said in stream inserter order/aggregation...:

    whether it is necessary to build a temporary line object and send the whole line to cerr, or if the thread use of cerr keeps a local line buffer until endl or flush?

    As mentioned above: the buffer is relevant per process, and is flushed by endl. In the same process you have a problem; in different processes you don't. In different processes there's little sense in talking about thread safety, though. So which is it, in the end?

    but the question is more about whether stream inserters are evaluated left to right, right to left, or one at a time

    Before C++17 - unspecified order of operand evaluation. C++17 and later - left to right. The insertions themselves are always performed one at a time.



  • @kshegunov
    Thank you for all your clarifications.

    Always one at a time.

    @Kent-Dorfman backing up what @kshegunov says, I think you were looking for the explanation in https://stackoverflow.com/a/14649485/489865 which states:

    void f() {
        std::cout << "Hello, " << "world!\n";
    }
    

    is equivalent to

    void f() {
        std::cout << "Hello, ";
        std::cout << "world!\n";
    }
    

    in terms of the potential-interleaving output from multiple threads.

    If you really want to understand the possible behaviour of your endl and "buffering", I suggest you read through the comprehensive answer at https://stackoverflow.com/a/29708161/489865, which makes you realise just how many places potential "buffering" can be going on.

    I have not tried it, and I realise it will not apply in practice to the relatively "short" length of your output, but I wonder what happens if one were to construct a string in memory of, say, 1GB in length and then write(fd, buffer, 1000000000) from multiple threads, perhaps to a slow memory stick. Does one thread block the other here? Does the buffer get divided into "chunks" (e.g. disk block size) at the OS write level such that one thread's "chunks" could be interleaved with those from another thread? Interesting to me, if not of practical relevance to your case.



  • @JonB A slow memory stick is not a way to test it: you invite another level of problems with exclusive access to a file on the mounted media, and with OS buffering and prioritising of access, which is very OS dependent. I'd rather spin two concurrent tasks on the local file system (or shared memory later dumped to a file? idk).



  • @artwaw said in stream inserter order/aggregation...:

    exclusive access to a file on the mounted media

    I'd rather spin two concurrent tasks on the local file system

    I didn't know this. So the file system on a plug-in memory stick is handled differently (exclusivity-wise) from my internal hard disk? I thought they would be equivalent. They are both "mounted", aren't they? So this is to do with one being classified as "removable" media versus the other being non-removable??



  • @JonB Most of the time - yes. On most modern systems - yes. But buffering will be different, which might also influence access order / write-source switching. I would not consider it equivalent to a local file; mounted removable file systems are always something else, it's just that users see them working like a local fs.



  • @JonB said in stream inserter order/aggregation...:

    So this is to do with one being classified as "removable" media versus the other being non-removable??

    In short - yes. It is very OS dependent, though, how network and removable storage is handled and buffered, hence it lights up a huge billboard with a "not uniform" sign for me. Unlike local storage, which is handled in a much more consistent manner.


  • Lifetime Qt Champion

    @kshegunov said in stream inserter order/aggregation...:

    Yes, but each singular operation, not the chain of them.

    Correct - my comment was a bit misleading.



  • OK. "Interleaved" appears in the plethora of answers. Therein is some wisdom that clarifies my assertions. Thanks.

    Embedded doesn't necessarily imply single-process here. The use case will be both multi-process AND multi-threaded within service daemons (embedded Linux), so thread safety is important. The other thing that was missing from my knowledge base was pre-C++17 behaviour: left-to-right operand evaluation not being specified before that.

    Building a temporary stream object and then sending the whole line (with a manual flush or endl) seems to address my issues, but I was trying to determine whether it was a requirement for proper behaviour... it seems to be.

    Thanks!

