The trick is that one QByteArray receives the data (on a signal-slot basis)
A QByteArray can't receive anything; it's an array of bytes - a container, nothing more, nothing less.
while the other container (which is just a copy of the receiver QByteArray) parses the received data, as the symbols come in a random order.
How can you parse something that doesn't have any order? And where is this other container?
I could imagine an extra signal-slot pair that exchanges the data between both containers on demand, but it sounds like I'd have to double-check that I'm actually dealing with the most current data.
I don't follow. Why would this even be needed ...?
There's an extra mutex for the device access, and another one for the shared message container.
Qt imposes thread affinity on QObject instances; you don't need a mutex for device access, and you shouldn't share the message container.
and because I'm dealing with QElapsedTimer, which needs the thread.
In what universe does QElapsedTimer need a thread ...?! It's like saying that an integer needs a thread ... I don't get it.
Then I'd recommend using a dedicated library for that. Qt already uses GStreamer for its backend on Linux, so that might be a possibility. You also have the QtGStreamer module that can help with that.
So is there any magic involved in getting a decent FPS? I am aware that this is more related to the QtGStreamer and RPi2 forums, but I've got nothing to lose trying here too, as I assume others have been here before me.
QML linked to surface via sink
stream directly from the RPi camera on /dev/video0
I've been having a hard time finding out which element names would work when generating a pipeline, especially since I'm using GStreamer 0.10. So I'm thinking the low FPS may be caused by a mix of the element names, the pipeline properties, and the (Qt)GStreamer version?
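One way to isolate the problem is to test the pipeline outside Qt first with gst-launch, pinning the resolution and framerate in the caps - unconstrained caps negotiation is a common cause of low FPS. A hypothetical 0.10-era pipeline; element names, caps, and sink are assumptions to verify with `gst-inspect-0.10` on the target:

```shell
# Sketch only: v4l2src, ffmpegcolorspace and xvimagesink are GStreamer 0.10
# element names; confirm availability with gst-inspect-0.10 before relying on them.
gst-launch-0.10 -v v4l2src device=/dev/video0 \
    ! "video/x-raw-yuv,width=640,height=480,framerate=30/1" \
    ! ffmpegcolorspace \
    ! xvimagesink sync=false
```

If this runs at full rate but the Qt-embedded pipeline doesn't, the element names and properties are fine and the bottleneck is on the sink/surface side.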
Also, does QtGStreamer 0.10 support black-and-white (grayscale) video?