Inter-Process Communication: QLocalServer/QLocalSocket vs QSharedMemory/QSystemSemaphore
-
Consider two processes A and B running on the same machine. These processes need to exchange information. Process A will send messages (probably XML-based) to Process B, and Process B will then send replies to Process A. Process B might also send messages to Process A without being triggered by a request from A (e.g. if some error happens in Process B, it needs to notify process A about that).
The only experience I have with inter-process communication as explained on https://doc.qt.io/qt-5/ipc.html is TCP/IP communication, but that is for processes running on different machines in a network. If both processes are running on the same machine, then apparently I have the choice between:
- QLocalServer/QLocalSocket
- QSharedMemory/QSystemSemaphore
I have looked at the IPC Examples at https://doc.qt.io/qt-5/examples-ipc.html but still cannot decide what would be the best choice for my case.
Can someone summarize the most important differences between the QLocalServer/QLocalSocket and the QSharedMemory/QSystemSemaphore approach? What are the typical situations to which these IPC techniques apply? What would be the best option for my case, or is the choice simply a matter of taste? -
@Bart_Vandewoestyne
While you await a better answer: I could be mistaken(!), but I think we had this question a while back. And I think some expert (looks like it was @KroMignon) said that QLocalServer/QLocalSocket had the advantage that you didn't have to worry about sequencing/mutexing access to the data being exchanged? Separately, semaphores (QSystemSemaphore) are only for counting/locking, not for exchanging arbitrary messages. -
In fact, you could also stick with TCP/IP sockets (UDP or TCP), even if all processes run on the same machine.
The choice between sockets (local or TCP/UDP) and shared memory depends on what you want to do:
- if you need big data sets to be accessible to all processes, shared memory is the way to go
- if you only want to exchange small pieces of data (updates) or commands/replies, sockets are the best choice
Using sockets (local or "standard" TCP/IP) also has the advantage that you don't have to deal with memory locks, and so you don't create deadlock situations between processes, which are very complex to debug!
my 2cts.
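For the command/reply case, a minimal QLocalServer/QLocalSocket sketch might look like the following. This is an assumption-laden illustration, not code from the thread: the server name "ipc-demo" and the XML payloads are made up, Qt 5.7+ is assumed for QDataStream transactions, and client and server are shown in one process only for brevity:

```cpp
#include <QCoreApplication>
#include <QLocalServer>
#include <QLocalSocket>
#include <QDataStream>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    // Process B: listen on a named local endpoint.
    QLocalServer server;
    QLocalServer::removeServer("ipc-demo");      // remove a stale socket file, if any
    server.listen("ipc-demo");
    QObject::connect(&server, &QLocalServer::newConnection, [&server]() {
        QLocalSocket *conn = server.nextPendingConnection();
        QObject::connect(conn, &QLocalSocket::readyRead, [conn]() {
            QDataStream in(conn);
            in.startTransaction();               // QDataStream frames each QByteArray
            QByteArray xmlRequest;
            in >> xmlRequest;
            if (!in.commitTransaction())
                return;                          // message not complete yet, wait
            QDataStream out(conn);
            out << QByteArray("<reply>ok</reply>");
        });
    });

    // Process A (same process here only for brevity): connect, then send.
    QLocalSocket client;
    QObject::connect(&client, &QLocalSocket::connected, [&client]() {
        QDataStream out(&client);
        out << QByteArray("<request>do-something</request>");
    });
    client.connectToServer("ipc-demo");

    return app.exec();
}
```

Because QDataStream length-prefixes each QByteArray, neither side has to invent its own message framing on top of the byte stream.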
-
@Bart_Vandewoestyne to be a bit more confusing :D the documentation page is a bit outdated; we now also have:
https://doc.qt.io/qt-5/qtremoteobjects-index.html
Which, if you want to literally emit signals and call slots in other processes, is probably the Qt way to do it 🤓
-
I have two questions regarding your answer, @KroMignon:
First question:
In fact, you could also stick with TCP/IP sockets (UDP or TCP), even if all processes run on the same machine.
but then why on earth were QLocalServer and QLocalSocket ever introduced??? Just by looking at the names of these classes, I would conclude that if you know both processes are on the same machine, it is recommended to use the QLocal* classes instead of the QTcp* ones, no?
Second question:
The choice between sockets (local or TCP/UDP) and shared memory depends on what you want to do:
- if you need big data sets to be accessible to all processes, shared memory is the way to go
- if you only want to exchange small pieces of data (updates) or commands/replies, sockets are the best choice
Well, the biggest chunk of data that needs to be sent from Process A to Process B is the binary content of a TIFF file. That does not happen very often (it depends on manual user action, so maybe a few times a day). These TIFF files are typically about 5 MB in size, but I've seen 21 MB too.
For the rest, most of the communication will probably be XML being sent back and forth. Note, by the way, that the TIFF content is sent as base64-encoded data in the XML that is sent from A to B. Does all this qualify as 'big data sets' (shared-memory solution) or 'small pieces of data' (socket solution)? Given this info, what approach would you recommend? -
@Bart_Vandewoestyne said in Inter-Process Communication: QLocalServer/QLocalSocket vs QSharedMemory/QSystemSemaphore:
but then why on earth were QLocalServer and QLocalSocket ever introduced??? Just by looking at the names of these classes, I would conclude that if you know both processes are on the same machine, it is recommended to use the QLocal* classes instead of the QTcp* ones, no?
No. Think broader: you can have usage patterns that permit parts of the software infrastructure to be installed on the same or a different host.
-
@Bart_Vandewoestyne said in Inter-Process Communication: QLocalServer/QLocalSocket vs QSharedMemory/QSystemSemaphore:
but then why on earth were QLocalServer and QLocalSocket ever introduced???
Local sockets have lower overhead than "standard" TCP/IP sockets, so they give better performance. But they only work locally, on the same machine, and cannot be routed. Which is what you want.
Extract from the QLocalSocket documentation:
On Windows this is a named pipe and on Unix this is a local domain socket.
Does all this qualify as 'big data sets' (shared-memory solution) or 'small pieces of data'?
My definition was a little bit confusing, sorry for that. My point was: if both (or more) processes need read/write access to the same big chunk of data (for example a big matrix), then shared memory would be the best in terms of performance, because you don't have to maintain multiple copies of the matrix.
My preferred way is the socket way; it doesn't make much difference whether you use LocalSocket or a "standard" TCP/IP socket, the software interfaces are very similar.
There is also Qt Remote Objects for easy RPC implementation, but I have never used it, so I can't say much about it. -
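To illustrate the shared-memory alternative mentioned above, a minimal QSharedMemory sketch (the key "matrix-demo" and the helper name are arbitrary, and error handling is reduced to a bool):

```cpp
#include <QSharedMemory>
#include <cstring>

// One process creates the block; any other process attaches to the same key.
// lock()/unlock() serialize access between processes (a system semaphore is
// used internally) -- this is the locking discipline the socket approach avoids.
bool writeBlock(const char *data, int size) {
    QSharedMemory shm("matrix-demo");
    if (!shm.create(size) && !shm.attach())   // create, or attach if it already exists
        return false;
    shm.lock();
    std::memcpy(shm.data(), data, size);
    shm.unlock();
    return true;
}
```

Every reader and writer must follow the same lock()/unlock() protocol; forgetting it in one place is exactly the kind of deadlock-or-corruption bug that is very hard to debug.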
Local sockets have lower overhead than "standard" TCP/IP sockets, so they give better performance. But they only work locally, on the same machine, and cannot be routed. Which is what you want.
OK. So basically, if you know both Process A and B will always run on the same host, then use the QLocalServer/QLocalSocket approach for performance reasons. However, if there is a possibility that somewhere in the near or far future Process A and B might run on different hosts, then you might consider the QTcpServer/QTcpSocket approach. With that, you will lose some performance, but at least your application is 'future-proof' in the sense that A and B can also easily run on different hosts.
My definition was a little bit confusing, sorry for that. My point was: if both (or more) processes need read/write access to the same big chunk of data (for example a big matrix), then shared memory would be the best in terms of performance, because you don't have to maintain multiple copies of the matrix.
OK. I think I understand what you mean. If Process A and B more or less operate on the same set of large data, and you want to avoid sending that data back and forth, preferring instead to have one shared copy of it, then go for the QSharedMemory/QSystemSemaphore approach. If, however, what you have is more like a request/reply protocol (like I will probably have), then it's better to use the QLocalServer/QLocalSocket approach.
My preferred way is the socket way; it doesn't make much difference whether you use LocalSocket or a "standard" TCP/IP socket, the software interfaces are very similar.
After this discussion, it now also becomes more and more clear to me that the QLocalServer/QLocalSocket approach is probably what I need (I don't expect Process A and B to run on different hosts in the near or far future). -
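One way to keep the different-hosts door open anyway: since QLocalSocket and QTcpSocket both derive from QIODevice, the protocol code can be written against the base class, so only the connection setup changes if the transport ever does (sendXmlMessage is a hypothetical helper, not Qt API):

```cpp
#include <QIODevice>
#include <QDataStream>
#include <QByteArray>

// Transport-agnostic send: works unchanged with a QLocalSocket* or a
// QTcpSocket*, because both inherit QIODevice. QDataStream writes a
// length prefix followed by the payload bytes, so framing is handled too.
void sendXmlMessage(QIODevice *socket, const QByteArray &xml) {
    QDataStream out(socket);
    out << xml;
}
```

With this structure, switching from QLocalServer/QLocalSocket to QTcpServer/QTcpSocket later means touching only the code that creates and connects the socket objects.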
@artwaw said in Inter-Process Communication: QLocalServer/QLocalSocket vs QSharedMemory/QSystemSemaphore:
No. Think broader: you can have usage patterns that permit parts of the software infrastructure to be installed on the same or a different host.
Yes, in the case where you want to keep the option open of installing the software on different hosts, you must go for the QTcpServer/QTcpSocket approach. In the case where you know all your processes will always run on the same machine, then for performance reasons it is better to use the QLocalServer/QLocalSocket approach. Agree? -
@Bart_Vandewoestyne But of course.