Using QTcpSocket without event loop and without waitForReadyRead()
-
Hi,
I have two programs, one with a GUI and one with no GUI or event loop. I want to send messages over TCP from the GUI application to the non-GUI application.
By using waitForReadyRead() in the non-GUI application this actually works completely fine. However, the non-GUI application is a simulator which needs to run fairly close to real-time, so having to wait 1 msec is not ideal.
I've tried calling bytesAvailable() and read() directly on the socket, but this does not seem to work. I've only been able to make it work by using waitForReadyRead(), and this function has a minimum waiting time of 1 msec.
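Roughly, this is what I'm attempting on the receiving side (simplified sketch, not my exact code):

```cpp
// Inside the simulator's main loop, with no event loop running:
if (socket->bytesAvailable() > 0) {              // always reports 0 bytes for me
    const QByteArray data = socket->read(socket->bytesAvailable());
    // ... try to build a message from the bytes ...
}
```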
Should it be possible to call read() on the socket directly? Is it possible to force whatever it is waitForReadyRead() does to the socket? (I'm not sure it actually does anything, I just don't understand why I have to call it for read() to work.)
Just to make it clear. Everything works as intended, and may even be good enough, using waitForReadyRead() with 1 msec waiting time, however it would be preferable to not have to wait 1 msec, or not have to wait at all.
Any ideas?
-
@Obi-Wan said in Using QTcpSocket without event loop and without waitForReadyRead():
Just to make it clear. Everything works as intended, and may even be good enough, using waitForReadyRead() with 1 msec waiting time, however it would be preferable to not have to wait 1 msec, or not have to wait at all.
It would indeed, but TCP has latency: data takes time to move from one place to the other. You can't call read() directly simply because there is nothing to read; the receiver has not received anything yet.
P.S.
If you can, just connect a slot (even a lambda) to the readyRead signal instead of halting your app to wait for something to arrive (but you'll need an event loop for that).
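For example, something along these lines (a minimal sketch, assuming socket is a pointer to an already-connected QTcpSocket and that an event loop is running):

```cpp
// Called whenever new bytes arrive; requires a running event loop.
QObject::connect(socket, &QTcpSocket::readyRead, socket, [socket]() {
    const QByteArray data = socket->readAll();   // take whatever has arrived so far
    // ... parse the message from the bytes ...
});
```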
-
Thank you!
@VRonin said in Using QTcpSocket without event loop and without waitForReadyRead():
It would indeed but TCP has latency, data takes time to move from one place to the other. You can't call read() directly simply because there is nothing to read, the receiver did not receive anything yet.
This is the part I don't understand. If the issue is latency, why can't I wait "manually"? Why is there a difference between calling waitForReadyRead( 1 msec ) and just waiting 1 msec in real time and then calling read? I've tried calling read() every 10 seconds and there is still no data on the socket.
I'm imagining the socket to be this thing that exists somewhere in the OS. At some points in time the GUI application writes to this socket and the data is then stored in some buffer there. The non-GUI application continuously probes this socket for data, and when there is something there it will read the bytes and try to build a message from them.
It now seems to me that I have to set the socket in some kind of "waiting-state" for the time elapsed to have an effect, but this seems strange, so I expect some part of my view of things is wrong!
-
@Obi-Wan the underlying issue is your missing event loop. Without it,
QAbstractSocket::flush
is not called and no data is sent over the network. From the documentation: Call this function if you need QAbstractSocket to start sending buffered data immediately. The number of bytes successfully written depends on the operating system. In most cases, you do not need to call this function, because QAbstractSocket will start sending data automatically once control goes back to the event loop. In the absence of an event loop, call waitForBytesWritten() instead.
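For illustration, on a side that has no event loop the sending code would need something like this after write() (a minimal sketch; host, port and timeouts are illustrative):

```cpp
#include <QTcpSocket>

// Minimal sketch of sending without an event loop.
void sendWithoutEventLoop(const QByteArray &message)
{
    QTcpSocket socket;
    socket.connectToHost("127.0.0.1", 12345);   // illustrative host/port
    if (!socket.waitForConnected(1000))         // blocking connect, no event loop needed
        return;

    socket.write(message);                      // only queues the bytes in Qt's buffer
    socket.flush();                             // hand the buffered bytes to the OS now
    socket.waitForBytesWritten(50);             // block (up to 50 ms) until they are written
}
```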
-
Actually, the answer is in the QIODevice details:
Certain subclasses of QIODevice, such as QTcpSocket and QProcess, are asynchronous. This means that I/O functions such as write() or read() always return immediately, while communication with the device itself may happen when control goes back to the event loop. QIODevice provides functions that allow you to force these operations to be performed immediately, while blocking the calling thread and without entering the event loop. This allows QIODevice subclasses to be used without an event loop, or in a separate thread:
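In practice that means using the blocking waitFor...() functions on the receiving side when there is no event loop, along these lines (a minimal sketch; host, port and timeouts are illustrative):

```cpp
#include <QTcpSocket>

// Minimal sketch of the synchronous (blocking) style without an event loop.
QByteArray receiveBlocking()
{
    QTcpSocket socket;
    socket.connectToHost("127.0.0.1", 12345);   // illustrative host/port
    if (!socket.waitForConnected(1000))
        return {};

    // read() alone would return nothing here; waitForReadyRead() is what
    // actually pumps the incoming bytes into the socket's read buffer.
    if (socket.waitForReadyRead(1))
        return socket.readAll();
    return {};
}
```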
-
Thank you very much everyone, it seems @J.Hilk found the solution!
I've read this before, but didn't grasp the meaning of it. My understanding now is that QTcpSocket is an inherently asynchronous class that I'm trying to use synchronously, particularly:
while communication with the device itself may happen when control goes back to the event loop
I.e., it's fine to call read() synchronously, but it won't achieve anything because the communication with the underlying device doesn't occur. We need to force this communication by "pretending" for a short while to have an event loop, and we do this by calling waitForReadyRead() which so happens to have a minimum wait time of 1 msec.
If I want to have a shorter wait time, I would need to create my own TCP socket class.
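So the working approach, as I understand it, looks roughly like this inside the simulator loop (simplified sketch, not my exact code; socket is the connected QTcpSocket):

```cpp
// Once per simulation step: let the socket do its device communication.
if (socket->waitForReadyRead(1)) {   // the 1 msec wait I'd like to avoid
    const QByteArray data = socket->readAll();
    // ... build a message from the bytes ...
}
```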
-
@Obi-Wan said in Using QTcpSocket without event loop and without waitForReadyRead():
waitForReadyRead() which so happens to have a minimum wait time of 1 msec.
I'm not sure this is the case. The argument of waitForReadyRead() is the maximum wait time; if the data is received sooner, it returns sooner.
@Obi-Wan said in Using QTcpSocket without event loop and without waitForReadyRead():
If I want to have a shorter wait time, I would need to create my own TCP socket class.
Not really, see https://forum.qt.io/topic/78715/why-my-timer-is-not-emit-timeout-signal/7 but instead of connecting the timeout signal of a QTimer you connect the readyRead signal of the socket.
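A rough sketch of that idea, assuming the simulator drives its own loop and a QCoreApplication exists (illustrative only, not the code from the linked topic; host and port are made up):

```cpp
#include <QCoreApplication>
#include <QTcpSocket>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);            // provides the event dispatcher that drives socket signals

    QTcpSocket socket;
    socket.connectToHost("127.0.0.1", 12345);    // illustrative host/port
    socket.waitForConnected(1000);

    QObject::connect(&socket, &QTcpSocket::readyRead, &socket, [&socket]() {
        const QByteArray data = socket.readAll();
        // ... build a message from the bytes ...
    });

    for (;;) {
        // ... one close-to-real-time simulation step ...
        QCoreApplication::processEvents();       // delivers readyRead without blocking the loop
    }
}
```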