QTcpServer with multiple persistent connections
-
Hi :-)
I'm working on a server-client solution for an apparently simple task: Basically, I want different instances of my program running on different computers to keep a list of strings in sync. One instance is the server and all the others are clients. When the list is changed on the server, it tells all clients to e.g. add or remove an entry; if something is changed on a client, the client asks the server for the change, and if everything is okay, the server applies it and notifies all clients.
I've never done such stuff before, so perhaps I'm on a completely wrong track ;-) Here's my approach:
I basically took the Fortune Server example as a starting point. The problem is that, in this example, a connection is established, the client gets its data, and the connection is terminated afterwards. I need a bidirectional and persistent connection (I do, yes?!). So I thought about simply keeping a map of QTcpSocket pointers so that I know who's connected and can send data to all clients. Something like this:
Server::Server()
{
    m_tcpServer = new QTcpServer(this);
    connect(m_tcpServer, &QTcpServer::newConnection, this, &Server::newConnection);
}

void Server::newConnection()
{
    QTcpSocket *socket = m_tcpServer->nextPendingConnection();
    m_streams[socket] = QTextStream();
    connect(socket, &QAbstractSocket::disconnected, [this, socket]() { socketDisconnected(socket); });
    connect(socket, &QAbstractSocket::readyRead, [this, socket]() { readData(socket); });
}

void Server::socketDisconnected(QTcpSocket *socket)
{
    m_connections.remove(socket);
    socket->deleteLater();
}

void Server::readData(QTcpSocket *socket)
{
    // Some code to receive data from the specified socket
    // and emit a signal if the transaction is complete
}
I then saw the Threaded Fortune Server example. Not that I have any clue about multithreading yet, but I thought creating a thread for each connection might be a good idea. So I took most of the code and tried to connect via telnet.
Everything works fine, but the problem with this example is still that it uses one-time connections. So when FortuneThread::run() is started, the socket created there goes out of scope at once and the connection is terminated. I tried to replace the socket variable with a member variable and with a pointer. In both cases, as soon as I connect, I get
QObject: Cannot create children for a parent that is in a different thread. (Parent is ..., parent's thread is ..., current thread is ...)
This is surely due to my lack of knowledge about threads. So … do I really have to mess with multithreading to create a server-client implementation with persistent connections? Do I even need persistent connections? Or, asking the other way round: is one of the two approaches even the right way?
Thanks to everybody pointing me in the right direction!
-
@l3u_
I'm on vacation(!) so these are some brief observations on your "theoretical approach". Using separate threads is a hassle if you are new to this. As you are discovering, you have to be careful which thread each Qt object lives in, otherwise you get the kind of error message you are seeing.
Using Qt's signals & slots, in my opinion you do not need multiple threads for this. You may get different advice from others, but I don't feel a thread-per-connection design scales well: if you have 100 clients you'll need 100 threads on the server, and that's not good. I'd stick to one.
Yes, in the end you will need to maintain permanent connections. For one thing the server needs to be able to send a message to each client and it needs a connection for that. (You could fiddle around with each client waiting for a new connection from the server on their IP address, but I wouldn't.)
To make it "permanent", just do not close the connection at either client or server side and it will persist. Whatever variables you use for your connections, just write code so they do not go out of scope (e.g. use
new
rather than local, stack variables).I think your code suggestion is admirably on the right track.
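Something like this (an untested sketch, not your actual code; m_clients is an assumed QList<QTcpSocket*> member and broadcast() is just a name I made up):

void Server::newConnection()
{
    QTcpSocket *socket = m_tcpServer->nextPendingConnection();
    m_clients.append(socket); // keeping the pointer in a member list keeps the connection alive
    connect(socket, &QAbstractSocket::disconnected, this, [this, socket]() {
        m_clients.removeAll(socket);
        socket->deleteLater();
    });
}

void Server::broadcast(const QByteArray &message)
{
    // the server can push the same payload to every connected client at any time
    for (QTcpSocket *client : m_clients)
        client->write(message);
}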
At both the client and server sides you need to be looking for whatever causes your "strings changed" event. If you can implement that via signals & slots too, you should be able to do that plus the socket stuff all in one thread. Otherwise you might have to have just two threads: one for (all) the TCP/IP stuff, another just for the "strings changed" handling. But in that case you will have to deal with passing messages across threads, and you'll be back to having to be careful about this.
One final note: what you want shouts of "broadcast/multicast". If you could use UDP instead of TCP/IP for this it would all be much more efficient/simpler to code. However, UDP is not "reliable": you have no way of knowing whether clients actually receive broadcast messages or not. Therefore it is not suitable for what you want, even though you might be tempted. Just saying!
-
@JonB Thanks a lot, this really helps! So I'll investigate my original approach without threads further, and leave multithreading as a future challenge ;-)
After all, it's only intended to handle a few clients (2 or 3), not 1,000,000, so it doesn't have to be optimized that much. The important thing is to get it to work at all. Let's see what I can manage.
-
@l3u_
In a word, yes. Especially if you're new, avoid threads. If you do turn out to need them, you must read up on how Qt requires you to deal with passing object references across thread boundaries, which is what your initial error message was all about.
P.S.
To be clear: at the server, you can re-use the same readyRead etc. slot repeatedly for each client connection, all within one thread. If you need to, you can identify which client is actually sending the data from the QTcpSocket instance that emitted the QAbstractSocket::readyRead signal.
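One way to do that (an untested sketch using QObject::sender(); it assumes the readyRead signal is connected directly to the slot rather than through a lambda):

void Server::onReadyRead()
{
    // the socket whose readyRead fired tells you which client is talking
    QTcpSocket *socket = qobject_cast<QTcpSocket *>(sender());
    if (!socket)
        return;
    const QByteArray chunk = socket->readAll(); // data from exactly this client
    // ... hand 'chunk' to whatever per-connection parsing you keep for 'socket'
}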
-
@JonB Quite some time passed, but finally, I'm working on my server again ;-)
I'm struggling a bit with the bidirectional communication. At the moment, it's about implementing some handshake when initializing a connection.
On the client side, it's no problem. I implemented it following the fortune example. It's nice and working: everything the server writes to the socket can be read out of it on the client side.
The problem is the client's answer to the server.
To have a list of all connections, I implemented the newConnection slot like this:

void ServerDialog::newConnection()
{
    QTcpSocket *socket = m_server->nextPendingConnection();
    connect(socket, &QAbstractSocket::readyRead, [this, socket] { readData(socket); });
    ...
    m_pendingConnections.append(socket);
    ...
}
This way, I know which socket has data to read in my readyRead slot.
The problem is implementing that slot. Do I understand correctly that when reading data from a socket, one can't be sure to get it all at once? Thus, one does a startTransaction() on the stream, reads data and tries to do a commitTransaction(), and if this fails, it's not all there yet. Like in the fortune example – right?
So I think I need a QDataStream for each connection. I tried to create a QMap<QTcpSocket *, QDataStream> member variable that I fill when a new connection comes in, like so:

m_streams[socket] = QDataStream();
m_streams[socket].setDevice(socket);
m_streams[socket].setVersion(QDataStream::Qt_5_6);
Doing so, I get the following error:
error: 'QDataStream& QDataStream::operator=(const QDataStream&)' is private within this context
So I tried to use pointers (with a QMap<QTcpSocket *, QDataStream *>) instead, like so:

QDataStream *stream = new QDataStream;
stream->setDevice(socket);
stream->setVersion(QDataStream::Qt_5_6);
m_streams[socket] = stream;
This actually works, but as soon as I try to read data from the stream (in my readyRead slot), I get
error: no match for 'operator>>' (operand types are 'QDataStream*' and 'QString')
So … I can't use a QDataStream object in a QMap, and I can't read data from a QDataStream pointer … I'm a bit lost at this point … is this even the right approach?!
-
You may want to look at @VRonin's chat example for some ideas too.
https://wiki.qt.io/WIP-How_to_create_a_simple_chat_application
-
@kshegunov That looks like a far better example for my use case. I'll have a look at it! Thanks for the link :-)
-
As this is a work in progress, we also appreciate feedback on what isn't clearly spelled out or could be improved ... :)
PS. Note that the second part isn't really complete.
-
Speaking of the code you referenced: in ChatClient::onReadyRead(), an infinite loop is started (via for(;;) { ... }). This will block the event loop until all data is transmitted, won't it?
I read that with TCP communication via a QDataStream, one can't expect all data to arrive at once. I'm not sure I understood what's happening there correctly.
The fortune example handles readyRead() without an infinite loop. What happens there? Calling the function again won't reuse the stack variables created there, because they are deleted as soon as the function finishes (which won't happen in the infinite-loop approach). So will the data pile up inside the socket, and each time a chunk arrives, the readyRead signal is emitted, and stream >> variable takes what it has, and one can check whether it's all there?
And what if the transmission slows down or aborts, will the infinite loop wait forever for the missing data?
-
Speaking of the code you referenced: in ChatClient::onReadyRead(), an infinite loop is started (via for(;;) { ... }). This will block the event loop until all data is transmitted, won't it?
First, I'm an amateur, so I cannot promise this is right! I've looked at that code and it does seem strange at first reading. I don't think it's working the way you think. I think the for loop does exit, and consequently the whole readyRead(), when there's insufficient data:

} else {
    // the read failed, the socket goes automatically back to the state it was in before the transaction started
    // we just exit the loop and wait for more data to become available
    break;
}
The transaction-y calls then make it so the socket behaves as if the data read so far had not been read yet. You then re-enter readyRead() when more data arrives. The new call to the slot doesn't maintain state about what was left over from last time, it simply gets to re-read the data from last time.
Which then means all your questions about behaviour with multiple clients are answered: the server does not block on one client.
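If you want to see that rollback behaviour in isolation, here is a small self-contained sketch (untested, and it uses a plain QBuffer instead of the chat example's socket, so the names are mine, not the wiki's):

#include <QBuffer>
#include <QDataStream>
#include <QDebug>

int main()
{
    QBuffer device;                       // stands in for the receiving socket
    device.open(QIODevice::ReadWrite);

    QDataStream in(&device);
    in.setVersion(QDataStream::Qt_5_7);

    // Serialize one complete message, but let only the first 5 bytes "arrive".
    QByteArray full;
    QDataStream out(&full, QIODevice::WriteOnly);
    out.setVersion(QDataStream::Qt_5_7);
    out << QString("hello persistent connections");

    device.write(full.left(5));
    device.seek(0);

    QString text;
    in.startTransaction();
    in >> text;
    qDebug() << in.commitTransaction();   // false: incomplete, read position rolled back

    // Now the rest "arrives": append it without disturbing the read position.
    const qint64 readPos = device.pos();
    device.seek(device.size());
    device.write(full.mid(5));
    device.seek(readPos);

    in.resetStatus();                     // clear the ReadPastEnd status from the failed attempt
    in.startTransaction();
    in >> text;                           // re-reads the message from its start
    qDebug() << in.commitTransaction() << text; // true "hello persistent connections"
    return 0;
}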
-
Okay, after re-reading the code, I think the infinite loop is there in case multiple JSON "packets" are sent and more than one is available in a single readyRead call.
Man, that TCP stuff is quite complicated ;-) But it's surely a huge help that this wiki entry exists. It really should go into the official docs as an example for persistent connections. The fortune example is probably a good basic example for a single-use one-way connection, but there are so many things one has to think about if it's a two-way persistent connection, and especially if there are multiple ones …
-
@l3u_ said in QTcpServer with multiple persistent connections:
This will block the event loop until all data is transmitted, won't it?
Nope, it has a different purpose. Also note that the socket itself is buffered (on the Qt side), meaning the data has already been read from (or is waiting to be written to) the actual device, but it sits inside Qt's buffer, in memory.
And what if the transmisson slows down or aborts, will the inifinite loop wait forever for the missing data?
No. The loop is there for another reason. You have to realize that when working with the network (this is specific to TCP, not Qt) the data may arrive in chunks, granted, as you have already read, but there are 2 distinct cases:
- The data you sent may be split into pieces. You have to handle partially reading a data block - this is where the QDataStream transactions come into play. You start the transaction and try to read the data that was sent from the other side. Without going into too many details about the internals, if the data stream couldn't read the whole block, QDataStream::commitTransaction will fail and will "restore" the internal buffer of the socket as if no reading had occurred at all. That's why it's enough in this case to do nothing - just wait for more data.
- The data you sent may be coalesced into one bigger piece. This means that once you read a message successfully, i.e. QDataStream::commitTransaction succeeds, and you process it, there may still be more in the socket's buffer. It may be a whole other message waiting there, but you're only going to get one readyRead emission, so you need to make sure you loop around to read that second message as well. This is where the infinite loop comes into play. It just ensures all completely received messages that are sitting in the socket's buffer are read. If there is only one message, the code in the loop does one iteration; otherwise it loops around until it has extracted all completely received messages. (There's a short sketch of the loop's shape right below this list.)
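The shape of the loop is then roughly this (an untested sketch; Client, m_stream and handleMessage() are placeholders, not the chat example's actual names):

void Client::onReadyRead()
{
    for (;;) {                        // case 2: more than one complete message may be buffered
        m_stream.startTransaction();
        QString message;
        m_stream >> message;
        if (!m_stream.commitTransaction())
            break;                    // case 1: message only partially arrived, wait for more data
        handleMessage(message);       // placeholder for whatever processing you do
    }
}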
Okay, after re-reading the code, I think the infinite loop is there in case multiple JSON "packets" are sent, and it's more than one available in one "readyRead" call.
Yes. That is correct.
But it's surely a huge help that this wiki entry exists.
I'm sure @VRonin is going to be happy to read that (as well as I am).
@JonB said in QTcpServer with multiple persistent connections:
I don't think it's working the way you think. I think the for loop does exit, and the whole readyRead() consequently, when there's insufficient data.
Yes, you think correctly. ;)
The transaction-y calls then make it so the socket behaves like the data read so far has not been read yet. You then re-enter the readyRead() when more data arrives. Then the new call to the slot doesn't maintain state about what was left over from last time, it simply gets to re-read the data from last time.
Correct again.
-
Without going into too much details about the internals, if the data stream couldn't read the whole block QDataStream::commitTransaction will fail
That's the bit I wasn't sure about and had to guess! So, "Without going into too much details", how does it decide that (and where is it documented for us plebs)? Like, does it implement a timeout of some nature?
-
@JonB said in QTcpServer with multiple persistent connections:
how does it decide that (and where is it documented for us plebs)?
The method docs, I guess? Too lazy to check to be honest.
The decision is made based on the QDataStream status. Whenever you're writing to or reading from the stream, e.g. when you provide serialization for your objects (think << and >> operators), you can (and should) set the status if the (de)serialization failed (there's a short sketch of that below the list). This is already done for the Qt types anyway. Here, simply reading a byte array, the validation is rather straightforward (internal to Qt):
- Try to read the buffer size (i.e. an integer). If there's not enough data on the QIODevice to read an integer, the operation fails.
- Try to read the actual buffer contents (you already know the size from 1). If there's not enough data on the device, the operation fails.
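For example, a custom operator>> could report failure roughly like this (an untested sketch; ListChange is a made-up type, not something from Qt or the chat example):

#include <QDataStream>
#include <QString>

struct ListChange {
    quint8 action = 0;   // e.g. 0 = add, 1 = remove
    QString entry;
};

QDataStream &operator<<(QDataStream &out, const ListChange &change)
{
    return out << change.action << change.entry;
}

QDataStream &operator>>(QDataStream &in, ListChange &change)
{
    in >> change.action >> change.entry;
    if (change.action > 1)                          // a value we never write
        in.setStatus(QDataStream::ReadCorruptData); // mark the (de)serialization as failed
    return in;
}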
Like, does it implement a timeout of some nature?
No, it's simpler than that. The data is buffered in the socket object, so you just try to read it from that memory buffer. If reading fails, the transaction fails; if reading succeeds, the buffer is "shortened" by the data you read. The buffer is filled asynchronously by Qt whenever data comes through the underlying system socket.
-
@kshegunov
OK, so in a word, QDataStream::commitTransaction() fails/rolls back if there is not enough data already there at the instant it is called, period.
BTW: so if I decide to transfer a 1GB (or more) file content as one QByteArray, and it arrives in chunks, your transaction approach will have to buffer up to 1GB of previously-received bytes, and that's presumably in memory? Hmmmm. Nobody finds that a problem?
-
@JonB said in QTcpServer with multiple persistent connections:
OK, so in a word, QDataStream::commitTransaction() fails/rolls back if there is not enough data already there at the instant it is called, period.
Correct
so if I decide to transfer a 1GB (or more) file content as one QByteArray, and it arrives in chunks, your transaction approach will have to buffer up to 1GB of previously-received bytes, and that's presumably in memory? Hmmmm. Nobody finds that a problem?
In this case you are free to build your own buffer (I'll use the chat example again):
- add a private QTemporaryFile m_dataBuffer; member to ChatClient;
- in the constructor of ChatClient call m_dataBuffer.open();
- modify ChatClient::onReadyRead() as below
void ChatClient::onReadyRead()
{
    const qint64 oldPos = m_dataBuffer.pos(); // save the position in the file
    m_dataBuffer.seek(m_dataBuffer.size()); // go to the end of the file
    m_dataBuffer.write(m_clientSocket.readAll()); // append all the data available on the socket into the file
    m_dataBuffer.seek(oldPos); // go back to the old position
    QDataStream socketStream(&m_dataBuffer); // set the stream to read from the file
    ////////////////////////////////////////////////////////////////////
    // identical to the original version
    QByteArray jsonData;
    socketStream.setVersion(QDataStream::Qt_5_7);
    for (;;) {
        socketStream.startTransaction();
        socketStream >> jsonData;
        if (!socketStream.commitTransaction())
            break;
        QJsonParseError parseError;
        const QJsonDocument jsonDoc = QJsonDocument::fromJson(jsonData, &parseError);
        if (parseError.error == QJsonParseError::NoError) {
            if (jsonDoc.isObject())
                jsonReceived(jsonDoc.object());
        }
    }
    ////////////////////////////////////////////////////////////////////
    if (m_dataBuffer.atEnd()) // we read all the data there was on the file
        m_dataBuffer.resize(0); // clear everything from the file
}
-
@JonB said in QTcpServer with multiple persistent connections:
so if I decide to transfer a 1GB (or more) file content as one QByteArray, and it arrives in chunks, your transaction approach will have to buffer up to 1GB of previously-received bytes, and that's presumably in memory? Hmmmm. Nobody finds that a problem?
Not really, as you already committed a heinous crime against the system: you read 1GB off the hard disk and send it directly to the socket. ;)
Joking aside, what @VRonin said could work. Or you could simply split it up at the peer before sending.
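A rough sketch of what I mean (untested; the 64 KiB chunk size and the empty-chunk end marker are just made up for illustration):

#include <QDataStream>
#include <QFile>
#include <QTcpSocket>

void sendFileInChunks(QTcpSocket *socket, const QString &fileName)
{
    QFile file(fileName);
    if (!file.open(QIODevice::ReadOnly))
        return;

    QDataStream out(socket);
    out.setVersion(QDataStream::Qt_5_7);

    while (!file.atEnd()) {
        const QByteArray chunk = file.read(64 * 1024); // 64 KiB per message
        out << chunk;      // each chunk is its own length-prefixed QByteArray
    }
    out << QByteArray();   // empty chunk as a crude end-of-transfer marker
}

The receiver then reads one QByteArray per transaction (exactly as in the loop above) and appends each chunk to a file instead of holding the whole thing in memory.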
-
@kshegunov
I realise @VRonin's reply is the way to go, and have already upvoted that.
However, your
as you already committed a heinous crime against the system: you read 1GB off the hard disk and send it directly to the socket. ;)
? My "TCP clients" in this case were the ones who read the data off the disk and sent it to my server. How they did that is up to them, I have no knowledge of it; where's the crime? What's that got to do with your approach of having the server receive the 1GB into a memory buffer? And at least the clients, if they did read into memory, only did 1GB each. The server services 100 clients, they're all in the process of sending 1GB each, so my server needs 100GB of memory.... :(
-
@JonB I think what he wanted to say is that your protocol shouldn't allow sending huge packets of data in one big piece at all, and that – if you want to handle such amounts – you will have to implement it properly so that everything is fine. Because if you send 1 GB to a socket using the default behavior (which will probably result in the receiver's socket buffer filling up to 1 GB until you can read it out), you will have had to read it into memory beforehand. And you shouldn't have done that in the first place.