Unsolved QUdpSocket does not trigger ReadyRead signal
-
Just for the sake of simplicity, did you take a look at the examples linked in the QUdpSocket documentation?
Thinking about:
Broadcast Sender
Broadcast Receiver
Multicast Sender
Multicast Receiver
-
@aha_1980 I now realize that I can't even send the data over the socket because of its large size, which is 1 MB. Do you know how I can divide it into chunks of bytes? Or should I create a new topic?
-
What exactly is the goal of your application?
-
@SGaist I want to take pictures from a camera and then send and receive them continuously, in order to implement a real-time video streaming application. The reason I am sending images instead of video is that I will later work on image processing. I realized that the image size is too big for this: 640x480x3 bytes. But I also can't switch to TCP, because I need the speed of UDP.
-
Ok,
I apologize to native speakers for my English; I know it isn't good.
Anyway, if I have understood correctly there is a limit of 1 MB. But that limit aside, this is what I always do:
The server application:
- Sends a fixed-size header with the following information: a security key (so the receiver recognises my flow), the data size, .... and then sends the data
- You can get a pointer to the bytes of a QByteArray with its data() function
- With a loop, send the data in pieces (each < 1 MB) with writeDatagram(); it is easy, you just need an index to move along the data pointer
The client application:
- When it receives your header and recognises it, it will also know the data size
- It reads all the data it receives and reassembles the whole image
For me UDP is a good choice.
I hope this helps.
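The server side of the scheme above can be sketched in plain C++. The header layout, the field names, the magic key, and the 60000-byte chunk size are my assumptions, not anything from the thread; with Qt you would then send each resulting buffer with QUdpSocket::writeDatagram():

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical per-chunk header; the receiver uses the magic key to
// recognise the flow and chunkIndex/chunkCount to reassemble the image.
struct ChunkHeader {
    uint32_t magic;      // security key identifying our flow
    uint32_t frameId;    // which image this chunk belongs to
    uint32_t totalSize;  // size of the whole image in bytes
    uint32_t chunkIndex; // position of this chunk within the frame
    uint32_t chunkCount; // how many chunks make up the frame
};

constexpr uint32_t kMagic = 0x51554450;  // arbitrary "QUDP" tag
constexpr size_t kMaxChunk = 60000;      // stay under the 64 KiB UDP limit

// Split one frame into ready-to-send datagrams (header + payload slice).
std::vector<std::vector<char>> makeDatagrams(const char *data, size_t size,
                                             uint32_t frameId) {
    const size_t chunkCount = (size + kMaxChunk - 1) / kMaxChunk;
    std::vector<std::vector<char>> datagrams;
    for (size_t i = 0; i < chunkCount; ++i) {
        const size_t offset = i * kMaxChunk;
        const size_t len = std::min(kMaxChunk, size - offset);
        ChunkHeader h{kMagic, frameId, static_cast<uint32_t>(size),
                      static_cast<uint32_t>(i),
                      static_cast<uint32_t>(chunkCount)};
        std::vector<char> dgram(sizeof(h) + len);
        std::memcpy(dgram.data(), &h, sizeof(h));
        std::memcpy(dgram.data() + sizeof(h), data + offset, len);
        datagrams.push_back(std::move(dgram));
    }
    return datagrams;
}
```

In a Qt sender you would loop over the result and call something like `socket.writeDatagram(d.data(), d.size(), receiverAddress, port)` for each buffer; a 640x480x3 frame (921600 bytes) splits into 16 datagrams this way.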
-
@CP71 said in QUdpSocket does not trigger ReadyRead signal:
The client application:
- When it receives your header and recognises it, it will also know the data size
- It reads all the data it receives and reassembles the whole image
For me UDP is a good choice.
How do you manage to "read all data" corresponding to "it will know data size" given that you're using UDP?
-
If you want to stream videos then you should follow the standards currently used to minimise bandwidth usage while keeping image quality.
As for video processing, you should give more information about what you want to do.
-
@JonB Ok,
you are right, I think I know what you mean; it all depends on the kind of application you are writing.
Believe me, I have done real-time applications with UDP and they work well.
Ok, TCP is safer but UDP is faster. I'm not a game developer, but as far as I know many games are based on UDP, because it is faster.
Obviously, the client must check the integrity of the data and discard bad data, e.g. with a checksum.
I don't know the goal of @onurcevik, but if a video frame is delayed or lost it will simply be discarded.
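The discard-and-reassemble idea can be sketched in plain C++ like this. It assumes a per-chunk header mirroring the one described earlier (a magic key, frame id, total size, chunk index and count, all my invented names) and a 60000-byte chunk size; in a Qt client each datagram would arrive via QUdpSocket::readDatagram() inside the readyRead slot:

```cpp
#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

// Hypothetical header layout, matching what the sender would prepend.
struct ChunkHeader {
    uint32_t magic, frameId, totalSize, chunkIndex, chunkCount;
};

constexpr uint32_t kMagic = 0x51554450;  // arbitrary flow-identification key
constexpr size_t kMaxChunk = 60000;      // must match the sender's chunk size

struct FrameAssembler {
    struct Partial {
        std::vector<char> bytes;
        uint32_t received = 0;
        uint32_t expected = 0;
    };
    std::map<uint32_t, Partial> partial;  // frameId -> in-progress buffer

    // Feed one datagram; returns true and fills `image` once a frame is
    // complete. Datagrams that are too short, carry the wrong key, or point
    // outside the frame are simply discarded.
    bool feed(const char *dgram, size_t size, std::vector<char> &image) {
        if (size < sizeof(ChunkHeader)) return false;
        ChunkHeader h;
        std::memcpy(&h, dgram, sizeof(h));
        if (h.magic != kMagic) return false;  // not our flow
        Partial &p = partial[h.frameId];
        if (p.bytes.empty()) {
            p.bytes.resize(h.totalSize);
            p.expected = h.chunkCount;
        }
        const size_t offset = static_cast<size_t>(h.chunkIndex) * kMaxChunk;
        const size_t len = size - sizeof(ChunkHeader);
        if (offset + len > p.bytes.size()) return false;  // corrupt chunk
        std::memcpy(p.bytes.data() + offset, dgram + sizeof(h), len);
        if (++p.received == p.expected) {
            image = std::move(p.bytes);
            partial.erase(h.frameId);
            return true;
        }
        return false;
    }
};
```

A real client would also drop entries in `partial` whose frame never completes (e.g. whenever a newer frameId finishes), which is exactly the "discard late frames" policy described above.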
It all depends on whether you prefer speed or completeness of information, and therefore on what the client application must do.
-
@CP71
You may well know more than I do. But as I understand it, UDP can happily lose any packet on the way to the client; that's the point of it. If you split your datagrams as you said and one does not arrive, what happens at the client? It seems to me that the protocol just won't work. Maybe in your "real-world" apps none get missed; I don't actually have experience, I am just interested/concerned. :)
-
@JonB Don’t worry, I am happy to communicate with you; it also lets me practise my English, which is useful for me ;).
I don’t know if I am allowed to post a link to another site or forum, so I avoid it.
I am not saying UDP is better than TCP; I am saying it depends on the goal of your application, each protocol has its scope.
Personally, I would use UDP to stream video, especially if the client only has to show the images.
But that is only my idea.