'Garbage at the end of the document' error on parsing QJsonDocument
-
@mrjj hi,
i receive this:"{\"contacts\":[{\"first_name\":\"\",\"last_name\":\"\",\"nick_name\":\"test\",\"user_name\":\"test\"}],\"event\":1,\"type\":7}{\"conversations\":[{\"id\":68,\"members\":[\"test\"],\"starter\":\"test1\",\"title\":\"\",\"type\":1}],\"event\":3,\"type\":7}" "garbage at the end of the document"
but i expect to see this, if the contacts object parses without any error:
{\"conversations\":[{\"id\":68,\"members\":[\"test\"],\"starter\":\"test1\",\"title\":\"\",\"type\":1}],\"event\":3,\"type\":7}" "garbage at the end of the document"
and if the contacts object parses with an error:
"{\"contacts\":[{\"first_name\":\"\",\"last_name\":\"\",\"nick_name\":\"test\",\"user_name\":\"test\"}],\"event\":1,\"type\":7} "Any error report"
because i send two separate requests for getting contacts and conversations!
-
@returnx
Hello,
If you parse the data each time you get a read on your socket, and your data doesn't arrive in one packet, you'll naturally get an error. You could send an integer from the server side before sending the data through the socket, so on the client side you know how much data to expect. Then you read the data, and only when all of it has arrived (your buffer has a size that matches the integer you sent) do you parse the JSON. Additionally, I see no reason to call QByteArray::simplified() on the client side at all; if such a call is to be made, it certainly should happen before sending the data. Moreover, you are probably getting several packets as one (sockets have buffers, and there is no guarantee that what you send as separate write calls will be read by separate reads). That's why you have to implement a simple protocol by which your server and client communicate.
Kind regards.
-
@returnx
ok seems fine.
As @kshegunov says: are you sure this error comes even when all the data has been sent/read?
It will/might come in several blocks, so on_socket_ready_read() will be called
more than once for, say, "contacts". If you check packet_bytes.size() and
m_socket->bytesAvailable() (when reading), do they match up?
Also please look at
http://doc.qt.io/qt-5/qjsonparseerror.html#offset-var
to get a hint of where it thinks the error is.
-
@mrjj
i checked it!
when i write two times, for example the size of the first data block is 123 bytes and the size of the second is 118, on the client side i received the total of the data (241) just one time! and for this case:
"{\"contacts\":[{\"first_name\":\"test\",\"last_name\":\"test\",\"nick_name\":\"test\",\"user_name\":\"test\"}],\"event\":1,\"type\":7}{\"conversations\":[{\"id\":68,\"members\":[\"test1\"],\"starter\":\"test\",\"title\":\"\",\"type\":1}],\"event\":3,\"type\":7}"
the error offset is: 112
-
@returnx
well it seems u get both docs in the same string,
so there are 2 JSON root elements, which I think it will not like.
if u paste it into
https://jsonformatter.curiousconcept.com/
you will see it's not happy with that, so my guess
is that the qt json parser doesn't like it either.
so you should not send them as you do now.
Or at least do as @kshegunov suggests and make a small protocol so u know which is which.

{
  "contacts":[
    {
      "first_name":"test",
      "last_name":"test",
      "nick_name":"test",
      "user_name":"test"
    }
  ],
  "event":1,
  "type":7
}
/////// new root
{
  "conversations":[
    {
      "id":68,
      "members":[
        "test1"
      ],
      "starter":"test",
      "title":"",
      "type":1
    }
  ],
  "event":3,
  "type":7
}
-
@returnx
Hello,
Then your data is buffered before sending/receiving (either on the server or the client side, it doesn't really matter). This is very characteristic of network communications. The simplest way to solve your problem is either to send a header with each write (as in my suggestion of an integer) or to split the written data with a special character; for example, the NULL character should suffice.
Kind regards.
-
@returnx
Ok, super. Please mark as solved :)
Final note:
When you deploy,
there might be more reads for the full json string.
The code you showed will try to parse on each read. Make sure the code can
handle that the data comes in blocks and does not bail out if a parse fails.