QSerialPort: delayed signal on new received bytes



  • Hi

    I moved from QextSerialPort to QSerialPort because QextSerialPort was dropping bytes sometimes.
    With QSerialPort I have not had dropped/missing bytes so far.
    However, I have a different issue that I never had with QextSerialPort.

    Sometimes, it seems as if the readyRead signal is somehow delayed.

    Good Case:

    • Sending about 150 bytes to the device
    • readyRead signal is emitted after n bytes (n varies)
    • reading these n bytes and buffering
    • readyRead after about 10ms (time varies < 50ms)
    • reading the rest of the bytes
    • evaluating full protocol

    Bad Case:

    • Sending about 150 bytes to the device
    • readyRead signal is emitted after n bytes (n varies)
    • reading these n bytes and buffering
    • readyRead after about 70ms (time varies > 50ms)
    • ... this is where the rest of the bytes would be readable

    Since the protocol framing defines that no gaps of more than 50ms are allowed within a frame, I have to discard the data in the second case; the frame is lost and has to be resent.
    In my simulation environment, there is absolutely no gap between the bytes. All the bytes are perfectly back-to-back.
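    The 50ms framing rule described above can be sketched in plain C++ (a hypothetical frame assembler purely for illustration; the class and its names are not part of any real API):

```cpp
#include <cassert>
#include <chrono>
#include <string>

// Hypothetical sketch of the 50 ms framing rule: bytes arriving with an
// inter-chunk gap of more than 50 ms cannot belong to the same frame,
// so the partially assembled frame is discarded.
class FrameAssembler {
public:
    using Clock = std::chrono::steady_clock;

    // Feed one received chunk; 'now' is injected so the gap logic
    // can be tested without real hardware.
    void feed(const std::string &chunk, Clock::time_point now) {
        using namespace std::chrono;
        if (!buffer_.empty() && now - lastChunk_ > milliseconds(50))
            buffer_.clear();             // gap too large: frame is lost
        buffer_ += chunk;
        lastChunk_ = now;
    }

    const std::string &buffer() const { return buffer_; }

private:
    std::string buffer_;
    Clock::time_point lastChunk_{};
};
```

    Injecting the timestamp instead of calling the clock inside feed() makes the gap rule easy to unit-test.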

    Does somebody have an idea why this behavior occurs, especially in comparison with QextSerialPort, and how this can be solved?
    Do I need a separate thread for the serial port stuff? I hope not, because I have never used QThread before.
    I am on Qt 4.6.3 on eLinux, baudrate is 115200.
    Thanks



    bq. Does somebody have an idea why this behavior occurs

    Most likely the ~70 msec delays are introduced by the OS scheduler. The important point is that the OS in use is not a real-time OS, so such delays can occur.

    Usually, such timing requirements in a protocol are meant for hardware with a real-time OS, and imho it is unwise to try to implement such a protocol on a non-real-time OS.

    I can advise the following:

    1. Try to increase the priority of the thread that uses QSerialPort.

    2. Try to use the system read timeouts, since by default QSerialPort doesn't use them. E.g. after opening the device, get the descriptor via handle() and set the VMIN/VTIME parameters on Linux (see man termios), or call "SetCommTimeouts":http://msdn.microsoft.com/en-us/library/windows/desktop/aa363437(v=vs.85).aspx on Windows. Theoretically this gives you the chance to control the timing more precisely.

    3. Do not use Qt && QtSerialPort at all, in favor of pure native C/C++ code.
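    For point 2 of the list above, a minimal sketch of setting VMIN/VTIME on Linux (shown here on a pseudo-terminal so it runs without serial hardware; with QSerialPort you would pass the descriptor returned by handle() after opening the port — this helper function is an illustration, not QSerialPort code):

```cpp
#include <cassert>
#include <cstdlib>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

// Sketch: configure VMIN/VTIME on a tty descriptor. For example,
// VMIN = 0, VTIME = 5 means: read() returns immediately if data is
// available, otherwise it waits up to 0.5 s (VTIME is in tenths of
// a second). With QSerialPort, 'fd' would be serialPort.handle().
bool setReadTimeouts(int fd, cc_t vmin, cc_t vtime) {
    termios tio{};
    if (tcgetattr(fd, &tio) != 0)
        return false;
    tio.c_cc[VMIN] = vmin;
    tio.c_cc[VTIME] = vtime;
    return tcsetattr(fd, TCSANOW, &tio) == 0;
}
```

    Note that QSerialPort's event-driven readyRead path does not go through these blocking-read timeouts, so this mainly matters if you read from the descriptor directly.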



  • kuzulius, thanks for trying to help.

    To 1:
    As far as I can see, QSerialPort does not run any parts of its code in a separate thread under Linux, or did I miss something?
    How would I increase priority in this case?

    To 2:
    I've read out the values with stty -a -F /dev/ttyS1, which returned 0 for min and 0 for time. As far as I can tell, this already is the best setting.

    To 3:
    Sorry, I'm not sure what you mean exactly.



  • bq. How would I increase priority in this case?

    Move the QSerialPort to a separate QThread and set the "thread priority":http://qt-project.org/doc/qt-4.8/qthread.html#setPriority.
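    To illustrate what thread priority means on Linux (an assumption for illustration, not QSerialPort or QThread code): QThread::setPriority ultimately maps onto the platform's thread scheduling parameters, which on Linux are the pthread policy and priority:

```cpp
#include <cassert>
#include <pthread.h>
#include <sched.h>

// Read back a thread's scheduling policy and static priority.
// Under the default SCHED_OTHER policy the static priority is always 0
// and the kernel decides dynamically; switching to a real-time policy
// (SCHED_FIFO/SCHED_RR) via pthread_setschedparam usually requires
// elevated privileges.
bool getThreadScheduling(pthread_t thread, int *policy, int *priority) {
    sched_param param{};
    if (pthread_getschedparam(thread, policy, &param) != 0)
        return false;
    *priority = param.sched_priority;
    return true;
}
```

    This is why raising the priority of a normal (SCHED_OTHER) thread gives only limited control over the ~70 ms scheduling delays discussed above.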

    bq. As far as I can tell, this already is the best setting.

    Quite possibly those 0/0 values were left over from QSerialPort. You can restart your board and run stty (before using QSerialPort) to see the default VMIN/VTIME. Then you can change VMIN/VTIME as you want.

    bq. Sorry, I’m not sure what you mean exactly.

    I mean: do not use the heavyweight QtSerialPort && Qt in applications which require precise timing for communication (I/O). E.g. use pure C/C++ code && other lightweight GUI toolkits.. :)



  • [quote author="kuzulis" date="1405088316"]
    Move the QSerialPort to separate QThread and set "thread priority":http://qt-project.org/doc/qt-4.8/qthread.html#setPriority.

    [/quote]

    Sounds like quite a piece of work, since I have no experience with QThread .. never used it before.

    Thanks



    What I don't get is the difference between QextSerialPort and QSerialPort. Since they both rely on the native Linux I/O functions (AFAIK, wrapped by QIODevice), how come they show such a huge difference in when they emit the readyRead signal?

    I am not really sure that putting the serial port in a separate thread is the best solution. The recommended best practice is also not to put it in a separate thread for asynchronous communication. When I see that QextSerialPort does not have this delay and emits readyRead practically immediately, I strongly feel that something in the QSerialPort code is wrong.



  • bq. I strongly feel that something with the QSerialPort code is wrong.

    You can try to find the problem spot yourself. The main places (differences) which I would check are:

    1. Try to add this code (from the QextSerialPort "sources":https://code.google.com/p/qextserialport/source/browse/src/qextserialport_unix.cpp, line 86):

    @
    // Disable the special control characters so they are received
    // as plain data instead of being interpreted by the tty layer.
    const long vdisable = ::fpathconf(fd, _PC_VDISABLE);
    currentTermios.c_cc[VINTR] = vdisable;
    currentTermios.c_cc[VQUIT] = vdisable;
    currentTermios.c_cc[VSTART] = vdisable;
    currentTermios.c_cc[VSTOP] = vdisable;
    currentTermios.c_cc[VSUSP] = vdisable;
    @

    during device initialization.

    2. Try to change the QSocketNotifier event handling from direct events to the signal/slot mechanism. QSerialPort overrides the QSocketNotifier::event() method, but QextSerialPort uses the QSocketNotifier::activated(int) signal.

    3. Maybe there is something excessive in the QSerialPortPrivate::readNotification() method. Maybe it is reasonable to simplify it to:

    @
    // Always buffered, read data from the port into the read buffer
    qint64 newBytes = readBuffer.size();
    qint64 bytesToRead = policy == QSerialPort::IgnorePolicy ? ReadChunkSize : 1;

    if (readBufferMaxSize && bytesToRead > (readBufferMaxSize - readBuffer.size())) {
        bytesToRead = readBufferMaxSize - readBuffer.size();
        if (bytesToRead == 0) {
            // Buffer is full. User must read data from the buffer
            // before we can read more from the port.
            return false;
        }
    }
    
    char *ptr = readBuffer.reserve(bytesToRead);
    const qint64 readBytes = readFromPort(ptr, bytesToRead);
    
    if (readBytes <= 0) {
        QSerialPort::SerialPortError error = decodeSystemError();
        if (error != QSerialPort::ResourceError)
            error = QSerialPort::ReadError;
        else
            setReadNotificationEnabled(false);
        q->setError(error);
        readBuffer.chop(bytesToRead);
        return false;
    }
    
    readBuffer.chop(bytesToRead - qMax(readBytes, qint64(0)));
    
    newBytes = readBuffer.size() - newBytes;
    
    // If read buffer is full, disable the read port notifier.
    if (readBufferMaxSize && readBuffer.size() == readBufferMaxSize)
        setReadNotificationEnabled(false);
    
    // only emit readyRead() when not recursing, and only if there is data available
    const bool hasData = newBytes > 0;
    
    if (!emittedReadyRead && hasData) {
        emittedReadyRead = true;
        emit q->readyRead();
        emittedReadyRead = false;
    }
    
    if (!hasData)
        setReadNotificationEnabled(true);
    @



  • I'll read through that.
    One additional question: I assume it doesn't make a difference if I use
    @QByteArray read(qint64 maxSize)@

    instead of
    @virtual qint64 readData(char * data, qint64 maxSize)@

    to read the data, even though readData() seems to read from a ring buffer and read() reads from the QIODevice?



    The readData() is a protected method of QIODevice; you can't use it in your user code. The common chain is the following: read() -> readData() -> read from the ring buffer.

    The implementation of QSerialPort::readData() always reads data from the internal ring buffer, but the implementation in QextSerialPort tries to read from the internal buffer and then directly from the device.

    So, it makes a difference between read() and readData(). :)
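    The difference can be sketched with a toy model (hypothetical names and plain strings for illustration; this is not actual Qt code):

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Toy model of the two read strategies described above. 'device' stands
// for bytes still sitting in the OS driver; 'buffer' is the class's
// internal ring buffer.

// QSerialPort style: serve bytes from the internal buffer only; the
// buffer is refilled later by the read notifier.
std::string readBufferOnly(std::string &buffer, std::size_t maxSize) {
    std::string out = buffer.substr(0, maxSize);
    buffer.erase(0, out.size());
    return out;
}

// QextSerialPort style: drain the internal buffer first, then read the
// remainder directly from the device.
std::string readBufferThenDevice(std::string &buffer, std::string &device,
                                 std::size_t maxSize) {
    std::string out = readBufferOnly(buffer, maxSize);
    std::size_t want = maxSize - out.size();
    out += device.substr(0, want);
    device.erase(0, std::min(want, device.size()));
    return out;
}
```

    With the buffer-only strategy, bytes that have arrived at the driver but not yet been pulled into the buffer are invisible to read() until the next notification, which matches the delayed readyRead symptom discussed in this thread.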



    Arrgghh ... trying to modify the QSerialPort code as you suggested.
    Last week I could build successfully on Windows and Linux; now the build fails even on Windows with
    @DEVINST was not declared in this scope@



    Try to replace DEVINST with DWORD. Most likely your compiler does not ship the "cfgmgr32.h" file (or the file is in another place).



    Solved the DEVINST issue and added the termios settings. No change in the behaviour. There are only 3 diffs left in the output of stty -a -F /dev/ttyS1 between QextSerialPort and QSerialPort. The following 3 options are deactivated (-) with QextSerialPort and activated with QSerialPort:
    @ignpar
    echoe
    echok
    @

    and I don't think that this will make the difference. Now going to look into your further suggestions.



    Found that I have bytes dropping with QSerialPort as well. It seems to be some issue with the underlying native Linux layer or similar.
    Coming back ...



    bq. Found that I have bytes dropping with QSerialPort as well.

    That can't be.

    Btw: this is another issue, not related to this thread.



  • You're right, it's not exactly related to the topic of this thread.
    I just wanted to update this thread with the fact that I am not investigating the delayed readyRead further as long as bytes are being dropped.
    I actually switched to QSerialPort on my Qt 4.6.3 setup because I thought that the byte dropping was an issue in QextSerialPort. As it turns out, the issue is somewhere else (HW / underlying Linux).

