Good resource on how to use QNetworkAccessManager & QNetworkReply correctly?



  • I'm struggling to figure out how best to use these two classes, or maybe there's a better way I'm not aware of. In my app, I need to be able to log events back to a web server (as long as it is accessible; if not, write the URL to a local file to be processed later, when the network IS accessible). I thought the best way would be to use these two classes similar to this:

    @
    void Logger::log(QString msg)
    {
        QString url = loggerUrl + "/" + msg;

        if (DEBUG > 8)
            qDebug() << "log url:" << url;

        request.setUrl(QUrl(url));
        request.setRawHeader("User-Agent", "MVFSCA 1.0");

        QNetworkReply *reply = manager->get(request);

        connect(reply, SIGNAL(readyRead()), this, SLOT(slotReadyRead()));
        connect(reply, SIGNAL(error(QNetworkReply::NetworkError)), this, SLOT(slotError(QNetworkReply::NetworkError)));
        connect(reply, SIGNAL(finished()), this, SLOT(replyFinished()));
    }
    @

    I thought I had it working when I used the error signal from QNetworkReply to trigger the local write, but that doesn't work reliably.

    In my test scenario, I started the program and began to log data to the web server. I simulated an issue by stopping the web server on the remote host. It seems this causes the GET requests to stack up in memory, because after ~30 seconds of web server downtime I started it back up and the program spewed out GET requests to the web server to catch up. But if I keep the web server down for, say, 20 minutes, the program never seems to recognize that the network resource is available again, and its memory usage keeps rising.

    I'm fairly new to C++/Qt, but I'm not totally green and of course I can read. If there's an example or a more detailed explanation of what's going on with these two classes or even a better way to show how to correctly do what I want to do, I'd welcome the opportunity to study it.

    Thanks for any help you can give me.



  • I think your first problem is that you are adding a new set of signal/slot connections every single time you write a message to the log; that's why your memory usage keeps going up. What you probably really want to do is check the accessibility of the network each time, maybe using something as simple as the "NetworkAccessibility property":http://developer.qt.nokia.com/doc/qt-4.8/qnetworkaccessmanager.html#networkAccessible-prop, and use that to decide where to send the message.



  • Thanks for the speedy reply CH!!!

    I thought the signal/slot connections were a bit much, but I didn't think the QNetworkAccessible property would tell me if the web server was operational.

    I'll give your suggestion a whirl and see what I can kludge together.



  • The property won't tell you if the server itself is down, so maybe checking just that isn't good enough for your purposes. And signals and slots may well be a nice way of doing this, since they play nicely with asynchronous network requests (that is, in your case, when log() gets called multiple times before any response is received from the server). I think that, if it's set up properly, Qt will deal with removing the connections when the QNetworkReply object is deleted. Unfortunately, however, I don't think you can count on the NetworkError signal being raised consistently for your case. You could consider manually timing out the request with a QTimer, maybe (pretty ugly, though).



  • Counting on the error signals of the network access manager isn't too reliable. The approach has some weak spots anyway, even when the network and server are OK: if you emit many messages in a very short time, you could run into connection congestion on the server side, and/or the messages might not arrive at the server in order.

    I would collect the messages in an internal FIFO using [[Doc:QQueue]] and send only one message at a time. Together with a timeout timer, as suggested by Chris, you can stop transmitting the messages over the network and fall back to some local storage. As you still have the pending messages in the queue, you won't lose them. With the send-everything-at-once approach, if you have sent out, say, 10 messages and the first one times out after 30 seconds, you have lost that one and most likely the other 9 after it as well.



  • I'm not sure how to quantify "many". There will be anywhere from 15 to 20 clients running the app, and each can generate 1-20 messages per minute. The hardware is all on the local network, so when it's up, responses should be pretty fast. The server hardware is pretty beefy as well. I'm testing the app in a LARGER-THAN-LIFE virtual world example: I want to be sure it can handle the worst possible case, so when it's in production and barely breaking a sweat, there won't be any issues.

    I was just thinking about some sort of queue mechanism as you described. I'm just not sure how to reliably detect when there is an issue, stop processing the queue via the network, and direct it to local disk instead. Each client shouldn't be sending more than one message (log entry) at a time to the web server.



  • Detecting an issue is probably something as simple as attaching a QTimer and having it check the upload progress of your message every second or so: after a few seconds of no change, call that a timeout, and start dumping the queued items to disk instead. Unless your messages are very large or very frequent, I'd think that you could easily queue up, say, 5-10 seconds' worth before you start storing them locally.



  • I appreciate the guidance! I'll see if I can mash something together and update the post if I can get it to work the way I need it to.

    I wish there were more content (examples) out there for QNetwork guidance. I like to build off examples.



  • I found a great article in the "Qt Quarterly" that talked about GUIs freezing during long operations. It seemed to fit my scenario, so here's how I decided to implement it to solve my logging to a web server problem:

    @
    void Logger::log(QString msg)
    {
        QString url = loggerUrl + "/" + msg;

        if (DEBUG > 8)
            qDebug() << "log url:" << url;

        request.setUrl(QUrl(url));

        managerReply = manager->get(request);

        timer->start(5000);  // give the web server 5s to respond
        loop->exec();        // QEventLoop created during setup to make the async request synchronous via SIGNAL/SLOT

        if (timer->isActive()) {
            // upload completed before the timeout
            qDebug() << "upload complete";
            timer->stop();
        } else {
            // timed out
            qDebug() << "manager timed out";
            managerReply->abort();
            managerReply->deleteLater();

            // log to local disk and process later
        }
    }
    @



  • Well, the saga continues. Now I have a memory leak (albeit a small one) with either the QTimer or the QEventLoop. I've run it through valgrind to trim out as much as I could, and it did find some issues that I fixed, but it still leaks if I use this routine with the timeout code. Any ideas where I'm losing memory? It's ~400KB per 100 executions of this function.
    @
    bool Logger::logNetwork(QString &msg)
    {
        QTimer timer;
        timer.setSingleShot(true);

        QEventLoop loop;

        connect(&timer, SIGNAL(timeout()), &loop, SLOT(quit()));
        connect(manager, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));

        QString logUrl = msg;

        if (!msg.startsWith("http")) {
            QStringList fields = msg.split(" ");
            logUrl = loggerUrl;
        }

        if (DEBUG > 8)
            qDebug() << "log url:" << logUrl;

        request.setUrl(QUrl(logUrl));
        QNetworkReply *managerReply = manager->get(request);

        timer.start(5000); // give the web server 5s to respond
        loop.exec();

        if (timer.isActive()) {
            // log upload complete
            try {
                qDebug() << "stop timer";
                timer.stop();
            } catch (int i) {
                qDebug() << "timer.stop() emitted an error";
            }

            timer.deleteLater();
            return true;
        } else {
            // timed out
            networkSuccess = false;
            managerReply->abort();
            managerReply->deleteLater();
            return false;
        }
    }
    @



  • Calling timer.deleteLater() is an absolute no-go and can crash your application. The timer is deleted on leaving logNetwork(), as it is stack-allocated; you must not call delete (or deleteLater()) on a stack-based variable.

    In the if branch that handles the non-timeout case, you do not delete managerReply (i.e. you never call managerReply->deleteLater()).



  • Actually, I added the timer.deleteLater() further into the debug process; I originally didn't have it in there. As for managerReply->deleteLater(): I thought I read somewhere in the docs that I HAD to delete it myself, and that the way to do it was with deleteLater()? At any rate, in the initial testing no timeout is reached, so the "else" section isn't hit. I know this because I'm returning "true" during the initial test.

    When I comment out the timer & loop stuff and just return true after manager->get(request), it doesn't seem to hold on to memory. Granted, I'm looking at top output and watching the web server log to determine usage and run count, but in the tests w/o the timer & loop, I don't see the same memory climb I see with them.

    I'm wondering if maybe it has to do with there being a QList<T> in either of these classes that isn't being deleted, cleared or reassigned? I think I read something about the difference between QList & QVector and how they allocate on the heap.



  • top is not a reliable means to check for memory leaks.

    For when to delete - this is plain C++ basics:

    • if you allocate on the stack, you don't need to do anything:
      the object is deleted once it goes out of scope
    • if you allocate on the heap using <code>new</code>, you must call <code>delete</code>
      ** directly: <code>delete obj;</code>
      ** delayed: <code>obj->deleteLater();</code> if it's QObject based
      ** indirectly: by passing a parent QObject to the QObject based object;
      the parent will eventually delete the child object once it is deleted itself
    • if you allocate an array on the heap using <code>new[]</code>, you must call <code>delete[]</code>
    • if you allocate on the heap using <code>malloc</code>, you must call <code>free</code>

    And you must never, ever mix the three top-level approaches. Calling delete on a malloc'ed object, calling free on a new'ed object, or calling free/delete on a stack-based object leads to earth destruction eventually. This is one of the very rare occasions where "never" is appropriate in programming C++.

    For the QList question: Sorry, the crystal balls are still on new year's holidays, so please accept that we cannot comment on this.



  • OUCH! The back of my hands are starting to turn red! ;-)

    So reading between the lines, I should ignore the fact that I can see a repeatable difference in the memory usage (albeit from top) between using the QTimer/QEventLoop and not?!?!?



  • It's kind of difficult.



  • [quote author="shorawitz" date="1325697752"]OUCH! The back of my hands are starting to turn red! ;-)

    So reading between the lines, I should ignore the fact that I can see a repeatable difference in the memory usage (albeit from top) between using the QTimer/QEventLoop and not?!?!?[/quote]

    top is not a tool to debug memory leaks! It's ok to use it to monitor the overall trend of the memory consumption of an application, but nothing more please! If you need to hunt down memory leaks, use a dedicated tool like valgrind. Qt Creator even has support for it.



  • I did use valgrind, but I didn't see any noticeable information pointing to a specific class/function that was causing the memory leak. I'll keep fiddling with the various options to valgrind and update my Qt libs to something newer than 4.7.2.

