QNetworkAccessManager HTTP(S) upload failure

Solved · General and Desktop
4 Posts · 2 Posters · 250 Views
warped-rudi
#1

I'm trying to upload some files to a web service proxied by nginx. This works flawlessly as long as the file size is relatively small (i.e. < 64k). When sending larger files, the remote server simply closes the connection. This happens for both PUT and POST methods. The Qt version is 5.15.2.
Uploading the same files with cURL works fine. However, I noticed that cURL uses the HTTP 'Expect: 100-continue' mechanism to defer the content transmission. Could that be the reason? It looks like QNetworkAccessManager doesn't do that.
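For reference, a minimal sketch of the kind of upload code in question (the URL, file name, and content type below are placeholders, not taken from the original post):

```cpp
// Minimal QNetworkAccessManager PUT upload sketch (Qt 5).
// URL and file path are placeholders for illustration only.
#include <QCoreApplication>
#include <QDebug>
#include <QFile>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    auto *file = new QFile(QStringLiteral("large-file.bin"));
    if (!file->open(QIODevice::ReadOnly))
        return 1;

    QNetworkAccessManager manager;
    QNetworkRequest request(QUrl(QStringLiteral("https://example.com/upload")));
    request.setHeader(QNetworkRequest::ContentTypeHeader,
                      QStringLiteral("application/octet-stream"));

    // QNetworkAccessManager streams the entire body; it does not send
    // 'Expect: 100-continue' and wait for an interim response the way
    // cURL does.
    QNetworkReply *reply = manager.put(request, file);
    file->setParent(reply);  // release the file together with the reply

    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        if (reply->error() != QNetworkReply::NoError)
            qWarning() << "Upload failed:" << reply->errorString();
        reply->deleteLater();
        app.quit();
    });

    return app.exec();
}
```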


Axel Spoerl
Moderators
#2

@warped-rudi
Please post your code. It's difficult to guess in the dark.

      Software Engineer
      The Qt Company, Oslo


warped-rudi
#3

@Axel-Spoerl The source code is pretty much standard. However, I think I've figured out what the problem is using Wireshark:

I forgot to mention that the service requires authentication. The parameter 'proxy_request_buffering off;' is present in the nginx config, which means nginx streams the HTTP message to the destination service as it arrives, i.e. it does no buffering of its own. The target service replies with HTTP 401 as soon as it has received the headers of the incoming message. However, QNetworkAccessManager insists on pumping out the whole message body, even though at this point it is clear that the data will be discarded. The service and/or nginx do not expect more data to arrive after the 401 was sent to the client and therefore close the connection.

One possible solution is to remove 'proxy_request_buffering off;' from the nginx config. In this case, nginx receives and buffers the whole message before passing it on to the target service, so QNetworkAccessManager no longer has to deal with the premature 401 response. The overhead introduced by this setting is acceptable for me.
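In config terms, the fix amounts to letting the location fall back to nginx's default request buffering. A sketch, with the location path and upstream name as placeholders:

```nginx
# Sketch of the relevant nginx location block; path and upstream are placeholders.
location /upload/ {
    proxy_pass http://backend;

    # Removing 'proxy_request_buffering off;' (or setting it to 'on', the
    # default) makes nginx read the full request body before forwarding it,
    # so the client never receives the backend's early 401 mid-upload.
    proxy_request_buffering on;
}
```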

The second solution seems to be to use Qt 6. A short test with 6.4.3 didn't show the problem at all.


Axel Spoerl
Moderators
#4

@warped-rudi
The only change from Qt 5 to Qt 6 that comes to my mind right away is an internal change to proxy authentication on macOS. A few other things might also have changed in QNetworkAccessManager, but I don't know off the top of my head.
In any case, QNetworkAccessManager should handle proxy buffering just as well as nginx does. And if Qt 6 succeeds in doing the job, solution number 2 is probably the best way to go.


warped-rudi has marked this topic as solved
