
How to avoid tearing with GLWidget?

gi-mar (#1)

I am writing a QGLWidget that continuously updates and displays a QImage in full screen: a QTimer calls the update() function 30 times per second, and this causes flickering...
With OpenGL I remember solving this kind of issue by calling glFlush() and glFinish(), but in this case they do not do the trick.
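The timer setup is roughly this (reconstructed from the description above, not the exact code):

@
// A QTimer firing about 30 times per second, each tick scheduling a repaint.
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(update()));
timer->start(1000 / 30);
@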

I am quite new to Qt, so maybe I am doing the update wrong; here is the implementation:
@
void MyClass::paintEvent(QPaintEvent *event)
{
    glMatrixMode(GL_MODELVIEW);
    QPainter painter(this);
    glDrawPixels(img.width(), img.height(), GL_RGBA, GL_UNSIGNED_BYTE, img.bits());
    painter.end();
}
@

Is there a way to achieve a good frame rate?
    Thank you in advance.

SGaist, Lifetime Qt Champion (#2)

      Hi and welcome to devnet,

Since you are using OpenGL, why are you also using QPainter?


gi-mar (#3)

I found a tutorial somewhere on the net and it seemed to work...
I thought that was the way to display a QImage through a QGLWidget...
Can you point me to a guide or tutorial that explains how to properly display an image using QGLWidget, or OpenGL and Qt in general?

        Thank you in advance,
        gm

BlastDV (#4)

Hi gi-mar. What if you try alternating your drawing by one horizontal line?

I mean, think about it: on the first frame you draw

[Line 0]
[Line 2]
[Line 4]

and on the next frame you draw

[Line 1]
[Line 3]
[Line 5]

Do you get the point? Alternate the drawing and your flickering problem may disappear, or at least fade a little. I'm pretty sure I saw this somewhere, and maybe QGLWidget is doing it already, but it won't take you much time to try it.
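A rough sketch of that interlacing idea, assuming both images share the same size and format (the function and variable names are illustrative, not from this thread):

@
#include <cstring> // std::memcpy

// Copy every other scanline from src into dest, alternating the
// starting row each frame: even rows one frame, odd rows the next.
void blitInterlaced(const QImage &src, QImage &dest, int frame)
{
    const int start = frame % 2;
    for (int y = start; y < src.height(); y += 2)
        std::memcpy(dest.scanLine(y), src.scanLine(y), src.bytesPerLine());
}
@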


SGaist, Lifetime Qt Champion (#5)

Since you know OpenGL: you can use QGLWidget::bindTexture with your QImage and then draw that texture.
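For reference, a minimal sketch of that approach in paintGL(), assuming a member QImage named img (the quad and texture coordinates are just illustrative):

@
void MyClass::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);

    // bindTexture() uploads the QImage, converts the channel ordering
    // as needed, and binds the resulting GL texture.
    GLuint tex = bindTexture(img);

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);   // fixed-function pipeline, matching the Qt 4 era code above
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    // If a new image arrives every frame, release the texture again.
    deleteTexture(tex);
}
@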


BlastDV (#6)

SGaist, wouldn't that be slower? By binding a texture 30 times per second, you are passing XX amount of bytes to the graphics card with every call to bindTexture().

I just want to be sure; maybe with small pictures it works perfectly, but for large ones that RAM->CPU->GPU trip will become a bottleneck.


Chris Kawa, Lifetime Qt Champion (#7)

You have to get that data to the GPU anyway. It doesn't matter whether you use an OpenGL texture, QPainter or anything else.
[quote author="BlastDV" date="1405086066"]SGaist wouldn't that be slower? By binding a texture 30 times per second, you are passing XX amount of bytes to the graphics card with every call to bindTexture().[/quote]
That's not true at all. Binding does not send any data; it merely "marks" a texture as the active one for subsequent texture operations. glTexImage* and friends do the actual upload, and you can do that whenever you need. There are also more modern ways to do it, using extensions like bindless textures and buffer storage, but that might be a bit overkill for a single image.
If you generate a different image every frame there's no way around it: you have to send it CPU -> GPU (unless of course you generate the image in the shaders, but I assume that's not the case here).
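To illustrate the distinction (a hypothetical update path, not code from this thread):

@
// One-time setup: create the texture object.
GLuint tex;
glGenTextures(1, &tex);

// Every frame: binding is cheap and transfers no pixel data...
glBindTexture(GL_TEXTURE_2D, tex);
// ...the actual CPU -> GPU transfer happens in the upload call.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width(), img.height(),
             0, GL_RGBA, GL_UNSIGNED_BYTE, img.bits());
@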

As for the tearing: don't call glFlush or glFinish, Qt does that when needed. What you need to do is turn on v-sync. This can be done by passing the right QGLFormat when you create the QGLWidget. The bit you're interested in is "setSwapInterval()":http://qt-project.org/doc/qt-4.8/qglformat.html#setSwapInterval. Call it with the value 1 to turn v-sync on.
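Something along these lines (the widget class name is just a placeholder for your QGLWidget subclass):

@
QGLFormat fmt = QGLFormat::defaultFormat();
fmt.setSwapInterval(1);   // 1 = wait for vertical sync before each buffer swap

// Pass the format to the QGLWidget base constructor.
MyGLWidget *w = new MyGLWidget(fmt);
w->showFullScreen();
@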

Violet Giraffe (#8)

                  [quote author="BlastDV" date="1405086066"]SGaist wouldn't that be slower? By binding a texture 30 times per second, you are passing XX amount of bytes to the graphics card with every call to bindTexture().

                  I just want to be sure, maybe if we're talking about small pictures it may work perfectly but for large ones, that RAM->CPU->GPU trip will become a "bottle neck".[/quote]
Nope, it won't be slower, it will be faster. My application renders a 1280x800 image (or more, up to full HD on PC / Mac) at 25-30 fps, and it's way faster than software rendering with QPainter. In fact, with software rendering, if you're also scaling the image to fit the screen, you're looking at 10 fps, 15 tops. OpenGL has no problem keeping up with my 30 fps, and with minimal CPU load too, unlike software rendering. Even on mobile ARM Android devices!
I think one of the keys here is DMA (direct memory access): the video card loads textures from RAM directly, without any CPU load at all. And PCI-E bandwidth is huge, as is GDDR memory bandwidth. That's on PC, but even mobile GPUs are fairly fast nowadays and have sufficient bandwidth.

BlastDV (#9)

Hi, you're right, bindTexture() does not send the data to the GPU, and for this thread it will do the job. I was wrong about that part.

But if we talk about passing data to the GPU every frame, then no, it won't be that fast all the time. That's why people use VBOs and the like. The GPU generates frames really fast because of its architecture, but only when every frame is generated there. If you send a bunch of vertices from the CPU to the GPU every frame, your FPS will start to fall, even with PCI-E. OpenGL has no problem scaling or applying any common transformation to vertices, because everything there works with matrices and the GPU is built for that. That's what I was talking about.
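For instance, a minimal sketch of the keep-it-on-the-GPU case (this assumes the buffer-object entry points are available, e.g. through an extension loader or QGLFunctions):

@
// One-time setup: upload the vertices once so they stay in GPU memory.
GLfloat verts[] = { -1,-1,  1,-1,  -1,1,  1,1 };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// Every frame: just bind and draw; no per-frame CPU -> GPU vertex transfer.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
@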

But this is not what the thread is about; I think SGaist's solution or Chris Kawa's will work.


Chris Kawa, Lifetime Qt Champion (#10)

Well yeah, it's always better if you can keep the data on the GPU side, but if you're generating new data every frame, which is what this thread is about, then there's no escaping the transfer cost. It's just a matter of finding the quickest way to do it.

