OpenGL Pixel Drawing

  • Hi guys, I'm writing a NES emulator using Qt and C++, and I'm now working on the NES PPU (picture processing unit).
    I want to use OpenGL for the output. I already read the OpenGL Window example, but I don't think it's what I need for this project: it seems aimed at drawing a whole frame rather than individual pixels, and I suspect it would be too slow here, since I would need to call painter.setPen() every time I draw a pixel with a different color.
    I also expect it to be slow because I need to access the on-screen pixels around 90,000 times per frame, and I need to achieve at least 60 FPS.
    The window size is 256*240. That may sound small enough to be fast even with a rasterizer, but it can still be slow when drawn pixel by pixel.
    Any help would be appreciated.

  • Moderators

    You shouldn't send painting commands to OpenGL pixel by pixel. That will have terrible performance. Instead, draw a single quad that fills the window and apply a texture of the desired dimensions to it.
    This also scales up nicely, since filtered texture scaling is basically free on today's hardware.
    You can then "stream" the texture updates (computed on the CPU) by mapping the texture into client memory via a PBO (pixel buffer object). If that still doesn't perform well enough, you can use two PBOs and alternate between them, creating a sort of double buffering.
    There's a nice article about it; take a look at its "Streaming Texture Uploads" example.

    Mind though that while this secures good drawing performance on the GPU side, your actual pixel operations on the CPU should be optimized as much as possible (possibly across multiple threads?).
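    The double-buffered streaming pattern described above can be sketched in plain C++. This is only an illustrative model: the two vectors stand in for two mapped PBOs, all names (StreamedTexture, putPixel, etc.) are hypothetical, and the real GL calls, which need a live context, are indicated in comments.

    ```cpp
    #include <cstdint>
    #include <vector>

    // NES frame: 256x240 pixels, RGB888 (3 bytes per pixel).
    constexpr int kWidth = 256, kHeight = 240, kBpp = 3;

    // Hypothetical model of two PBOs used in ping-pong fashion: the CPU
    // fills one buffer while the GPU uploads/reads the other.
    struct StreamedTexture {
        std::vector<uint8_t> pbo[2] = {
            std::vector<uint8_t>(kWidth * kHeight * kBpp),
            std::vector<uint8_t>(kWidth * kHeight * kBpp)};
        int writeIndex = 0; // buffer the CPU is currently writing

        uint8_t* beginFrame() {
            // Real code: glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId[writeIndex]);
            //            map it with glMapBufferRange(...) and return the pointer.
            return pbo[writeIndex].data();
        }

        void endFrame() {
            // Real code: glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); then upload the
            //            finished buffer with glTexSubImage2D(GL_TEXTURE_2D, 0,
            //            0, 0, kWidth, kHeight, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
            writeIndex ^= 1; // ping-pong so CPU and GPU never touch the same PBO
        }
    };

    // Write one RGB pixel into the mapped buffer at (x, y).
    inline void putPixel(uint8_t* buf, int x, int y,
                         uint8_t r, uint8_t g, uint8_t b) {
        uint8_t* p = buf + (y * kWidth + x) * kBpp;
        p[0] = r; p[1] = g; p[2] = b;
    }
    ```

    The point of the swap in endFrame() is that the emulator never stalls waiting for the GPU: each frame it writes into the buffer the GPU is not reading.
    
    
    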

  • thanks Chris Kawa,
    what I did was send the drawing commands pixel by pixel, and you're right, it gave me very slow performance. So I modified my code to create a QImage and fill its pixels one by one like this:
    @QImage* frameBuffer;
    QOpenGLPaintDevice* device;
    QPainter* screenPainter;

    void WndNESScreen::PutPixel(unsigned int x, unsigned int y, unsigned char red, unsigned char green, unsigned char blue)
    {
        frameBuffer->bits()[y*256*3 + x*3] = red;
        frameBuffer->bits()[y*256*3 + x*3 + 1] = green;
        frameBuffer->bits()[y*256*3 + x*3 + 2] = blue;
    }@
    and when all the pixels are filled I use QPainter::drawImage() to draw the image scaled to the window size, like this:

    @void WndNESScreen::FlushFrameBuffer()
    {
        device->setSize(size());                                      // set device size to window size
        screenPainter->begin(device);                                 // lock the device to the painter
        screenPainter->drawImage(0, 0, frameBuffer->scaled(size())); // draw the QImage scaled to window size
        screenPainter->end();                                         // release the device
        context->swapBuffers(this);                                   // then swap the buffers to show the image on screen
    }@

    and this gives me good performance. Is this right, or am I doing something wrong?

  • Moderators

    What I suggested was conceptually the same, but more low-level, using core OpenGL functions instead of Qt wrappers. It's also somewhat more complex.

    This won't get you the best performance possible, though. For example, the scaled() method copies the image (I'm not sure what the underlying resampling method is), whereas I was talking about hardware-accelerated texture stretching.

    But if it's good, it's good :) There's no reason to complicate things if the simple solution performs well enough.
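    To illustrate the difference being discussed: a CPU-side scale such as scaled() has to allocate a destination buffer and resample every output pixel each frame, roughly like the standalone nearest-neighbor sketch below (a hypothetical illustration, not Qt's actual implementation), whereas hardware texture stretching does the equivalent sampling on the GPU essentially for free.

    ```cpp
    #include <cstdint>
    #include <vector>

    // Nearest-neighbor upscale of a tightly packed RGB888 buffer.
    // src is sw x sh pixels; the result is dw x dh pixels.
    // This touches every destination pixel on the CPU, every frame --
    // the cost that hardware texture filtering avoids.
    std::vector<uint8_t> scaleNearestRGB(const std::vector<uint8_t>& src,
                                         int sw, int sh, int dw, int dh) {
        std::vector<uint8_t> dst(static_cast<size_t>(dw) * dh * 3);
        for (int y = 0; y < dh; ++y) {
            int sy = y * sh / dh;                 // nearest source row
            for (int x = 0; x < dw; ++x) {
                int sx = x * sw / dw;             // nearest source column
                const uint8_t* s = &src[(static_cast<size_t>(sy) * sw + sx) * 3];
                uint8_t* d = &dst[(static_cast<size_t>(y) * dw + x) * 3];
                d[0] = s[0]; d[1] = s[1]; d[2] = s[2];
            }
        }
        return dst;
    }
    ```

    For a 256x240 frame stretched to, say, 1024x960, that inner loop runs nearly a million times per frame on the CPU, which is why letting the GPU stretch the texture is the faster route.
    
    
    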

  • OK then, I guess I'll use my current method for this project.
    Thanks again for your help.
