[quote author="Seba84" date="1324591465"]
But regarding the problems you have found with OpenGL on different platforms and hardware, these should be reflected on Qt painting tools as the underlaying code is done on OpenGL, am I right?[/quote]
I honestly don't know; I did not use the QPainter-on-OpenGL routines at all. Software-raster QPainter is used elsewhere (e.g. for painting to offscreen buffers in separate processes), but that is another story.
My application is slightly different from the common video-game-style application: I push video, web pages, and images up to the GPU as textures at a high rate. Normal video game usage pre-pushes the textures, then uses them in rendering.
Also, gamers probably get tired after six hours and shut down for the night. Our application needs to run continuously, so the testing pattern is different. Think of walking through the airport and looking at the digital signs - I generally see one Windows error (an exposed dialog, or an application error message) every time I travel; such things are unacceptable. Linux would be a reliability improvement for all of those displays, but politics generally forces otherwise.
My point being that for most use cases (a 5-minute app run, a couple of hours of gaming) it probably doesn't matter that these errors exist under the hood.
Essentially I started with this tutorial: http://doc.qt.nokia.com/qq/qq06-glimpsing.html . I implemented a thread which performed the GL drawing and the SwapBuffers call. Then I implemented a texture upload thread using PBOs (which in retrospect was redundant, since the separate thread already made the uploads asynchronous). The upload thread used a GL context shared with the main widget (a QGLWidget).
This worked fine on NVidia. The synchronization logic was all fairly standard, since I had to communicate completed uploads to the render thread. For playing ffmpeg videos (one of many supported media types), the upload thread would scream ahead, filling up a ring buffer of YUV-format texture triples. The render thread would then consume them at video frame rate, painting them with a fragment shader that performs the YUV->RGB conversion on the GPU - all fairly standard stuff, but lots of fun!
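For anyone curious what that shader actually computes: here is the per-pixel YUV->RGB math as a plain C++ function (in the shader it is the same arithmetic on normalized floats). This is a minimal sketch assuming BT.601 video-range coefficients; the actual shader in my application may use slightly different constants.

```cpp
#include <algorithm>
#include <cmath>

struct Rgb { unsigned char r, g, b; };

// BT.601 video-range YCbCr -> RGB, the same math the fragment shader
// applies to each sampled Y/U/V texel.
Rgb yuvToRgb(unsigned char y, unsigned char u, unsigned char v)
{
    const double c = y - 16.0;   // luma, offset by the video-range floor
    const double d = u - 128.0;  // blue-difference chroma
    const double e = v - 128.0;  // red-difference chroma

    auto clamp255 = [](double x) {
        return static_cast<unsigned char>(
            std::min(255.0, std::max(0.0, std::round(x))));
    };

    return Rgb{
        clamp255(1.164 * c + 1.596 * e),
        clamp255(1.164 * c - 0.391 * d - 0.813 * e),
        clamp255(1.164 * c + 2.018 * d)
    };
}
```

Doing this on the GPU means the upload path can stay in the video's native planar YUV, which is one copy and roughly half the bytes of converting to RGB on the CPU.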
Moving from NVidia to ATI/win32, my program would end up rendering green textures instead of frames, or just garbage memory areas, after about 5 minutes of playback. XP would BSOD; Vista and 7 would limp on. Out went the PBOs: same problem, less frequent. So out went the upload thread - this got things working again at a cost in latency; all OpenGL operations are now performed in one thread. We suspect that the calls to glGenTextures/glDeleteTextures are not thread-safe on ATI. We have observed similar behavior with ATI's proprietary Linux drivers.
With just a single OpenGL paint thread all was working again - until along came a Sandy Bridge (Core i3) / Windows 7 computer to test on. This worked great, except for a significant memory leak: about every 30 seconds I'd lose 8 to 10 MB of heap space. Now you may say - OK, make a huge page file and reboot every 12 hours...

After about three weeks of analysis (and ensuring our application did not have any memory leaks of its own), this is what I have concluded - which may or may not be accurate, since I can't see the driver source code and have had to guess. The Intel drivers do some memory manipulation under the hood; I believe they mark texture memory as write-protected. Thus, when a user calls glTexSubImage2D or glTexImage2D, the pages backing the memory pointer are locked in physical RAM and marked write-protected. If you modify the memory, the ensuing trap copies it to a driver-backed buffer and internally re-associates this new copy with the texture id, allowing the user process to carry on. This makes a lot of sense on a UMA architecture - after all, why copy if the user is just going to leave that memory mapped and not touch it? The UMA means you can use your entire system memory as a texture buffer! However, touching it is just what I did; after all, I had to.

It appears that these texture buffers, allocated by the driver, have issues being freed when the texture is no longer needed. The buffer I passed in was from a memory-mapped file (e.g. MapViewOfFile), and this may have confused the underlying driver - however, I have seen similar leaks when using a stack-allocated QImage and passing img.bits() to glTexImage2D. Forcing a glFinish call in the render loop has removed the leaks; however, I don't believe this is a sane way to address the problem.
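If my guess about the write-protected pages is right, another workaround worth trying (I have not verified it cures the leak) is to never hand the driver a pointer into the mapped file at all: copy each frame into one reused, process-owned staging buffer first, so the pages the driver might pin are always the same plain heap pages. A sketch of the idea, with the GL call as a comment since it needs a context:

```cpp
#include <algorithm>
#include <cstring>
#include <vector>

// One staging buffer, reused for every upload, so the pointer handed
// to GL always refers to the same process-owned heap pages rather
// than pages backed by a memory-mapped file.
class TextureStager {
public:
    explicit TextureStager(std::size_t frameBytes) : staging_(frameBytes) {}

    // src may point into a MapViewOfFile region or a QImage's bits();
    // we copy it so the driver never sees those pages directly.
    const unsigned char* stage(const unsigned char* src, std::size_t len) {
        std::memcpy(staging_.data(), src, std::min(len, staging_.size()));
        // The GL thread would then upload from the staged copy, e.g.:
        //   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
        //                   GL_LUMINANCE, GL_UNSIGNED_BYTE, staging_.data());
        return staging_.data();
    }

private:
    std::vector<unsigned char> staging_;
};
```

The extra memcpy costs bandwidth, but unlike glFinish it doesn't stall the whole pipeline every frame.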
Anyhow - I've driveled on long enough. I hope this helps others out there getting their software reliable.