QPixmap and 16-bit colour depth
-
Hi,
I'm developing for Windows CE 5.0 and for a display that only has 16-bit colour depth. Is there a performance hit from keeping all the QPixmaps at 32-bit colour depth, and if so, is there a way to avoid it? It seems that even if I load 16-bit bitmaps, they get converted internally to 32-bit. I'm just concerned that the processor is doing lots of 32 -> 16 conversions on every paint.
Thanks,
Steve
-
Yes, blitting a 32-bit pixmap to a 16-bit screen (or the other way, 16-bit to 32-bit) will typically give a significant performance hit. For every blit, the system needs to rearrange the colors you are blitting into the right format. This is bad.
Even 32-bit to 32-bit or 16-bit to 16-bit can give a performance hit if the bit order, i.e. the RGB format, is different. For example, it could be a 565 pixmap being blitted to a 555 surface, or whatever funky stuff you have :) Check out "http://doc.qt.nokia.com/4.6/qimage.html#Format-enum":http://doc.qt.nokia.com/4.6/qimage.html#Format-enum for a list of the different formats we're talking about here.
Typically Qt will detect the proper depth and format for your system and automatically choose that for its pixmaps, but it would be a good idea to verify that what you're blitting (both the number of bits and the format) really matches what the screen is running.
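A quick way to check is to compare the screen's default depth against the pixmap's actual depth and format. Here's a minimal sketch (Qt 4.6; "sprite.png" is just a placeholder for one of your own images):

```cpp
#include <QApplication>
#include <QPixmap>
#include <QImage>
#include <QDebug>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Depth of the default screen, e.g. 16 on your WinCE display
    qDebug() << "screen depth:" << QPixmap::defaultDepth();

    // "sprite.png" stands in for whatever you actually load
    QPixmap pixmap("sprite.png");
    qDebug() << "pixmap depth:" << pixmap.depth();

    // The exact format (RGB32, RGB16, ...) is visible via QImage
    qDebug() << "pixmap format:" << pixmap.toImage().format();

    return 0;
}
```

If the pixmap's depth or format doesn't match the screen's, every paint is paying for a conversion.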
"Depending on the system, QPixmap is stored using a RGB32 or a premultiplied alpha format. If the image has an alpha channel, and if the system allows, the preferred format is premultiplied alpha."
from "http://doc.qt.nokia.com/4.6/qpixmap.html#pixmap-information
":http://doc.qt.nokia.com/4.6/qpixmap.html#pixmap-informationHowever, you QImages are probably still 32bit I would assume. Unless you need an alpha channel when you paint on them , you might consider using QImages of QImage::Format_RGB16 instead. It's not like you need 32bit color depth if you end up blitting it all to a 16bit display. Saves a lot of memory and should be faster as well.