Converting 16-bit-per-channel images for QImage
-
After dealing with grayscale images and QImage without much trouble, I am stepping into color images, and I am confused by the various methods to convert images into the various image formats. I work with Qt 5.5 on macOS.
My problem is rather simple. I have raw images from DSLR cameras which I want to display and, eventually, rescale (intensity/contrast stretching). They are read in with an external library (LibRaw, to be exact).
Let's consider for this example a case of 16 bits per channel. With only 3 channels, R, G and B, we end up with a 48-bit image. I am simplifying a bit here, as raw images have 4 channels, with 2 different green channels, but I have some processing that gives me 3 effective channels (the 2 green channels are averaged into a single green channel). So for each pixel I have 3 different values: red, green and blue. The minimum possible value is 0 and the maximum possible value is 65535 (so, 16 bits per channel). Let's call these 3 variables red16, green16, and blue16 (for each pixel).
I understand that Qt and QImage support up to 8 bits per channel, so let's get the converted 8-bit values, called red8, green8, and blue8. So far, I naively tried this to convert to 8 bits, without any stretching:
red8   = (int) 255 * red16   / 65535;
green8 = (int) 255 * green16 / 65535;
blue8  = (int) 255 * blue16  / 65535;
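Written with explicit types for a single sample, this is what I mean (just a sketch; as far as I understand, 255 * red16 is computed as an int thanks to integer promotion, so the product cannot overflow a 16-bit type):

#include <QtGlobal>

// Scale one 16-bit sample down to 8 bits with plain integer arithmetic.
// 255 * 65535 = 16711425, which fits comfortably in an int.
static inline int to8bit(quint16 v16)
{
    return (255 * v16) / 65535;   // 0 -> 0, 65535 -> 255
}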
These 8-bit values go into a buffer declared as:
quint32 *coloredImage = new quint32[nPixels]; // nPixels = width * height
which I populate with each pixel's color value using qRgb like this:
coloredImage[ii] = qRgb(red8, green8, blue8);
Finally, I give it to QImage:
myQImage = new QImage((uchar*)coloredImage, width, height, QImage::Format_ARGB32);
In the latter, I use Format_ARGB32 only because the documentation on qRgb says it returns values in the ARGB32 format. But maybe that's not necessary.
I do have something displayed, but I see weird things when I apply some intensity/contrast stretching. Before blaming how I do that, I am not even sure that I have done the above conversion properly. Have I? Is there a more proper/robust way to do it?
I am not at all confident about the data types I'm using, nor whether there's a better way than using qRgb, given that I have the 3 channels separately.
For example, I have seen this:
http://stackoverflow.com/questions/6310969/convert-16bit-qimage-to-8bit-unsigned-char-in-qt
But in there, because the whole image is converted to 8 bits, it is unclear to me whether the answer is for monochrome 16-bit images or for colour 3 x 16-bit images.
To me, converting the colour image to 8 bits total (for all channels) means we have at best 2 to 3 bits per channel. In addition, I have little experience working with the >> operator.
Thanks
-
@CamelFrog It was to show that you should use variables which are unsigned and have the proper bit size. u_int16_t is a data type able to hold values in [0, 65535]. What data type are your variables red16, green16, and blue16?
Also, since this is about communication between two systems (camera, PC), you can have problems with the endianness of your incoming data. Try pointing the camera at a white sheet of paper and check that the values behave as expected.
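If the byte order does turn out to be swapped, each incoming sample can be normalised with QtEndian before any scaling, for example like this (just a sketch, assuming the camera delivers big-endian 16-bit values; skip it if the data is already in host order):

#include <QtEndian>
#include <QtGlobal>

// Convert one raw 16-bit sample that arrives in big-endian byte order
// into the host byte order before any further processing.
static inline quint16 sampleFromCamera(quint16 rawBigEndian)
{
    return qFromBigEndian<quint16>(rawBigEndian);
}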
-
Yes, I am already using an unsigned 16-bit data type for red16, green16 and blue16 (I forgot to say unsigned, that's true). They are all between 0 and 65535.
What you're suggesting is the same as in
http://stackoverflow.com/questions/6310969/convert-16bit-qimage-to-8bit-unsigned-char-in-qt
So, if I apply it to my case, do you mean to do the following (using some random numbers for my 16-bit colour variables)?
u_int16_t red16   = 14;
u_int16_t green16 = 325;
u_int16_t blue16  = 64000;

u_int8_t red8   = red16   >> 8;
u_int8_t green8 = green16 >> 8;
u_int8_t blue8  = blue16  >> 8;

and then loop that over all the pixels:
coloredImage[ii] = qRgb(red8, green8, blue8);
Then consider this:
qRgb just returns int. What data type should coloredImage be, given that it ends up as the argument passed to QImage? If qRgb returns int, then coloredImage is a list of those. So, after the conversion above, is it correct to do

int *coloredImage = new int[nPixels];
then loop over pixel index:
coloredImage[pixelIndex] = qRgb(red8, green8, blue8);
and instantiate the QImage:
myQImage = new QImage((uchar*)coloredImage, width, height, QImage::Format_ARGB32);
Does that make sense?
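For concreteness, here is the whole thing as I currently picture it (just a sketch of my understanding, which may well be wrong, hence the question; I use QRgb, the typedef that qRgb() actually returns):

// One QRgb (an unsigned int) per pixel; nPixels = width * height.
QRgb *coloredImage = new QRgb[nPixels];

for (long ii = 0; ii < nPixels; ii++)
{
    u_int8_t red8   = red16[ii]   >> 8;
    u_int8_t green8 = green16[ii] >> 8;
    u_int8_t blue8  = blue16[ii]  >> 8;
    coloredImage[ii] = qRgb(red8, green8, blue8);
}

// As far as I understand, this constructor does not copy the buffer,
// so coloredImage must stay alive as long as the QImage is used.
myQImage = new QImage((uchar*)coloredImage, width, height, QImage::Format_ARGB32);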
-
Hi,
If I may, you should maybe consider using OpenCV for the image manipulation and only convert to QImage once you're done. IIRC they have a Qt backend for the UI part, so you can probably take some ideas from how they do the conversion.
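For example, something along these lines would do the rescaling and the conversion in a couple of calls (only a sketch, untested, assuming OpenCV 3.x and a 16-bit, 3-channel BGR cv::Mat as input):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <QImage>

// Convert a 16-bit, 3-channel BGR cv::Mat (CV_16UC3) to an 8-bit QImage.
QImage matToQImage(const cv::Mat &src16)
{
    cv::Mat src8, rgb;
    src16.convertTo(src8, CV_8UC3, 255.0 / 65535.0); // rescale 16 bit -> 8 bit per channel
    cv::cvtColor(src8, rgb, cv::COLOR_BGR2RGB);      // OpenCV stores BGR, QImage expects RGB
    QImage img(rgb.data, rgb.cols, rgb.rows,
               static_cast<int>(rgb.step), QImage::Format_RGB888);
    return img.copy();                               // deep copy so the QImage owns its data
}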
-
I have considered this, but for now it is overkill if I just want to display images. I don't want to manipulate them, and even if I did, the point of my project is to understand the basics before (maybe) going with yet another framework.
The question remains: with the red8, green8, and blue8 channels above, unsigned 8-bit, how do I build my colour image with QImage?
As I said, I have no trouble displaying grayscale images. I simply would like to use those channels to get a colour QImage. So far, the suggestions above are giving me a worse display than what I first did in my first post.
Thanks
-
I may have found what I was missing:
http://stackoverflow.com/questions/1982878/how-to-display-image-from-array-of-colors-data-in-qt
Will report back.
-
I looked at the Stack Overflow post. I think that
new QImage((uchar*)coloredImage, width, height, QImage::Format_ARGB32);
is equivalent to QImage image( data, width, height, 4, QImage::Format_ARGB32 ); see
http://doc.qt.io/qt-5/qimage.html#QImage-5
So you should get the same results as your first results with this function. The image.setPixel( i, argb ); approach should work since you are setting each pixel individually. So, if this method does not work, then your source data is incorrect.
We will wait for your report once you experiment ^^. Good luck.
-
OK. In fact, I am confused by qRgb. It accepts "int", which is 32-bit:
http://doc.qt.io/qt-5/qcolor.html#qRgb
But the documentation doesn't say what the accepted range is. When I look at setRgb(): http://doc.qt.io/qt-5/qcolor.html#setRgb
it also accepts "int" (which is 32 bits), and yet it says the range must be [0-255]. So, is it the same for qRgb? It takes its arguments as "int", but they need to be within [0-255]?
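Trying to answer my own question, I peeked at the Qt headers; if I read them right, qRgb() essentially does this (hand-written equivalent, not the actual Qt source):

#include <QColor>   // for QRgb

// Hand-written equivalent of what qRgb() appears to do (QRgb is laid out as 0xAARRGGBB):
inline QRgb myRgb(int r, int g, int b)
{
    return (0xffu << 24)          // alpha forced to 255 (opaque)
         | ((r & 0xffu) << 16)    // red, masked to 8 bits
         | ((g & 0xffu) << 8)     // green, masked to 8 bits
         |  (b & 0xffu);          // blue, masked to 8 bits
}

If that is right, values outside [0-255] are not clamped but simply masked, so I do need to keep my scaled values in that range myself.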
-
I made some progress. I'll give some up-to-date code for Qt 5.5 as the information in the link at
http://stackoverflow.com/questions/1982878/how-to-display-image-from-array-of-colors-data-in-qt
is outdated. In my case, setPixel() does not accept 1D indexing but needs a QPoint instead.
So, still using red16, green16 and blue16 as my unsigned 16-bit values of the colour channels from my original image: they each have a size of nPixels, i.e. the total number of pixels in the image, so e.g. red16[ii] gives me the red value of pixel ii.

int range = 65535;
// below, naxis1 is the width, naxis2 is the height (in pixels)
newPaintImage = new QImage(naxis1, naxis2, QImage::Format_ARGB32);

for (int ii = 0; ii < nPixels; ++ii)
{
    // Below we scale the channel value to fit within [0-255].
    // This assumes that qRgba() needs that range. This is undocumented.
    cred   = (int) 255 * red16[ii]   / range;
    cgreen = (int) 255 * green16[ii] / range;
    cblue  = (int) 255 * blue16[ii]  / range;

    QRgb argb = qRgba( cred,    // red
                       cgreen,  // green
                       cblue,   // blue
                       255);    // alpha

    // 1D index to 2D (x,y) coordinates.
    QPoint loc(ii % naxis1, ii / naxis1);
    newPaintImage->setPixel(loc, argb);
}
This is finally giving the "expected" results (from comparison with a reference .tiff image), and that result is indeed different from what I had at the very beginning. So, since I'm using the same channel values as in the first post, my source image is correct but my original implementation was not.
I think I misunderstood how I'm supposed to populate my 32-bit buffer with the QRgb-converted colours. I'm probably not looping over the right indices. So, let's start over, and I'll show you how I'm looping. What follows is a more compact version of my first post, but I stick to a 32-bit image buffer which is cast when passing it to the QImage constructor.
// initialize an image buffer, 32 bits unsigned. nPixels is my total number of pixels.
image32 = new quint32[nPixels];

for (long ii = 0; ii < nPixels; ii++)
{
    cred   = (int) 255 * red16[ii]   / range;
    cgreen = (int) 255 * green16[ii] / range;
    cblue  = (int) 255 * blue16[ii]  / range;

    QRgb colorV = qRgb(cred, cgreen, cblue); // Can also be quint32, I tried both, same results.

    image32[ii] = colorV; // so image32 is populated here at pixel index ii, with the quadruplet.
}

// Finally, send the buffer to QImage
myQImage = new QImage((uchar*)image32, naxis1, naxis2, QImage::Format_ARGB32);
With this I'm assuming that image32[ii] refers to pixel ii, to which I assign a colour value.
Yet when I look at the Stack Overflow post above, it seems I should maybe account for the bytesPerLine (a.k.a. the stride?) if I want to populate the buffer directly, and that I cannot just assign the QRgb colour directly to image32[ii] like above, can I? If not, then how should I do it?
Thanks
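For reference, I also see there is a constructor that takes bytesPerLine explicitly; with my tightly packed quint32 buffer the stride would simply be naxis1 * 4, so (if I understand correctly, untested) this should be equivalent:

// Same buffer as above, but passing the stride (bytes per line) explicitly.
// For a tightly packed quint32 buffer there is no row padding: stride = width * 4.
myQImage = new QImage((uchar*)image32, naxis1, naxis2,
                      int(naxis1 * sizeof(quint32)),
                      QImage::Format_ARGB32);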
-
Just a side note: I don't really need to use an image buffer at all; I just want to avoid setPixel(), which is a bit slow. I tried working with scanLine(), but, using the exact same loop, it crashed, and I believe that's again because I'm not accounting for bytesPerLine properly.
-
I've got it working with scanLine().
The only thing that changed from the setPixel() usage in my code above is:
QPoint loc(ii % naxis1, ii / naxis1);
newPaintImage->setPixel(loc, argb);
Simply replaced with:
QRgb* rowData = (QRgb*) newPaintImage->scanLine(ii / naxis1);
rowData[ii % naxis1] = argb;
scanLine(), given the row number (i.e. the Y coordinate), works out the stride for you and returns the proper pointer for the whole row.
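For what it's worth, the same thing can be written as an explicit loop over rows, calling scanLine() once per row instead of once per pixel (a sketch of the same logic, untested):

for (int y = 0; y < naxis2; ++y)
{
    QRgb* rowData = (QRgb*) newPaintImage->scanLine(y);  // QImage handles the stride
    for (int x = 0; x < naxis1; ++x)
    {
        long ii = (long)y * naxis1 + x;                  // back to my 1D pixel index
        int cred   = 255 * red16[ii]   / range;
        int cgreen = 255 * green16[ii] / range;
        int cblue  = 255 * blue16[ii]  / range;
        rowData[x] = qRgba(cred, cgreen, cblue, 255);
    }
}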
-
Something interesting came up...
When I was comparing my methods, I had a pragma directive for OpenMP to parallelise my for loop. It was active for the tests of the methods that use my buffer image32.
Then, for the new tests with setPixel(), written in another block of code, I didn't use the OpenMP directive. The directive was simply:

#pragma omp parallel for
for (long ii = 0; ii < nPixels; ii++) {

applied to every for loop in the method that uses the buffer (image32) explicitly.
But when I went back and compared without OpenMP... I realised the directive was actually messing with the shared buffer...
Here are some screenshots:
- image32 (OpenMP): https://www.dropbox.com/s/9lw8gs4oczrm2b7/image32_A.jpg?dl=0 shows the output image when using the original method, with OpenMP. Note the scrambled horizontal lines. This is non-deterministic: other runs of the program give me different results.
- image32 (no OpenMP): https://www.dropbox.com/s/w854ahn309nbo33/image32_no_OMP.jpg?dl=0 same as above, without OpenMP. The scrambled lines are gone, the image is cleaner, and it is identical across different runs.
- setPixel (no OpenMP): https://www.dropbox.com/s/h6su6tp1qjn7gqt/setPixels_no_OMP.jpg?dl=0 in fact identical to the above, so my original method was OK, as anticipated by @Yostane.
- setPixel (with OpenMP again): https://www.dropbox.com/s/onmhbk8pppuuc4o/setPixels.jpg?dl=0 the scrambled lines show up again.
Here is the code with the openMP directive:
int range = 65535;
newPaintImage = new QImage(naxis1, naxis2, QImage::Format_ARGB32);

#pragma omp parallel for // THAT WAS THE CULPRIT!!!
for (long ii = 0; ii < nPixels; ii++)
{
    cred   = (int) 255 * red16[ii]   / range;
    cgreen = (int) 255 * green16[ii] / range;
    cblue  = (int) 255 * blue16[ii]  / range;

    QRgb argb = qRgba(cred, cgreen, cblue, 255);

    QPoint loc(ii % naxis1, ii / naxis1);
    newPaintImage->setPixel(loc, argb);
}
Then the newPaintImage is sent to a widget with a paintEvent implemented.
So, I have been misusing OpenMP here, and this was the source of my troubles. Is there a way to make this thread-safe, so that I can still use parallelisation while making sure that things get displayed only once all the threads are done (I have 4)?
Thanks
-
Fixed!
I posted the problem here:
http://stackoverflow.com/questions/33130691/qimage-and-openmp-when-updating-image-display/33133653#33133653
I needed to declare the variables as private for the pragma directive. So, instead of using cred, cgreen, and cblue as I was, the code becomes:
int cred2, cgreen2, cblue2;

#pragma omp parallel for private(cred2, cgreen2, cblue2)
for (int ii = 0; ii < nPixels; ++ii)
{
    cred2   = (int) 255 * red16[ii]   / range;
    cgreen2 = (int) 255 * green16[ii] / range;
    cblue2  = (int) 255 * blue16[ii]  / range;

    QRgb argb = qRgba(cred2, cgreen2, cblue2, 255);

    QRgb* rowData = (QRgb*) newPaintImage->scanLine(ii / naxis1);
    rowData[ii % naxis1] = argb;
}
That's all. All the methods then become strictly equivalent regarding the end result. I haven't compared speed though, which is a concern in my context.
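A small afterthought: another way to avoid the private() clause altogether is to declare the temporaries inside the loop body, since variables declared inside the parallel loop are private to each thread by construction. A sketch with the same logic:

#pragma omp parallel for
for (int ii = 0; ii < nPixels; ++ii)
{
    // Locals declared here are automatically private to each OpenMP thread.
    int cred   = 255 * red16[ii]   / range;
    int cgreen = 255 * green16[ii] / range;
    int cblue  = 255 * blue16[ii]  / range;

    QRgb argb = qRgba(cred, cgreen, cblue, 255);

    QRgb* rowData = (QRgb*) newPaintImage->scanLine(ii / naxis1);
    rowData[ii % naxis1] = argb;
}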