# Convert YUV420 to RGB and show on Qt pixmap

• Hi,

I'm trying to convert a YUV420 image to an RGB image to display it in Qt, because Qt can't display YUV files directly.

I think the conversion is right, but Qt doesn't show anything, so I suspect something is wrong with the conversion to QByteArray or with how I display it.

Can someone help me?

@unsigned int convertYUVtoRGB(int y, int u, int v) {
    int r, g, b;

    r = y + (int)(1.402f*v);
    g = y - (int)(0.344f*u + 0.714f*v);
    b = y + (int)(1.772f*u);

    r = r>255 ? 255 : r<0 ? 0 : r;
    g = g>255 ? 255 : g<0 ? 0 : g;
    b = b>255 ? 255 : b<0 ? 0 : b;
    return 0xff000000 | (b<<16) | (g<<8) | r;
}

unsigned int * convertYUV420_NV21toRGB8888(unsigned char data[78080], int width, int height) {
    int size = width*height;
    int offset = size;
    unsigned int * pixels = new unsigned int[size];
    int u, v, y1, y2, y3, y4;

    // i walks the Y samples and the output pixels
    // k walks the interleaved U and V samples
    for(int i=0, k=0; i < size; i+=2, k+=2) {
        y1 = data[i  ]&0xff;
        y2 = data[i+1]&0xff;
        y3 = data[width+i  ]&0xff;
        y4 = data[width+i+1]&0xff;

        u = data[offset+k  ]&0xff;
        v = data[offset+k+1]&0xff;
        u = u-128;
        v = v-128;

        pixels[i  ] = convertYUVtoRGB(y1, u, v);
        pixels[i+1] = convertYUVtoRGB(y2, u, v);
        pixels[width+i  ] = convertYUVtoRGB(y3, u, v);
        pixels[width+i+1] = convertYUVtoRGB(y4, u, v);

        // skip the second row of Y samples, it was already processed above
        if (i!=0 && (i+2)%width==0)
            i+=width;
    }

    return pixels;
}
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    unsigned char * buffer;
    unsigned int * image = NULL;
    QPixmap pixmap;
    QImage img(352, 288, QImage::Format_ARGB32_Premultiplied);
    img.fill(QColor(Qt::white).rgb());

    ofstream outfile("debug.txt", ofstream::binary);
    ifstream is;
    is.open("sample00.yuv", ios::binary);
//    is.seekg (0, ios::end);
//    length = is.tellg();
//    is.seekg (0, ios::beg);
    buffer = new unsigned char[101376];
    is.close();
    for(int x=0; x<101376; x++)
    {
        outfile.write((char*)buffer + x, 1);
    }
    outfile.close();

    /*
    for (int x = 0; x < 10; ++x) {
        for (int y = 0; y < 10; ++y) {
            img.setPixel(x, y, qRgb(0, 0, 0));
        }
    }
    */

    for(int i = 0; i < 101376; i++)
        qDebug() << buffer[i] << endl;

    image = convertYUV420_NV21toRGB8888(buffer, 352, 288);
    QByteArray byteImage((const char *)image);

    QLabel myLabel;
    myLabel.setPixmap(pixmap);
    myLabel.setGeometry(20, 100, 320, 122);
    myLabel.show();

    return a.exec();
}@

Kind regards,

• I used the following snippet to fill a QImage from YUV420 data:

@for (int y = 0; y < frame->height; y++) {
    for (int x = 0; x < frame->width; x++) {
        const int xx = x >> 1;
        const int yy = y >> 1;
        const int Y = frame->data[0][y * frame->linesize[0] + x] - 16;
        const int U = frame->data[1][yy * frame->linesize[1] + xx] - 128;
        const int V = frame->data[2][yy * frame->linesize[2] + xx] - 128;
        const int r = qBound(0, (298 * Y + 409 * V + 128) >> 8, 255);
        const int g = qBound(0, (298 * Y - 100 * U - 208 * V + 128) >> 8, 255);
        const int b = qBound(0, (298 * Y + 516 * U + 128) >> 8, 255);

        image->setPixel(x, y, qRgb(r, g, b));
    }
}@

• Thanks for your help. But what Qt type is frame, or is it a class of your own? I'm asking because of linesize and so on.

• In my case frame is an AVFrame from ffmpeg. But in general a video frame can have several planes of data, @data[0], data[1], data[2]@, with a linesize for each plane: @linesize[0], linesize[1], linesize[2]@. Linesize is the stride of a plane in bytes: at least the width in pixels multiplied by the bytes per pixel, possibly rounded up for alignment. You can also check the Wikipedia page on YUV420 and read more details on fourcc.org.