QImage::pixel returns the same RGB values for entire image on iOS



  • Hi!

    I have a QVideoWidget from which I grab a pixmap and convert it to a QImage. When I compile on my desktop Mac and inspect the RGB values, I get reasonable values that match the actual image. When I compile for iOS, however, all RGB values are identical, e.g. (239, 235, 231, 255), for every pixel.

    Does anybody know what's going on?

    image = videoWidget->grab().toImage();
    ...
    for ( int row = 0; row < image.height(); ++row ) {
        for ( int col = 0; col < image.width(); ++col ) {
            QColor clrCurrent( image.pixel( col, row ) );
            qDebug() << "Pixel at [" << col << "," << row << "] contains color ("
                     << clrCurrent.red() << ", "
                     << clrCurrent.green() << ", "
                     << clrCurrent.blue() << ", "
                     << clrCurrent.alpha() << ").";
        }
    }

  • Lifetime Qt Champion

    Hi,

    Silly question, but are you sure that image is a valid image?
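
    For example, a quick check along these lines (just a sketch) would show whether the grab produced anything usable at all:

    qDebug() << "null:" << image.isNull()
             << "size:" << image.size()
             << "format:" << image.format();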



  • @SGaist Hi, and thanks for responding :-) Yes, the image seems valid, since I can show it as a pixmap on a QLabel on the iPhone. Everything looks fine on screen (after I fixed the QSizePolicy as you suggested in another post): the upper part of the screen shows the video widget with a live camera feed, and the bottom shows the image grabbed from the video widget as a still picture; the live video widget and the grabbed image look exactly the same and have the same dimensions.

    Since I made the changes you suggested concerning the QSizePolicy, the situation has improved: RGB values for the middle portion of the image (along the y axis) are now reasonable and mirror what the camera is pointed at. But the RGB values (from the output of QColor clrCurrent(image.pixel(col, row));) for the upper and lower parts of the image matrix (along the y axis) are all the same.

    Is image = videoWidget->grab().toImage(); grabbing more than the actual video widget?

    On the other hand, the RGB values I'm getting might actually make sense... The actual video occupies only a smaller portion of the upper 50% of the screen (to preserve aspect ratio); the rest of the area is whitish. This fits the kind of RGB values I'm getting (e.g. (239, 235, 231, 255)) above and below the values that are reasonable and correspond to what the camera is pointed at. So, in summary, my understanding is that videoWidget->grab() captures 50% of the screen (because I added two widgets to the layout), while the actual video is smaller, maybe 30%. That's why I'm getting white in the upper and lower parts when calling videoWidget->grab().
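
    One way to sanity-check that hypothesis (just a sketch; the 30% band is only a guess at where the video actually lands inside the widget) would be to crop the grab and inspect only that region:

    QImage grabbed = videoWidget->grab().toImage();
    int bandHeight = grabbed.height() * 3 / 10;                // guessed video height
    QRect videoArea( 0, ( grabbed.height() - bandHeight ) / 2,
                     grabbed.width(), bandHeight );
    QImage cropped = grabbed.copy( videoArea );                // QImage::copy(QRect)
    qDebug() << "cropped" << cropped.size()
             << "top-left pixel" << QColor( cropped.pixel( 0, 0 ) );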

    Thanks!

    Qt code:

    #include "mainwindow.h"
    #include "ui_mainwindow.h"
    
    MainWindow::MainWindow(QWidget *parent) :
        QMainWindow(parent),
        ui(new Ui::MainWindow)
    {
    
        ui->setupUi(this);
        
        camera = new QCamera(this);
        videoWidget = new QVideoWidget(this);
        imageLabel = new QLabel(this);
        
        camera->setViewfinder(videoWidget);
        ui->verticalLayout->addWidget(videoWidget);
        ui->verticalLayout->addWidget(imageLabel);
        videoWidget->setSizePolicy(QSizePolicy::Fixed,QSizePolicy::Fixed);
        videoWidget->show();
        camera->start();
    
        QTimer::singleShot(3000, this, SLOT(captureFrame()));
    
    }
    
    void MainWindow::captureFrame(){
    
       image = videoWidget->grab().toImage();
       imageLabel->setPixmap(QPixmap::fromImage(image));
       imageLabel->setSizePolicy(QSizePolicy::Fixed,QSizePolicy::Fixed);
       
       QTimer::singleShot(3000, this, SLOT(outputInfo()));
    
    }
    
    void MainWindow::outputInfo(){
    
        qDebug()<<image;
        for (int row = 0; row < image.height(); row = row + 10){
            for (int col = 0; col < image.width(); col = col + 10){
                 QColor clrCurrent(image.pixel(col, row));
                 qDebug()<< "Pixel at [" << col << "," << row << "] contains color ("
                         << clrCurrent.red() << ", "
                         << clrCurrent.green() << ", "
                         << clrCurrent.blue() << ", "
                         << clrCurrent.alpha() << ").";
             }
        }
        camera->stop();
    }
    
    MainWindow::~MainWindow()
    {
        delete ui;
    }
    
    

  • Lifetime Qt Champion

    Why are you using grab() to get the image from the camera?



  • @SGaist I want to use pixel information in an object recognition app, passing RGB values to a feature detection algorithm. Grabbing a QImage was what I thought of first...

    Is there a better way? I saw somewhere that you can set the capture destination for QCamera to a byte buffer. The problem is that I have no idea how to get the RGB values from that point (I'm kind of new to C++ and Qt).

    Thanks :-)


  • Lifetime Qt Champion

    QVideoProbe comes to mind for that.
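
    A rough, untested sketch of how that could be wired up with Qt 5's QtMultimedia (the FrameGrabber class name and the connection snippet are just illustrative, and whether a backend supports probing varies):

    #include <QVideoProbe>
    #include <QVideoFrame>
    #include <QAbstractVideoBuffer>
    #include <QImage>
    #include <QColor>
    #include <QDebug>
    
    class FrameGrabber : public QObject
    {
        Q_OBJECT
    public slots:
        void processFrame(const QVideoFrame &frame)
        {
            QVideoFrame copy(frame);                      // shallow copy we are allowed to map
            if (!copy.map(QAbstractVideoBuffer::ReadOnly))
                return;
    
            // If the frame's pixel format maps directly onto a QImage format,
            // wrap the raw buffer in a QImage and read pixels from it.
            QImage::Format fmt =
                QVideoFrame::imageFormatFromPixelFormat(copy.pixelFormat());
            if (fmt != QImage::Format_Invalid) {
                QImage img(copy.bits(), copy.width(), copy.height(),
                           copy.bytesPerLine(), fmt);
                QColor c(img.pixel(img.width() / 2, img.height() / 2));
                qDebug() << "centre pixel:" << c.red() << c.green() << c.blue();
            }
            copy.unmap();
        }
    };
    
    // Wiring it up next to the existing camera:
    // QVideoProbe *probe = new QVideoProbe(this);
    // if (probe->setSource(camera))    // returns false if the backend can't be probed
    //     connect(probe, &QVideoProbe::videoFrameProbed,
    //             grabber, &FrameGrabber::processFrame);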



  • @SGaist Thanks! Yes, this seems like a good idea.

