Qt Mobility Tech Preview - Camera API
-
[quote author="Faid" date="1284634565"]I'm looking for a way to get the current frame rate of the camera, to do platform performance testing of different resolutions, but I have not found any API for this. Is it possible to do or would it be possible to add it to the camera APIs?[/quote]
If you are interested in testing the viewfinder frame rate, I'd suggest writing a simple video surface (QAbstractVideoSurface) which only collects stats and discards the video frame data, and using it as the video output.
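For example, a frame-counting surface could look roughly like this (just a sketch; StatsSurface and the once-per-second FPS print are illustrative, not an existing API):
@
#include <QAbstractVideoSurface>
#include <QVideoFrame>
#include <QTime>
#include <QDebug>

class StatsSurface : public QAbstractVideoSurface
{
public:
    StatsSurface(QObject *parent = 0)
        : QAbstractVideoSurface(parent), m_frames(0)
    {
        m_clock.start();
    }

    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType) const
    {
        // accept the common uncompressed formats; extend as needed
        return QList<QVideoFrame::PixelFormat>()
                << QVideoFrame::Format_RGB32
                << QVideoFrame::Format_UYVY
                << QVideoFrame::Format_YUV420P;
    }

    bool present(const QVideoFrame &)
    {
        // just count frames and report an approximate FPS about once a second
        ++m_frames;
        if (m_clock.elapsed() >= 1000) {
            qDebug() << "viewfinder fps:" << m_frames * 1000.0 / m_clock.elapsed();
            m_frames = 0;
            m_clock.restart();
        }
        return true;
    }

private:
    QTime m_clock;
    int m_frames;
};
@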
-
[quote author="dmytro.poplavskiy" date="1284695555"][quote author="Faid" date="1284634565"]I'm looking for a way to get the current frame rate of the camera, to do platform performance testing of different resolutions, but I have not found any API for this. Is it possible to do or would it be possible to add it to the camera APIs?[/quote]
If you are interested in testing the viewfinder frame rate, I'd suggest writing a simple video surface (QAbstractVideoSurface) which only collects stats and discards the video frame data, and using it as the video output.
[/quote]
Similar to the multimedia/videowidget example? How do I use QAbstractVideoSurface with QCamera, though? It requires a QVideoWidget or a QGraphicsVideoItem to draw to, so I tried subclassing QVideoWidget (making it own a QAbstractVideoSurface, to which I would then pass "paint" events) and passing that on to QCamera. When the camera draws, my overridden paintEvent does not get called. I also put breakpoints in QVideoWidget::paintEvent, and it doesn't seem to be called either when the camera draws to the screen. The camera output shows up in the widget, though, so I don't understand where the drawing occurs. I get the feeling I'm approaching this from the wrong angle.
-
It's possible not to use QGraphicsVideoItem or QVideoWidget, but request QVideoRendererControl yourself and pass your video surface for the camera to render frames to.
The code may look like:
@
class VideoSurface : public QAbstractVideoSurface {
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType handleType) const
    {
        // return the list of formats you can handle
    }

    bool present(const QVideoFrame &frame)
    {
        // process the frame, or just count them
    }
};

QVideoRendererControl *control =
        camera->service()->requestControl<QVideoRendererControl *>();
if (control)
    control->setSurface(yourSurface);
@
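For completeness, wiring it up could look roughly like this (a sketch; it assumes a default QCamera and that yourSurface is an instance of the surface class above):
@
QCamera *camera = new QCamera;
VideoSurface *yourSurface = new VideoSurface;

QVideoRendererControl *control =
        camera->service()->requestControl<QVideoRendererControl *>();
if (control)
    control->setSurface(yourSurface);   // frames are now delivered to present()

camera->start();
@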
-
[quote author="dmytro.poplavskiy" date="1285028529"]It's possible not to use QGraphicsVideoItem or QVideoWidget, but request QVideoRendererControl yourself and pass your video surface for camera to render frames to.
The code may looks like:
@
class VideoSurface : public QAbstractVideoSurface {
QList<QVideoFrame::PixelFormat> supportedPixelFormats(
QAbstractVideoBuffer::HandleType handleType ) const
{
//return the list of formats you can handle.
}bool present(const QVideoFrame &frame)
{ // process the frame, or just count them.
}
};QVideoRendererControl *control = camera->service()->requestControl<QVideoRendererControl>();
if (control) control->setSurface(yourSurface);
@
[/quote]
Thanks Dmytro, that worked very well!
-
Hi Guys,
I am using QCamera and displaying the viewport on a QMainWindow. But if the camera is on and my application goes to the background and then comes back to the foreground, only the camera viewport is visible. The rest of my app does not get repainted, although the buttons are still there and I can perform operations.
How can I solve this repaint problem while the camera is on? Thanks,
ABag -
Hi Dmytro,
I'm new to Qt, and I'm confused about how things actually work from Qt Mobility down to the camera hardware.
Let's say there are two kinds of cameras:
- Camera with an encoder: the camera output is compressed data (H.264, MPEG-4, H.263, ...).
- Camera without an encoder: the camera output is raw data (YUV, RGB, ...).
This is my understanding of how Qt works:
In the first case, it needs to decompress the video data and then display it.
To decompress the video data, it needs software decoders (provided by GStreamer plugins).
In the second case, the raw data needs to be encoded. The compressed data is used for two purposes:
- sending it to others (e.g. in a video conference, we send compressed data instead of raw data);
- decompressing it to display on the local computer.
I hope that my thinking above is wrong (in the second case), because I think that if we just want to preview video from a camera whose output is raw, it is unnecessary to compress/decompress the raw data; we can display the raw data directly. Could you please help me understand how it works from QtMobility --> GStreamer --> camera plug-in --> V4L2 --> camera hardware?
Thank you very much! -
Hi,
QCamera relies on the camerabin GStreamer element; I'm not sure it currently supports the first case with compressed viewfinder frames.
In the second case there is no reason to compress/decompress frames for display; video frames from the camera (in uncompressed YUV or RGB formats) are displayed directly, and they are only encoded if video recording is active.
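For example, the second case with the QtMobility classes could look roughly like this (a sketch assuming a default camera, not a complete program):
@
#include <QCamera>
#include <QVideoWidget>
#include <QMediaRecorder>

QCamera *camera = new QCamera;              // delivers uncompressed YUV/RGB frames
QVideoWidget *viewfinder = new QVideoWidget;
camera->setViewfinder(viewfinder);          // frames go straight to the display, no encoding
viewfinder->show();
camera->start();

// encoding only enters the picture once recording starts
camera->setCaptureMode(QCamera::CaptureVideo);
QMediaRecorder *recorder = new QMediaRecorder(camera);
recorder->record();                         // camerabin now encodes the frames as well
@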
-
Hi Dmytro,
I am trying to get image frames from the camera on the N950.
I know that I should reimplement QAbstractVideoSurface, so I did that as follows:
@VideoSurface::VideoSurface(QWidget* widget, VideoIF* target, QObject* parent)
: QAbstractVideoSurface(parent)
{
m_targetWidget = widget;
m_target = target;
m_imageFormat = QImage::Format_Invalid;
orientationSensor=new QOrientationSensor();
m_orientation = ORIENTATION_LANDSCAPE;
orientationSensor->start();
}

VideoSurface::~VideoSurface()
{
    orientationSensor->stop();
    delete orientationSensor;
}
bool VideoSurface::start(const QVideoSurfaceFormat &format)
{
m_videoFormat = format;
const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
    const QSize size = format.frameSize();

    if (imageFormat != QImage::Format_Invalid && !size.isEmpty()) {
        m_imageFormat = imageFormat;
        QAbstractVideoSurface::start(format);
        return true;
    } else {
        return false;
    }
}
unsigned char* VideoSurface::createGrayscaleBuffer(const QImage &dstImage, const int dWidth, const int dHeight)const
{
unsigned char* grayscaledBuffer = new unsigned char [dWidth*dHeight];
int offset = 0;
// default QT grayscale
    for (int y = 0; y < dHeight; y++)
        for (int x = 0; x < dWidth; x++)
            grayscaledBuffer[offset++] = qGray(dstImage.pixel(x, y));

    return grayscaledBuffer;
}
bool VideoSurface::present(const QVideoFrame &frame)
{
    m_frame = frame;

    // number of frames received for display
    numFrames++;

    if (surfaceFormat().pixelFormat() != m_frame.pixelFormat()
            || surfaceFormat().frameSize() != m_frame.size()) {
        stop();
        return false;
    } else {
        m_frame.map(QAbstractVideoBuffer::ReadOnly);
        iWidth = m_frame.width();
        iHeight = m_frame.height();
        int line = m_frame.bytesPerLine();
        // build QImage from frame
        m_completeImage = QImage(m_frame.bits(), iWidth, iHeight, line,
                                 m_frame.imageFormatFromPixelFormat(m_frame.pixelFormat()));
        m_frame.unmap();

        QImage dstImage = scaleImage(m_completeImage);
        int dHeight = dstImage.height();
        int dWidth = dstImage.width();
        unsigned char* grayscaledBuffer = createGrayscaleBuffer(dstImage, dWidth, dHeight);

        m_orientation = ORIENTATION_CCW;
        QOrientationReading* reading = orientationSensor->reading();
        if (orientationSensor->isActive()) {
            if (reading->orientation() == QOrientationReading::RightUp) {
                // rotate with -90 (ccw)
                m_orientation = ORIENTATION_LANDSCAPE;
            }
        }

        // do some image processing work
        //////////////

        delete[] grayscaledBuffer;

        // convert points back to original size
        double iWi = (double)iWidth / dWidth;
        double iHi = (double)iHeight / dHeight;
        // should keep aspect ratio
        iWi = iHi = qMin(iWi, iHi);

        // enlarge faces
        int marginX, marginY;

        m_target->updateVideo();
        return true;
    }
}
QImage VideoSurface::scaleImage(const QImage & srcImage)const
{
QImage dstImage;
if(MAX_DIM < iWidth || MAX_DIM < iHeight){
if(iWidth > iHeight)
dstImage = srcImage.scaledToWidth(MAX_DIM, Qt::SmoothTransformation);
else
dstImage = srcImage.scaledToHeight(MAX_DIM, Qt::SmoothTransformation);
}
else
dstImage = srcImage;
return dstImage;
}

void VideoSurface::paint(QPainter *painter)
{
if (m_frame.map(QAbstractVideoBuffer::ReadOnly)) {
QImage image(
m_frame.bits(),
m_frame.width(),
m_frame.height(),
m_frame.bytesPerLine(),
m_imageFormat);
        QRect r = m_targetWidget->rect();
        int shiftX = qAbs(r.size().width() - image.size().width()) / 2;
        int shiftY = qAbs(r.size().height() - image.size().height()) / 2;
        QPoint centerPic(shiftX, shiftY);

        if (!image.isNull()) {
            painter->drawImage(centerPic, image);
            // draw faces
        }

        m_frame.unmap();
    }
}
QList<QVideoFrame::PixelFormat> VideoSurface::supportedPixelFormats(
        QAbstractVideoBuffer::HandleType handleType) const
{
    if (handleType == QAbstractVideoBuffer::NoHandle) {
        return QList<QVideoFrame::PixelFormat>()
                << QVideoFrame::Format_RGB32
                << QVideoFrame::Format_ARGB32
                << QVideoFrame::Format_ARGB32_Premultiplied
                << QVideoFrame::Format_RGB565
                << QVideoFrame::Format_UYVY
                << QVideoFrame::Format_RGB555;
    } else {
        return QList<QVideoFrame::PixelFormat>();
    }
}@
Note that I added QVideoFrame::Format_UYVY to the supported pixel formats.
But the problem now is that I always get the following error:
@Failed to start video surface
CameraBin error: "Internal data flow error."@
I think the problem is that I should convert the frame from UYVY to RGB, but I don't know how to do this.
Could you please help me?
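For reference, a manual UYVY-to-RGB conversion might look roughly like this (a rough sketch; uyvyToRgb is just an illustrative helper, assuming full-range BT.601-style coefficients and an even frame width):
@
#include <QImage>
#include <QtGlobal>

// Convert one UYVY frame (macropixels of U0 Y0 V0 Y1, i.e. 2 bytes per pixel)
// into an RGB32 QImage.
QImage uyvyToRgb(const uchar *src, int width, int height, int bytesPerLine)
{
    QImage out(width, height, QImage::Format_RGB32);
    for (int row = 0; row < height; ++row) {
        const uchar *in = src + row * bytesPerLine;
        QRgb *dst = reinterpret_cast<QRgb *>(out.scanLine(row));
        for (int col = 0; col < width; col += 2) {
            int u  = in[0] - 128;
            int y0 = in[1];
            int v  = in[2] - 128;
            int y1 = in[3];
            in += 4;

            int rOff = int(1.403 * v);
            int gOff = int(0.344 * u + 0.714 * v);
            int bOff = int(1.770 * u);

            dst[col]     = qRgb(qBound(0, y0 + rOff, 255),
                                qBound(0, y0 - gOff, 255),
                                qBound(0, y0 + bOff, 255));
            dst[col + 1] = qRgb(qBound(0, y1 + rOff, 255),
                                qBound(0, y1 - gOff, 255),
                                qBound(0, y1 + bOff, 255));
        }
    }
    return out;
}
@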