Usage Notes and Common Issues for QCamera

Common QCamera Issues

Calling QCamera's start() and stop() under Qt 5.15.2 may print the errors below. The likely cause is that a feature not supported by the camera driver was requested; normal use of the main functionality is not affected:
Unsupported media type: "{47504A4D-0000-0010-8000-00AA00389B71}"   (repeated five times)
Unsupported media type: "{32595559-0000-0010-8000-00AA00389B71}"   (repeated five times)
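
The GUIDs in these messages appear to be DirectShow/Media Foundation subtype identifiers whose first DWORD encodes a FourCC: {47504A4D-...} corresponds to MJPG and {32595559-...} to YUY2, so the backend is simply reporting camera pixel formats it does not handle. One way to avoid requesting a format the backend cannot handle is to inspect what the device reports before applying any viewfinder settings. The following is a minimal sketch; the helper name dumpSupportedViewfinderSettings is made up for illustration:

 // Query the viewfinder configurations the backend actually supports (Qt 5 QtMultimedia).
 #include <QCamera>
 #include <QCameraViewfinderSettings>
 #include <QDebug>

 void dumpSupportedViewfinderSettings(QCamera *camera)
 {
     // The camera must be loaded before capabilities can be queried; on some backends
     // load() completes asynchronously, so waiting for QCamera::LoadedStatus via
     // statusChanged() is more robust than querying immediately afterwards.
     camera->load();

     const QList<QCameraViewfinderSettings> supported = camera->supportedViewfinderSettings();
     for (const QCameraViewfinderSettings &s : supported) {
         qDebug() << s.resolution() << s.pixelFormat()
                  << s.minimumFrameRate() << s.maximumFrameRate();
     }
 }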

Related Classes

QCameraInfo (QT += multimedia): provides the camera description, device name, orientation, front/back position and the list of available devices, used to distinguish and identify cameras.
QCamera (QT += multimedia): the main device interface; gives access to the capture mode, control commands, state, error information, exposure, focus, image processing, viewfinder, supported locks, viewfinder frame rate, pixel format, viewfinder resolution, and so on.
QCameraExposure (QT += multimedia): gets or sets exposure-related parameters, mainly controlling the flash, the sensor signal gain and the light path, which directly affect the RAW image: aperture, flash, exposure, ISO sensitivity, shutter speed, metering.
QCameraFocus (QT += multimedia): gets or sets focus and zoom parameters: optical zoom and digital zoom.
QCameraViewfinder (QT += multimediawidgets): provides the viewfinder display widget.
QCameraViewfinderSettings (QT += multimedia): settings for the viewfinder widget: minimum/maximum frame rate, pixel aspect ratio, pixel color format, resolution, and so on. Changes only take effect once they are applied with QCamera::setViewfinderSettings(), as shown in the sketch after this list.
QCameraImageProcessing (QT += multimedia): applies image processing to the sensor (RAW) output (the processed image is usually in a format such as JPEG and is fed to the viewfinder); it affects the brightness, contrast, saturation, filters, white balance and sharpening of the processed output and has no effect on the RAW image.
QCameraImageCapture (QT += multimedia): captures a single frame from the viewfinder. Controls the buffer color format, the capture destination (buffer/file), the encoding settings, and exposes the list of supported configuration parameters.
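
As a quick illustration of how these classes fit together, here is a minimal sketch (the function name and the chosen resolution/pixel format are illustrative assumptions) that fills in a QCameraViewfinderSettings object and applies it with QCamera::setViewfinderSettings():

 // Apply viewfinder settings; requires QT += multimedia multimediawidgets.
 #include <QCamera>
 #include <QCameraViewfinder>
 #include <QCameraViewfinderSettings>
 #include <QVideoFrame>

 void setupViewfinder(QCamera *camera, QCameraViewfinder *viewfinder)
 {
     camera->setViewfinder(viewfinder);

     QCameraViewfinderSettings settings;
     settings.setResolution(640, 480);                   // illustrative resolution
     settings.setPixelFormat(QVideoFrame::Format_YUYV);  // must be a format the device supports
     settings.setMinimumFrameRate(15.0);
     settings.setMaximumFrameRate(30.0);

     camera->setViewfinderSettings(settings);            // changes take effect through this call
     camera->start();
 }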

As for using QCamera, a careful read of the official documentation will get you most of the way there; the English original just takes some patience.

Basic camera working principles

The Qt Multimedia API provides a number of camera related classes, so you can access images and videos from mobile device cameras or webcameras. There are both C++ and QML APIs for common tasks.

  • Camera Features

In order to use the camera classes a quick overview of the way a camera works is needed. If you’re already familiar with this, you can skip ahead to Camera implementation details.

  • The Lens Assembly

At one end of the camera assembly is the lens assembly (one or more lenses, arranged to focus light onto the sensor). The lenses themselves can sometimes be moved to adjust things like focus and zoom, or they might be fixed in an arrangement to give a good balance between objects in focus, and cost.
Some lens assemblies can be adjusted automatically so that an object at different distances from the camera can be kept in focus. This is usually done by measuring how sharp a particular area of the frame is, and adjusting the lens assembly until it is maximally sharp. In some cases the camera will always use the center of the frame for this. Other cameras may also allow the focus region to be specified (for "touch to focus" or "face zoom" features).

  • The Sensor

Once light arrives at the sensor, it gets converted into digital pixels. This process can depend on a number of things but ultimately comes down to two things - how long the conversion is allowed to take, and how bright the light is. The longer a conversion can take, the better the quality. Using a flash can assist with letting more light hit the sensor, allowing it to convert pixels faster, giving better quality for the same amount of time. Conversely, allowing a longer conversion time can let you take photos in darker environments, as long as the camera is steady.

  • Image Processing

After the image has been captured by the sensor, the camera firmware performs various image processing tasks on it to compensate for various sensor characteristics, current lighting, and desired image properties. Faster sensor pixel conversion times tend to introduce digital noise, so some amount of image processing can be done to remove this based on the camera sensor settings.
The color of the image can also be adjusted at this stage to compensate for different light sources - fluorescent lights and sunlight give very different appearances to the same object, so the image can be adjusted based on the white balance of the picture (due to the different color temperatures of the light sources).
Some forms of “special effects” can also be performed at this stage. Black and white, sepia, or “negative” style images can be produced.

  • Recording for Posterity

Finally, once a perfectly focused, exposed and processed image has been created, it can be put to good use. Camera images can be further processed by application code (for example, to detect barcodes, or to stitch together a panoramic image), or saved to a common format like JPEG, or used to create a movie. Many of these tasks have classes to assist them.

Camera Implementation Details

  • Detecting and Selecting Camera

Before using the camera APIs, you should check that a camera is available at runtime. If there is none, you could for example disable camera related features in your application. To perform this check in C++, use the QCameraInfo::availableCameras() function, as shown in the example below:

 bool checkCameraAvailability()
 {
     if (QCameraInfo::availableCameras().count() > 0)
         return true;
     else
         return false;
 }

In QML, use the QtMultimedia.availableCameras property:

 Item {
     property bool isCameraAvailable: QtMultimedia.availableCameras.length > 0
 }

After determining whether a camera is available, access it using the QCamera class in C++ or the Camera type in QML.
When multiple cameras are available, you can specify which one to use.
In C++:

 const QList<QCameraInfo> cameras = QCameraInfo::availableCameras();
 for (const QCameraInfo &cameraInfo : cameras) {
     if (cameraInfo.deviceName() == "mycamera")
         camera = new QCamera(cameraInfo);
 }

In QML, you can set the Camera deviceId property. All available IDs can be retrieved from QtMultimedia.availableCameras:

 Camera {
     deviceId: QtMultimedia.availableCameras[0].deviceId
 }

You can also select the camera by its physical position on the system rather than its device ID. This is useful on mobile devices, which often have a front-facing and a back-facing camera.
In C++:

 camera = new QCamera(QCamera::FrontFace);

In QML:

 Camera {
     position: Camera.FrontFace
 }

If neither device ID nor position is specified, the default camera will be used. On desktop platforms, the default camera is set by the user in the system settings. On a mobile device, the back-facing camera is usually the default camera. You can get information about the default camera using QCameraInfo::defaultCamera() in C++ or QtMultimedia.defaultCamera in QML.
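
For example, in C++ the default device can be wrapped in a QCameraInfo before constructing the camera (a minimal sketch):

 const QCameraInfo defaultInfo = QCameraInfo::defaultCamera();
 if (!defaultInfo.isNull())
     camera = new QCamera(defaultInfo);   // equivalent to constructing QCamera with no arguments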

  • Viewfinder

While not strictly necessary, it’s often useful to be able to see what the camera is pointing at. Most digital cameras allow an image feed from the camera sensor at a lower resolution (usually up to the size of the display of the camera) so you can compose a photo or video, and then switch to a slower but higher resolution mode for capturing the image.
Depending on whether you’re using QML or C++, you can do this in multiple ways. In QML, you can use Camera and VideoOutput together to show a simple viewfinder:

 VideoOutput {
     source: camera

     Camera {
         id: camera
         // You can adjust various settings in here
     }
 }

In C++, your choice depends on whether you are using widgets, or QGraphicsView. The QCameraViewfinder class is used in the widgets case, and QGraphicsVideoItem is useful for QGraphicsView.

 camera = new QCamera;
 viewfinder = new QCameraViewfinder;
 camera->setViewfinder(viewfinder);
 viewfinder->show();

 camera->start(); // to start the viewfinder

For advanced usage (like processing viewfinder frames as they come, to detect objects or patterns), you can also derive from QAbstractVideoSurface and set that as the viewfinder for the QCamera object. In this case you will need to render the viewfinder image yourself.

 camera = new QCamera;
 mySurface = new MyVideoSurface;
 camera->setViewfinder(mySurface);

 camera->start();
 // MyVideoSurface::present(..) will be called with viewfinder frames
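
MyVideoSurface is not part of Qt; as an assumption about what such a subclass could look like, here is a minimal sketch that advertises a couple of RGB formats and maps each incoming frame:

 // Hypothetical MyVideoSurface: a minimal QAbstractVideoSurface subclass.
 #include <QAbstractVideoSurface>
 #include <QVideoFrame>

 class MyVideoSurface : public QAbstractVideoSurface
 {
     Q_OBJECT
 public:
     QList<QVideoFrame::PixelFormat> supportedPixelFormats(
             QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
     {
         Q_UNUSED(type);
         // Formats this surface can consume; the backend picks one of them.
         return { QVideoFrame::Format_RGB32, QVideoFrame::Format_ARGB32 };
     }

     bool present(const QVideoFrame &frame) override
     {
         // Called for every viewfinder frame; map it to access the pixel data.
         QVideoFrame copy(frame);
         if (copy.map(QAbstractVideoBuffer::ReadOnly)) {
             // ... inspect copy.bits() / copy.bytesPerLine() here (e.g. barcode detection) ...
             copy.unmap();
         }
         return true;
     }
 };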

On mobile devices, the viewfinder image might not always be in the orientation you would expect. The camera sensors on these devices are often mounted in landscape while the natural orientation of the screen is portrait. This results in the image appearing sideways or inverted depending on the device orientation. In order to reflect on screen what the user actually sees, you should make sure the viewfinder frames are always rotated to the correct orientation, taking into account the camera sensor orientation and the current display orientation.

 // Assuming a QImage has been created from the QVideoFrame that needs to be presented
 QImage videoFrame;
 QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation

 // Get the current display orientation
 const QScreen *screen = QGuiApplication::primaryScreen();
 const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());

 int rotation;
 if (cameraInfo.position() == QCamera::BackFace) {
     rotation = (cameraInfo.orientation() - screenAngle) % 360;
 } else {
     // Front position, compensate the mirror
     rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
 }

 // Rotate the frame so it always shows in the correct orientation
 videoFrame = videoFrame.transformed(QTransform().rotate(rotation));

  • Still Images

After setting up a viewfinder and finding something photogenic, to capture an image we need to initialize a new QCameraImageCapture object. All that is then needed is to start the camera, lock it so that things are in focus and the settings are not different from the viewfinder while the image capture occurs, capture the image, and finally unlock the camera ready for the next photo.

 imageCapture = new QCameraImageCapture(camera);

 camera->setCaptureMode(QCamera::CaptureStillImage);
 camera->start(); // Viewfinder frames start flowing

 //on half pressed shutter button
 camera->searchAndLock();

 //on shutter button pressed
 imageCapture->capture();

 //on shutter button released
 camera->unlock();
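
To actually receive the captured image, you can connect to QCameraImageCapture's signals; a minimal sketch:

 // imageCaptured() delivers an in-memory preview, imageSaved() reports the file path.
 QObject::connect(imageCapture, &QCameraImageCapture::imageCaptured,
                  [](int id, const QImage &preview) {
     Q_UNUSED(id);
     Q_UNUSED(preview);
     // e.g. show the preview in the UI
 });

 QObject::connect(imageCapture, &QCameraImageCapture::imageSaved,
                  [](int id, const QString &fileName) {
     qDebug() << "Image" << id << "saved to" << fileName;
 });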

  • Movies

Previously we saw code that allowed the capture of a still image. Recording video requires the use of a QMediaRecorder object.
To record video we need to create a camera object as before but this time as well as creating a viewfinder, we will also initialize a media recorder object.

 camera = new QCamera;
 recorder = new QMediaRecorder(camera);

 camera->setCaptureMode(QCamera::CaptureVideo);
 camera->start();

 //on shutter button pressed
 recorder->record();

 // sometime later, or on another press
 recorder->stop();

Signals from the mediaRecorder can be connected to slots to react to changes in the state of the recorder or error events. Recording starts when the recorder's record() function is called; this causes the stateChanged() signal to be emitted. The recording process can be controlled with the record(), stop() and setMuted() slots in QMediaRecorder.
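
A minimal sketch of such connections, using the recorder pointer from the snippet above:

 // React to recorder state changes and errors.
 QObject::connect(recorder, &QMediaRecorder::stateChanged,
                  [](QMediaRecorder::State state) {
     qDebug() << "Recorder state:" << state;
 });

 QObject::connect(recorder, QOverload<QMediaRecorder::Error>::of(&QMediaRecorder::error),
                  [recorder](QMediaRecorder::Error) {
     qDebug() << "Recorder error:" << recorder->errorString();
 });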

Controlling the Imaging Pipeline

Now that the basics of capturing images or movies are covered, there are a number of ways to control the imaging pipeline to implement some interesting techniques. As explained earlier, several physical and electronic elements combine to determine the final images, and you can control them with different classes.

  • Focus and Zoom

Focusing (and zoom) is managed primarily by the QCameraFocus class. QCameraFocus allows the developer to set the general policy by means of the enums for the FocusMode and the FocusPointMode. FocusMode deals with settings such as AutoFocus, ContinuousFocus and InfinityFocus, whereas FocusPointMode deals with the various focus zones within the view that are used for autofocus modes. FocusPointMode has support for face recognition (where the camera supports it), center focus and a custom focus where the focus point can be specified.
For camera hardware that supports it, Macro focus allows imaging of things that are close to the sensor. This is useful in applications like barcode recognition, or business card scanning.
In addition to focus, QCameraFocus allows you to control any available optical or digital zoom. In general, optical zoom is higher quality, but more expensive to manufacture, so the available zoom range might be limited (or fixed to unity).
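
A minimal sketch of driving these controls through QCameraFocus (the focus point and zoom factors are illustrative values):

 // Continuous autofocus on a custom point, plus optical/digital zoom.
 QCameraFocus *focus = camera->focus();

 focus->setFocusMode(QCameraFocus::ContinuousFocus);
 focus->setFocusPointMode(QCameraFocus::FocusPointCustom);
 focus->setCustomFocusPoint(QPointF(0.5, 0.5));   // normalized viewfinder coordinates

 // Prefer optical zoom; use digital zoom only beyond the optical range.
 qreal optical = qMin<qreal>(2.0, focus->maximumOpticalZoom());
 qreal digital = qMin<qreal>(1.5, focus->maximumDigitalZoom());
 focus->zoomTo(optical, digital);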

  • Exposure, Aperture, Shutter Speed and Flash

There are a number of settings that affect the amount of light that hits the camera sensor, and hence the quality of the resulting image. The QCameraExposure class allows you to adjust these settings. You can use this class to implement some techniques like High Dynamic Range (HDR) photos by locking the exposure parameters (with QCamera::searchAndLock()), or motion blur by setting slow shutter speeds with small apertures.
The main settings for automatic image taking are the exposure mode and flash mode. Several other settings (aperture, ISO setting, shutter speed) are usually managed automatically but can also be overridden if desired.
You can also adjust the QCameraExposure::meteringMode() to control which parts of the camera frame to measure exposure at. Some camera implementations also allow you to specify a specific point that should be used for exposure metering - this is useful if you can let the user touch or click on an interesting part of the viewfinder, and then use this point so that the image exposure is best at that point.
Finally, you can control the flash hardware (if present) using this class. In some cases the hardware may also double as a torch (typically when the flash is LED based, rather than a xenon or other bulb). See also Torch for an easy to use API for torch functionality.
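
A minimal sketch of the corresponding QCameraExposure calls (the concrete values are illustrative and the manual overrides only apply where the hardware supports them):

 QCameraExposure *exposure = camera->exposure();

 exposure->setExposureMode(QCameraExposure::ExposureNight);
 exposure->setFlashMode(QCameraExposure::FlashOff);
 exposure->setMeteringMode(QCameraExposure::MeteringSpot);
 exposure->setSpotMeteringPoint(QPointF(0.5, 0.5));   // normalized coordinates

 // Manual overrides where supported:
 exposure->setManualIsoSensitivity(400);
 exposure->setManualShutterSpeed(1.0 / 30.0);         // seconds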

  • Image Processing

The QCameraImageProcessing class lets you adjust the image processing part of the pipeline. This includes the white balance (or color temperature), contrast, saturation, sharpening and denoising. Most cameras support automatic settings for all of these, so you shouldn’t need to adjust them unless the user wants a specific setting.
If you’re taking a series of images (for example, to stitch them together into a panoramic image), you should lock the image processing settings with QCamera::searchAndLock(QCamera::LockWhiteBalance) so that all the images taken appear similar.
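
A minimal sketch of adjusting this part of the pipeline through QCameraImageProcessing (the values are illustrative; 0 leaves the default in place):

 QCameraImageProcessing *processing = camera->imageProcessing();

 if (processing->isAvailable()) {
     processing->setWhiteBalanceMode(QCameraImageProcessing::WhiteBalanceFluorescent);
     processing->setContrast(0.1);
     processing->setSaturation(0.2);
     processing->setSharpeningLevel(0.0);
     processing->setDenoisingLevel(-0.3);   // parameters typically range from -1.0 to 1.0
 }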

  • Canceling Asynchronous Operations

Various operations such as image capture and auto focusing occur asynchronously. These operations can often be canceled by the start of a new operation as long as this is supported by the camera. For image capture, the operation can be canceled by calling cancelCapture(). For autofocus, auto-exposure or white balance, cancellation can be done by calling QCamera::unlock() with the corresponding lock type, for example QCamera::unlock(QCamera::LockFocus).
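
For example (a minimal sketch reusing the camera and imageCapture objects from the earlier snippets):

 imageCapture->cancelCapture();         // abort a pending still-image capture
 camera->unlock(QCamera::LockFocus);    // release only the focus lock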
