iOS Learning -- Camera Capture

Original post, March 31, 2012, 10:43:10

1. AVCaptureSession:

    Used to organize the connections between the device, inputs, and outputs, similar to connecting filters in DirectShow. Once an input and an output are connected, after calling start, data flows from the input to the output.

    The main classes involved:

    a) AVCaptureDevice, which represents a capture device, i.e. the camera device.

    b) AVCaptureInput

    c) AVCaptureOutput

Inputs and outputs are not one-to-one; for example, a single video output can draw on both video and audio inputs at the same time.

 

Switching between the front and back cameras:

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];
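The frontFacingCameraDeviceInput and backFacingCameraDeviceInput used above must be created beforehand. A minimal sketch of how they might be obtained (the variable names are illustrative; devicesWithMediaType: was the device-lookup API of this era):

```objc
AVCaptureDeviceInput *frontFacingCameraDeviceInput = nil;
AVCaptureDeviceInput *backFacingCameraDeviceInput = nil;
// Walk all video-capable devices and build a device input for each camera position.
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    NSError *error = nil;
    if (device.position == AVCaptureDevicePositionFront) {
        frontFacingCameraDeviceInput =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    } else if (device.position == AVCaptureDevicePositionBack) {
        backFacingCameraDeviceInput =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    }
}
```

Wrapping the removeInput:/addInput: pair in beginConfiguration/commitConfiguration makes the swap atomic: the session applies both changes together when commitConfiguration is called, so the user never sees a session with no camera input.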

 Adding a capture input:

To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete
subclass of the abstract AVCaptureInput class). The capture device input manages the device’s ports.
NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}

Adding outputs, and the categories of output:

To get output from a capture session, you add one or more outputs. An output is an instance of a concrete
subclass of AVCaptureOutput; you use:
● AVCaptureMovieFileOutput to output to a movie file
● AVCaptureVideoDataOutput if you want to process frames from the video being captured
● AVCaptureAudioDataOutput if you want to process the audio data being captured
● AVCaptureStillImageOutput if you want to capture still images with accompanying metadata

You add outputs to a capture session using addOutput:. You check whether a capture output is compatible
with an existing session using canAddOutput:. You can add and remove outputs as you want while the
session is running.
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}

 

Saving a video file, i.e. adding a movie file output:

You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of the recording or the maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.
AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
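Once configured, recording is started by pointing the output at a file URL and supplying a delegate. A sketch (the output path is illustrative, and self must adopt AVCaptureFileOutputRecordingDelegate):

```objc
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"];
NSURL *outputURL = [NSURL fileURLWithPath:path];
[aMovieFileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];

// Required delegate method: invoked when recording stops for any reason.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    if (error) {
        // Hitting maxRecordedDuration or minFreeDiskSpaceLimit also surfaces
        // here as an error; inspect it before treating the file as unusable.
    }
}
```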

 

Processing preview video frame data (i.e. the per-frame viewfinder data), which can feed later advanced processing such as face detection:

An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You should not pass the queue returned by dispatch_get_current_queue, since there is no guarantee as to which thread the current queue is running on. You can use the queue to modify the priority given to delivering and processing the video frames.

When processing frame data, there must be limits on both the image size and the processing time; if processing takes too long, the underlying sensor will stop delivering data to the preview layer and to this callback.

You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power.
You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AV Foundation will stop delivering frames, not only to your delegate but also to other outputs such as a preview layer.
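Putting this together, a sketch of wiring up a video data output (the queue label is arbitrary, and self must adopt AVCaptureVideoDataOutputSampleBufferDelegate):

```objc
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// Drop late frames instead of queueing them, so the delegate never falls behind.
videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
// A serial queue guarantees frames reach the delegate in capture order.
dispatch_queue_t queue = dispatch_queue_create("com.example.videoDataQueue", NULL);
[videoDataOutput setSampleBufferDelegate:self queue:queue];

// Delegate callback: one invocation per captured frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Process pixelBuffer quickly; do not retain sampleBuffer beyond this call,
    // or frame delivery (including the preview layer) will stall.
}
```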

 

The still image capture flow:

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings =
    [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];

Different formats are supported, including producing a JPEG stream directly.

If you want to capture a JPEG image, you should typically not specify your own compression format. Instead,
you should let the still image output do the compression for you, since its compression is hardware-accelerated.
If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to
get an NSData object without re-compressing the data, even if you modify the image’s metadata.
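With the output configured for JPEG as above, triggering a capture might look like this sketch (error handling elided):

```objc
AVCaptureConnection *videoConnection =
    [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer) {
            // Already hardware-compressed JPEG; no re-compression is performed.
            NSData *jpegData = [AVCaptureStillImageOutput
                jpegStillImageNSDataRepresentation:imageSampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:jpegData];
            // Display or save the image here.
        }
    }];
```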

 

Displaying the camera preview:

You can provide the user with a preview of what’s being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see the Core Animation Programming Guide). You don’t need any outputs to show the preview.

AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer =
    [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
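The newly added layer has no size of its own, so in practice it is sized to its host layer; a small addition (the videoGravity choice here is one common option, not a requirement):

```objc
captureVideoPreviewLayer.frame = viewLayer.bounds;
// Fill the layer completely, cropping if the aspect ratios differ.
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
```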

In general, the preview layer behaves like any other CALayer object in the render tree (see the Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would with any layer. One difference is that you may need to set the layer’s orientation property to specify how it should rotate images coming from the camera. In addition, on iPhone 4 the preview layer supports mirroring (this is the default when previewing the front-facing camera).

 
