iOS learning -- How to capture video frames from the camera as images using AV Foundation

Translated: March 29, 2012, 13:18:25

Q:  How do I capture video frames from the camera as images using AV Foundation?

A: To perform a real-time capture, first create a capture session by instantiating an AVCaptureSession object. You use an AVCaptureSession object to coordinate the flow of data from AV input devices to outputs.

Next, create an input data source that provides video data to the capture session by instantiating an AVCaptureDeviceInput object. Call addInput: to add that input to the AVCaptureSession object.

Create an output destination by instantiating an AVCaptureVideoDataOutput object, and add it to the capture session using addOutput:.

AVCaptureVideoDataOutput is used to process uncompressed frames from the video being captured. An instance of AVCaptureVideoDataOutput produces video frames you can process using other media APIs. You can access the frames with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. Use setSampleBufferDelegate:queue: to set the sample buffer delegate and the queue on which callbacks should be invoked. The delegate of an AVCaptureVideoDataOutput object must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. Use the sessionPreset property to customize the quality of the output.
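
Listing 1 below sets the sample buffer delegate to self, so the class that owns the capture session must declare conformance to the delegate protocol in its interface. A minimal sketch, assuming a view controller named MyViewController with a retained session property (the class name and property are illustrative; they are not part of the original listing):

```objc
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

// Hypothetical owner of the capture session; it adopts the sample
// buffer delegate protocol so it can receive the captured frames.
@interface MyViewController : UIViewController
        <AVCaptureVideoDataOutputSampleBufferDelegate>

// Backs the [self setSession:session] call made in Listing 1.
@property (nonatomic, retain) AVCaptureSession *session;

@end
```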

You invoke the capture session startRunning method to start the flow of data from the inputs to the outputs, and stopRunning to stop the flow.
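
Listing 1 only starts the session; a matching teardown might look like the following sketch (the method name stopCaptureSession is an assumption, as is the session property it relies on):

```objc
// Hypothetical counterpart to setupCaptureSession: stop the flow of
// data and release the session, e.g. from viewWillDisappear:.
- (void)stopCaptureSession
{
    [[self session] stopRunning];
    [self setSession:nil];
}
```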

Listing 1 shows an example of this. setupCaptureSession creates a capture session, adds a video input to provide video frames, adds an output destination to access the captured frames, then starts the flow of data from the inputs to the outputs. While the capture session is running, the captured video sample buffers are sent to the sample buffer delegate using captureOutput:didOutputSampleBuffer:fromConnection:. Each sample buffer (CMSampleBufferRef) is then converted to a UIImage in imageFromSampleBuffer.

Listing 1  Configuring a capture device to record video with AV Foundation and saving the frames as UIImage objects.

#import <AVFoundation/AVFoundation.h>

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your 
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice
                               defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
                [NSDictionary dictionaryWithObject:
                    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                     forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set 
    // minFrameDuration.
    output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

     < Add your code here that uses the image >
}

// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
      bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
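
Note that minFrameDuration on AVCaptureVideoDataOutput, used in Listing 1, was deprecated in later iOS releases. A hedged sketch of the replacement, assuming iOS 5 or later (where the setting moved to the output's capture connection):

```objc
// Cap the frame rate via the output's video connection instead of the
// deprecated output.minFrameDuration.
AVCaptureConnection *connection =
    [output connectionWithMediaType:AVMediaTypeVideo];
if (connection.isVideoMinFrameDurationSupported)
    connection.videoMinFrameDuration = CMTimeMake(1, 15);
```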

