iOS Real-Time Filters with AVCapture

Reposted from: http://altitudelabs.com/blog/real-time-filter/

Introduction

In this tutorial, we will create an iOS app in Objective-C that applies a vignette effect and a cold-color filter to the live camera feed and displays the result in real time.

This tutorial is divided into two parts. The first part uses the AVFoundation framework to capture camera input. The second part uses Core Image to process the captured frames and display them on screen.

Create New Project

Open Xcode and press Shift+Command+N to create a new project. Choose the iOS\Application\Single View Application template and click Next.


Then enter "RealTimeFilter" for the product name. Click Next and choose a location to save your project.


Configure Orientation

This app only supports portrait mode. Select the root of the project in the Project Navigator pane in the left sidebar to bring up the project information in the central pane. Select the project target and, under the General tab, uncheck Landscape Left and Landscape Right.


Add Libraries

Switch to the Build Phases tab. 
Expand the Link Binary With Libraries section and add the following additional libraries to your project:

  • CoreImage
  • AVFoundation
  • OpenGLES


Capture Video Using AVFoundation

AVFoundation is a powerful framework for playing and creating time-based audiovisual media. In this project, we will use it to capture video from the camera.

To capture video from the camera, we will need objects of the following classes (a compact wiring sketch follows the list):

  • AVCaptureDevice: represents an input device, such as the back camera, front camera, or microphone.
  • AVCaptureVideoDataOutput: manages the output of video frames.
  • AVCaptureDeviceInput: configures and manages the input from an AVCaptureDevice instance.
  • AVCaptureSession: coordinates the data flow from the inputs to the outputs.
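
Before diving into the individual steps, here is a compact sketch of how these four classes fit together. It is only an outline (it assumes the default back camera via defaultDeviceWithMediaType: and omits all error handling); the rest of this section builds the same pipeline properly:

// compact wiring sketch of the capture pipeline (details follow in the steps below)
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];

AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session addInput:input];    // frames flow from the camera into the session ...
[session addOutput:output];  // ... and out to the data output's delegate
[session startRunning];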

Let's start coding!

1) Add the imports we will need at the top of ViewController.m (GLKit and Core Image are used in the preview code later on), then declare the following properties:

#import "ViewController.h"
#import "AppDelegate.h"
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>
#import <GLKit/GLKit.h>

@interface ViewController ()
@property AVCaptureDevice *videoDevice;
@property AVCaptureSession *captureSession;
@property dispatch_queue_t captureSessionQueue;
@end

2) Create a

-(void)_start
{
}

method. In this method, we will get the device's back camera and start the video capture session using the following code.

i) Get the back camera:

NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

AVCaptureDevicePosition position = AVCaptureDevicePositionBack;

for (AVCaptureDevice *device in videoDevices)  
{
    if (device.position == position) {
        _videoDevice = device;
        break;
    }
}

ii) Obtain camera input:

// obtain device input
NSError *error = nil;  
AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];  
if (!videoDeviceInput)  
{
    NSLog(@"%@", [NSString stringWithFormat:@"Unable to obtain video device input, error: %@", error]);
    return;
}

iii) Create the AVCaptureSession and set a session preset, which controls the quality and frame size of the captured media. You need to check that the device supports the preset before setting it:

// obtain the preset and validate the preset
NSString *preset = AVCaptureSessionPresetMedium;  
if (![_videoDevice supportsAVCaptureSessionPreset:preset])  
{
    NSLog(@"%@", [NSString stringWithFormat:@"Capture session preset not supported by video device: %@", preset]);
    return;
}

// create the capture session
_captureSession = [[AVCaptureSession alloc] init];  
_captureSession.sessionPreset = preset;  

iv) Configure the video data output. Since we will use CIImage and CIFilter to apply filters to the video frames, we need to set the output pixel format to kCVPixelFormatType_32BGRA, which CIImage can work with.

// CoreImage wants BGRA pixel format
NSDictionary *outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInteger:kCVPixelFormatType_32BGRA]};  
// create and configure video data output
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];  
videoDataOutput.videoSettings = outputSettings;  

v) An AVCaptureVideoDataOutput object vends video frames through the delegate method captureOutput:didOutputSampleBuffer:fromConnection:. Set the delegate using setSampleBufferDelegate:queue:. You also need to pass a serial dispatch queue to ensure that frames are delivered to the delegate in the proper order.

// create the dispatch queue for handling capture session delegate method calls
_captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);  
[videoDataOutput setSampleBufferDelegate:self queue:_captureSessionQueue];

Set the alwaysDiscardsLateVideoFrames property to YES. This ensures that any late video frames are dropped rather than delivered to the delegate.

videoDataOutput.alwaysDiscardsLateVideoFrames = YES;  

vi) Add videoDeviceInput and videoDataOutput to _captureSession and start it. The beginConfiguration and commitConfiguration methods ensure that device changes are committed as a group, minimizing visibility of intermediate or inconsistent state.

// begin configure capture session
[_captureSession beginConfiguration];

if (![_captureSession canAddInput:videoDeviceInput])  
{
    NSLog(@"Cannot add video device input");
    _captureSession = nil;
    return;
}

if (![_captureSession canAddOutput:videoDataOutput])  
{
    NSLog(@"Cannot add video data output");
    _captureSession = nil;
    return;
}

// connect the video device input to the video data output
[_captureSession addInput:videoDeviceInput];
[_captureSession addOutput:videoDataOutput];

[_captureSession commitConfiguration];

// then start everything
[_captureSession startRunning];

Get Output Data And Apply Filter

We will use a GLKView to render the preview.

Declare the following new properties:

@property GLKView *videoPreviewView;
@property CIContext *ciContext;
@property EAGLContext *eaglContext;
@property CGRect videoPreviewViewBounds;

Initialize these properties in viewDidLoad:

- (void)viewDidLoad {
    [super viewDidLoad];

    // remove the view's background color; this allows us not to use the opaque property (self.view.opaque = NO) since we remove the background color drawing altogether
    self.view.backgroundColor = [UIColor clearColor];

    // setup the GLKView for video/image preview
    UIView *window = ((AppDelegate *)[UIApplication sharedApplication].delegate).window;
    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    _videoPreviewView = [[GLKView alloc] initWithFrame:window.bounds context:_eaglContext];
    _videoPreviewView.enableSetNeedsDisplay = NO;

    // because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90 degree transform so that we can draw the video preview as if we were in a landscape-oriented view; if you're using the front camera and you want to have a mirrored preview (so that the user is seeing themselves in the mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform)
    _videoPreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
    _videoPreviewView.frame = window.bounds;

    // we make our video preview view a subview of the window, and send it to the back; this makes ViewController's view (and its UI elements) on top of the video preview, and also makes video preview unaffected by device rotation
    [window addSubview:_videoPreviewView];
    [window sendSubviewToBack:_videoPreviewView];

    // bind the frame buffer to get the frame buffer width and height;
    // the bounds used by CIContext when drawing to a GLKView are in pixels (not points),
    // hence the need to read from the frame buffer's width and height;
    // in addition, since we will be accessing the bounds in another queue (_captureSessionQueue),
    // we want to obtain this piece of information so that we won't be
    // accessing _videoPreviewView's properties from another thread/queue
    [_videoPreviewView bindDrawable];
    _videoPreviewViewBounds = CGRectZero;
    _videoPreviewViewBounds.size.width = _videoPreviewView.drawableWidth;
    _videoPreviewViewBounds.size.height = _videoPreviewView.drawableHeight;
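    // viewDidLoad continues below, in the "Introducing Core Image" section

As the rotation comment above notes, a mirrored preview for the front camera would concatenate a horizontal flip onto the rotation transform. A sketch of that variant (hypothetical; it assumes you selected the front camera instead of the back one):

// front-camera variant: rotate 90 degrees clockwise, then flip horizontally
// so the user sees themselves as in a mirror
CGAffineTransform rotation = CGAffineTransformMakeRotation(M_PI_2);
CGAffineTransform mirrored = CGAffineTransformConcat(rotation, CGAffineTransformMakeScale(-1.0, 1.0));
_videoPreviewView.transform = mirrored;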

Introducing Core Image

Core Image ships with a multitude of built-in filters for image processing. We need CIImage, CIFilter, and CIContext to take advantage of them. Core Image can render on either the CPU or the GPU. Since this project requires real-time image processing, we need to force Core Image to use the GPU all the time by creating the CIContext instance with an EAGLContext instance.
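
For comparison only, Core Image can be forced onto a CPU-based software renderer with the kCIContextUseSoftwareRenderer option. The snippet below is shown purely to illustrate the CPU/GPU distinction; a software context would be far too slow for per-frame video:

// software (CPU) renderer: fine for one-off images, too slow for real-time video
CIContext *cpuContext = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @YES }];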

i) At the end of viewDidLoad, create the CIContext instance with _eaglContext and then call the _start method.

// create the CIContext instance, note that this must be done after _videoPreviewView is properly set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]} ];

if ([[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count] > 0)  
{
    [self _start];
}
else  
{
    NSLog(@"No device with AVMediaTypeVideo");
}

ii) Adopt the delegate protocol for the video output by modifying @interface ViewController () to become:

@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate>

and add this delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{

}

iii) Inside the captureOutput:didOutputSampleBuffer:fromConnection: method, convert the output data to a CIImage object by adding this code:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);  
CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];  
CGRect sourceExtent = sourceImage.extent;  

iv) Then apply two filters to the image:

// Image processing
CIFilter * vignetteFilter = [CIFilter filterWithName:@"CIVignetteEffect"];  
[vignetteFilter setValue:sourceImage forKey:kCIInputImageKey];
[vignetteFilter setValue:[CIVector vectorWithX:sourceExtent.size.width/2 Y:sourceExtent.size.height/2] forKey:kCIInputCenterKey];
[vignetteFilter setValue:@(sourceExtent.size.width/2) forKey:kCIInputRadiusKey];
CIImage *filteredImage = [vignetteFilter outputImage];

CIFilter *effectFilter = [CIFilter filterWithName:@"CIPhotoEffectInstant"];  
[effectFilter setValue:filteredImage forKey:kCIInputImageKey];
filteredImage = [effectFilter outputImage];  
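
CIVignetteEffect and CIPhotoEffectInstant are just two of Core Image's many built-in filters. If you want to experiment with others, you can enumerate the available filter names and inspect a filter's input parameters at runtime; a small exploratory sketch (the NSLog output is illustrative):

// list every built-in Core Image filter name
NSLog(@"Built-in filters: %@", [CIFilter filterNamesInCategory:kCICategoryBuiltIn]);

// inspect the input keys, expected types and default values of a filter
NSLog(@"CIVignetteEffect attributes: %@", [[CIFilter filterWithName:@"CIVignetteEffect"] attributes]);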

v) Finally, display the filtered image via videoPreviewView:

CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;  
CGFloat previewAspect = _videoPreviewViewBounds.size.width  / _videoPreviewViewBounds.size.height;

// we want to maintain the aspect ratio of the screen size, so we clip the video image
CGRect drawRect = sourceExtent;  
if (sourceAspect > previewAspect)  
{
    // use full height of the video image, and center crop the width
    drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
    drawRect.size.width = drawRect.size.height * previewAspect;
}
else  
{
    // use full width of the video image, and center crop the height
    drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
    drawRect.size.height = drawRect.size.width / previewAspect;
}
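
// worked example with illustrative numbers: a 1280x720 frame gives sourceAspect ≈ 1.78,
// and a 640x1136-pixel drawable gives previewAspect ≈ 0.56; since 1.78 > 0.56 we keep
// the full 720-pixel height and crop the width to 720 * 0.56 ≈ 405 pixels, centered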

[_videoPreviewView bindDrawable];

if (_eaglContext != [EAGLContext currentContext])  
    [EAGLContext setCurrentContext:_eaglContext];

// clear eagl view to grey
glClearColor(0.5, 0.5, 0.5, 1.0);  
glClear(GL_COLOR_BUFFER_BIT);

// set the blend mode to "source over" so that CI will use that
glEnable(GL_BLEND);  
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

if (filteredImage)  
    [_ciContext drawImage:filteredImage inRect:_videoPreviewViewBounds fromRect:drawRect];

[_videoPreviewView display];

Done! Press Command+R to run the project. Make sure to run it on a real iPhone or iPad instead of the simulator, because the simulator cannot connect to your Mac's camera. 
Here's the example project with all of the code from this tutorial. If you have any questions about this tutorial, please let me know by leaving a comment.
