iOS Learning -- iOS CVImageBuffer distorted from AVCaptureSessionDataOutput with AVCaptureSessionPresetPhoto

Reposted March 29, 2012, 14:06:31

At a high level, I created an app that lets users point their iPhone camera around and see video frames that have been processed with visual effects. Additionally, the user can tap a button to take a freeze-frame of the current preview as a high-resolution photo that is saved to their iPhone library.

To do this, the app follows this procedure:

1) Create an AVCaptureSession

captureSession = [[AVCaptureSession alloc] init]; 
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
 

2) Hook up an AVCaptureDeviceInput using the back-facing camera.

videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease]; 
[captureSession addInput:videoInput];
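
(backFacingCamera above is assumed to come from a standard device lookup; a minimal sketch of what that might look like:)

// Hypothetical lookup for the backFacingCamera variable used above.
AVCaptureDevice *backFacingCamera = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
{
    if ([device position] == AVCaptureDevicePositionBack)
    {
        backFacingCamera = device;
    }
}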
 

3) Hook up an AVCaptureStillImageOutput to the session to be able to capture still frames at Photo resolution.

stillOutput = [[AVCaptureStillImageOutput alloc] init];
[stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                           forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[captureSession addOutput:stillOutput];
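
(The actual freeze-frame trigger isn't shown above; a minimal sketch, assuming the usual iOS 4 pattern of finding the output's video connection and requesting a still asynchronously:)

// Hypothetical trigger for the high-resolution freeze-frame.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [stillOutput connections])
{
    for (AVCaptureInputPort *port in [connection inputPorts])
    {
        if ([[port mediaType] isEqual:AVMediaTypeVideo])
        {
            videoConnection = connection;
        }
    }
}

[stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    // The still arrives as a CMSampleBuffer in the BGRA format requested above.
    CVImageBufferRef stillBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    // ...process or save the high-resolution frame here...
}];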
 

4) Hook up an AVCaptureVideoDataOutput to the session to be able to capture individual video frames (CVImageBuffers) at a lower resolution.

videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                          forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[captureSession addOutput:videoOutput];
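
(One setting commonly paired with this output, an assumption on my part rather than part of the original setup: discarding late frames keeps the preview from backing up when per-frame processing can't keep pace.)

// Optional: drop frames that arrive while the delegate is still busy,
// instead of queueing them (a common choice for live previews).
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];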
 

5) As video frames are captured, the delegate's method is called with each new frame as a CVImageBuffer:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self.delegate processNewCameraFrame:pixelBuffer];
}
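
(One detail worth noting here: the CVImageBuffer is only guaranteed to remain valid for the duration of this callback. That's fine above, since processNewCameraFrame: runs synchronously, but holding the buffer any longer would require retaining it, for example:)

// Only needed if the buffer must outlive the delegate callback.
CVPixelBufferRetain(pixelBuffer);   // keep the buffer alive
// ...use it later, then balance the retain:
CVPixelBufferRelease(pixelBuffer);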
 

6) Then the delegate processes/draws them:

- (void)processNewCameraFrame:(CVImageBufferRef)cameraFrame
{
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

    glClear(GL_COLOR_BUFFER_BIT);

    glGenTextures(1, &videoFrameTexture_);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

    glBindBuffer(GL_ARRAY_BUFFER, [self vertexBuffer]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, [self indexBuffer]);

    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

    [[self context] presentRenderbuffer:GL_RENDERBUFFER];

    glDeleteTextures(1, &videoFrameTexture_);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
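
(As an aside, and not the cause of the problem: generating and deleting a texture on every frame works but is wasteful. A common variation is to create videoFrameTexture_ once at setup and re-upload into it each frame, sketched here under the assumption that an initial glTexImage2D call has already allocated the storage:)

// Per-frame re-upload into a texture created once at setup time.
glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, bufferWidth, bufferHeight, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));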

This all works and leads to the correct results. I can see a video preview of 640x480 processed through OpenGL. It looks like this: [screenshot of the correct 640x480 preview; image not preserved in the repost]

However, if I capture a still image from this session, its resolution will also be 640x480. I want it to be high resolution, so in step one I change the preset line to:

[captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; 

This correctly captures still images at the highest resolution for the iPhone 4 (2592x1936).

However, the video preview (as received by the delegate in steps 5 and 6) now looks like this: [screenshot of the distorted preview; image not preserved in the repost]

I've confirmed that every other preset (High, Medium, Low, 640x480, and 1280x720) previews as intended. However, the Photo preset seems to send buffer data in a different format.

I've also confirmed that the data being sent to the buffer at the Photo preset is actually valid image data by taking the buffer and creating a UIImage out of it instead of sending it to OpenGL:

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame); // row stride reported by the buffer itself
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(cameraFrame), bufferWidth, bufferHeight, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *anImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);        // balance the Create calls to avoid leaking
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

This shows an undistorted video frame.

I've done a bunch of searching and can't seem to fix it. My hunch is that it's a data format issue. That is, I believe that the buffer is being set correctly, but with a format that this line doesn't understand:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame)); 

My hunch was that changing the external format from GL_BGRA to something else would help, but it didn't... and through various means it looks like the buffer actually is in GL_BGRA.

Does anyone know what's going on here? Or do you have any tips on how I might go about debugging why this is happening? (What's super weird is that this happens on an iPhone 4 but not on an iPhone 3GS, both running iOS 4.3.)
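
One more thing that might be worth checking while debugging: CVPixelBuffers are not guaranteed to be tightly packed, and the glTexImage2D call above assumes rows of exactly bufferWidth * 4 bytes (OpenGL ES has no GL_UNPACK_ROW_LENGTH to say otherwise). A quick check along these lines would show whether the Photo preset pads its rows:

// Compare the buffer's actual row stride with the tightly packed stride
// assumed by the glTexImage2D call; a mismatch means each row carries padding.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
NSLog(@"bytesPerRow = %zu, tightly packed would be %zu", bytesPerRow, width * 4);

If those two numbers differ at the Photo preset but match at the others, it would explain the distortion: the padding bytes at the end of each row get misread as pixel data, shearing the image.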