iOS: Measuring Heart Rate with the Camera and Flash, and Drawing the Heart Rate Curve

         The project I've been working on recently is medical-related. One of its features turns on the camera and flash; when the user places a fingertip over the camera, the app draws a heart rate curve and estimates the number of beats per minute. When I first heard about this feature I had no idea where to start. After digging through articles and small demos online, I eventually got it working, though some bugs remain (the curve is not very stable, and the measurement is not very accurate).

I.   How it works (summarized from Zhihu):

     Use a bright light (the LED flash next to the camera, or any sufficiently bright source) to illuminate the capillaries under the skin of the fingertip. Each time the heart pumps fresh blood into the capillaries, the brightness (the depth of the red color) changes slightly. By monitoring the interval of this regular variation through the camera, you can compute the heart rate: the app samples the change in the red tone of the captured frames, plots it, and estimates the beats per minute (the hard part is the algorithm).
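To make that last step concrete, here is a minimal sketch of one way to turn the filtered color signal into a BPM figure. This is not the demo's algorithm (the demo below only draws the curve); estimateBPM is a hypothetical helper, and it assumes samples arrive at the fixed 10 fps frame rate configured later in the code.

// A minimal sketch (hypothetical, not from the demo): estimate BPM by timing
// rising zero-crossings of the filtered hue signal. Assumes a fixed sample
// rate (10 fps, matching the frame rate configured below).
static float estimateBPM(const float *samples, int count, float sampleRate) {
    int beats = 0;
    int firstCrossing = -1, lastCrossing = -1;
    for (int i = 1; i < count; i++) {
        // A rising zero-crossing marks the start of a new pulse cycle
        if (samples[i - 1] <= 0 && samples[i] > 0) {
            if (firstCrossing < 0) firstCrossing = i;
            lastCrossing = i;
            beats++;
        }
    }
    if (beats < 2) return 0; // not enough cycles to estimate
    float seconds = (lastCrossing - firstCrossing) / sampleRate;
    return (beats - 1) / seconds * 60.0f; // cycles per second -> beats per minute
}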

II.  Implementation:

I based this on a heartbeat demo from a developer on GitHub.

The feature breaks down into the following modules:

1. Turn on the camera and flash, and display the capture layer:
   The classes involved (a condensed wiring sketch follows this list):

   AVCaptureDevice : represents the physical capture device, i.e. the camera

   AVCaptureSession (important) : the bridge between input and output; it coordinates the flow of data from input to output

   AVCaptureInput : represents an input device (normally used through a subclass); it exposes the ports of the underlying hardware device

   AVCaptureDeviceInput (subclass of AVCaptureInput) : provides an interface for capturing media data from an AVCaptureDevice

   AVCaptureOutput : represents the output side; manages a video or image output

   AVCaptureVideoDataOutput (subclass of AVCaptureOutput) : delivers the compressed or uncompressed video frames captured by the camera

   AVCaptureConnection : represents a connection in the current session between an input port (AVCaptureInputPort) and an output (an AVCaptureOutput or an AVCaptureVideoPreviewLayer)

   AVCaptureVideoDataOutputSampleBufferDelegate (the AVCaptureVideoDataOutput delegate) : receives the captured video sample buffers and dropped-frame notifications

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection; : called whenever the AVCaptureVideoDataOutput instance outputs a new video frame (once the camera is running, every captured frame arrives through this delegate method)
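Condensed, these classes wire together roughly like this (the full, commented version is in setupAVCapture below):

// Minimal wiring sketch: device -> input -> session -> output -> delegate
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
[output setSampleBufferDelegate:self queue:dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL)];
AVCaptureSession *session = [AVCaptureSession new];
if ([session canAddInput:input])   [session addInput:input];
if ([session canAddOutput:output]) [session addOutput:output];
[session startRunning]; // frames now flow to captureOutput:didOutputSampleBuffer:fromConnection: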

  

    If you need to modify a session that is already in use (it has already received startRunning), such as swapping in a new device or removing an old one, wrap the changes like this:

  [session beginConfiguration];

  // Remove an existing capture device.

  // Add a new capture device.

  // Reset the preset.

   [session commitConfiguration];
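For example, switching cameras on a live session might look like the following sketch, where currentInput and newInput are hypothetical AVCaptureDeviceInput instances (the old one already attached, the new one wrapping the device you want to switch to):

[session beginConfiguration];
// Remove the existing camera input (hypothetical currentInput)
[session removeInput:currentInput];
// Attach the new camera input; canAddInput: guards against invalid configurations
if ([session canAddInput:newInput])
    [session addInput:newInput];
// All changes are applied atomically at commit time
[session commitConfiguration];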


2. Derive a variation value from the RGB color of the captured frames

3. Plot the curve from those variation values

Enough talk; here is the code. The comments are fairly thorough, and corrections are welcome:
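One practical note before the code: this demo is from the iOS 8 era; on iOS 10 and later you must also add an NSCameraUsageDescription entry to Info.plist, or the app will terminate the first time it accesses the camera.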

//
//  MainViewController.h
//  HeartBeats
//
//  Created by 帝炎魔 on 25/04/15.
//  Copyright (c) 2015 帝炎魔. All rights reserved.
//

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface MainViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>

@end


//
//  MainViewController.m
//  HeartBeats
//
//  Created by 帝炎魔 on 25/04/15.
//  Copyright (c) 2015 帝炎魔. All rights reserved.
//

#import "MainViewController.h"


@interface MainViewController ()
{
    AVCaptureSession *session;
    CALayer* imageLayer;
    NSMutableArray *points;
}

@end

@implementation MainViewController

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    
    imageLayer = [CALayer layer];
    imageLayer.frame = self.view.layer.bounds;
    imageLayer.contentsGravity = kCAGravityResizeAspectFill;
    [self.view.layer addSublayer:imageLayer];
    
    [self setupAVCapture];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self stopAVCapture];
}

- (void)setupAVCapture
{
    // Get the default camera device
    /**
     *  1. Obtain the camera hardware device (media type: video)
     */
	AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Turn on the torch (the LED flash next to the camera)
    if ([device isTorchModeSupported:AVCaptureTorchModeOn]) {
        [device lockForConfiguration:nil];
        device.torchMode = AVCaptureTorchModeOn;
        [device unlockForConfiguration];
    }
    
	// Create the AVCapture Session
    // 2. Create the session
	session = [AVCaptureSession new];
    // Begin configuring the input and output
    [session beginConfiguration];
    
	// Create a AVCaptureDeviceInput with the camera device
    // 3. Configure the input
    NSError *error = nil;
	AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
	if (error) {
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:[NSString stringWithFormat:@"Error %d", (int)[error code]]
                                                            message:[error localizedDescription]
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
        [alertView show];
        //[self teardownAVCapture];
        return;
    }
    // Attach the input to the session
    if ([session canAddInput:deviceInput])
		[session addInput:deviceInput];
    
    // AVCaptureVideoDataOutput
    // 4. Configure the output
    AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
	NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
									   [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
	[videoDataOutput setVideoSettings:rgbOutputSettings];
	[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
    // Create a serial background queue for the video frame callbacks
	dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    // Deliver the delegate callbacks on that queue
	[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    
    // Attach the output to the session
    if ([session canAddOutput:videoDataOutput])
		[session addOutput:videoDataOutput];
    // 5. AVCaptureConnection : the connection between an input port and an output
    //    (or an AVCaptureVideoPreviewLayer) in the current session
    // Fetch the video connection from the output
    AVCaptureConnection* connection = [videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
    // Cap the frame rate: a minimum frame duration of 1/10 s means at most 10 fps
    // (this connection-level setter is deprecated since iOS 7; the replacement is
    // the device's activeVideoMinFrameDuration)
    [connection setVideoMinFrameDuration:CMTimeMake(1, 10)];
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    
    // 6. Configuration complete
    [session commitConfiguration];
    // Start the session: once it is running, it automatically pushes data
    // from the input device through to the output
    [session startRunning];
}
#pragma mark -- Stop the session (turns off the camera and torch)
- (void)stopAVCapture
{
    [session stopRunning];
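    // Stopping the session also shuts off the torch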
    session = nil;
    points = nil;
}

#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
/**
 *  Called whenever the AVCaptureVideoDataOutput instance outputs a new video frame
 *  (once the camera is running, every captured frame arrives through this method)
 *
 *  @param captureOutput the current output object
 *  @param sampleBuffer  the sample buffer holding the frame
 *  @param connection    the capture connection it arrived on
 */
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Grab the pixel buffer for this frame and lock it for CPU access
    CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    uint8_t *buf = (uint8_t *)baseAddress;
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Wrap the raw BGRA bytes in a bitmap context so the curve can be drawn onto the frame
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    
    // Average the frame's color: sum every pixel's B, G and R bytes (BGRA layout),
    // then normalize each channel to the 0..1 range (double avoids precision loss
    // when accumulating millions of byte values)
    double r = 0, g = 0, b = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width * 4; x += 4) {
            b += buf[x];
            g += buf[x+1];
            r += buf[x+2];
        }
        buf += bytesPerRow; // advance to the next row
    }
    r /= 255 * (double)(width * height);
    g /= 255 * (double)(width * height);
    b /= 255 * (double)(width * height);
    
    float h, s, v;
    // Convert the average color to HSV; the hue channel tracks the redness change
    RGBtoHSV((float)r, (float)g, (float)b, &h, &s, &v);
    // High-pass: the frame-to-frame hue difference removes the constant offset
    static float lastH = 0;
    float highPassValue = h - lastH;
    lastH = h;
    // Low-pass: averaging two successive high-pass values smooths out noise
    // (lastHighPassValue must be static, otherwise the filter has no memory)
    static float lastHighPassValue = 0;
    float lowPassValue = (lastHighPassValue + highPassValue) / 2;
    lastHighPassValue = highPassValue;
    
    // Draw the heart rate polyline
    [self render:context value:[NSNumber numberWithFloat:lowPassValue]];
    
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Balance the earlier lock now that we are done with the buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    
    id renderedImage = CFBridgingRelease(quartzImage);
    
    // On the main thread, set the rendered frame as the layer's contents
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        [CATransaction setDisableActions:YES];
        [CATransaction begin];
		imageLayer.contents = renderedImage;
        [CATransaction commit];
	});
}
#pragma mark --- Curve drawing
- (void)render:(CGContextRef)context value:(NSNumber *)value
{
    if(!points)
        points = [NSMutableArray new];
    [points insertObject:value atIndex:0];
    CGRect bounds = imageLayer.bounds;
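    // Cap the number of stored samples to half the layer width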
	while(points.count > bounds.size.width / 2)
		[points removeLastObject];
    if(points.count == 0)
        return;
    
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(context, 2);
    CGContextBeginPath(context);
    
    CGFloat scale = [[UIScreen mainScreen] scale];
    
    // Shift the origin and apply the screen scale before drawing
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, .0f, bounds.size.height);
    CGContextScaleCTM(context, scale, scale);
    
    float xpos = bounds.size.width * scale;
    // Map a filter value (roughly -1..1) to a y position around the vertical center
    float ypos = bounds.size.height / 2 + [[points objectAtIndex:0] floatValue] * bounds.size.height / 2;
    
    CGContextMoveToPoint(context, xpos, ypos);
    for (int i = 1; i < points.count; i++) {
        xpos -= 5;
        ypos = bounds.size.height / 2 + [[points objectAtIndex:i] floatValue] * bounds.size.height / 2;
        CGContextAddLineToPoint(context, xpos, ypos);
    }
    CGContextStrokePath(context);
    
    CGContextRestoreGState(context);
}

#pragma mark --- RGB to HSV conversion, used to extract the hue change
// Classic RGB -> HSV: h comes out in degrees (0-360), s and v in 0..1
void RGBtoHSV( float r, float g, float b, float *h, float *s, float *v ) {
	float min, max, delta;
	min = MIN( r, MIN(g, b ));
	max = MAX( r, MAX(g, b ));
	*v = max;
	delta = max - min;
	if( max != 0 )
		*s = delta / max;
	else {
		*s = 0;
		*h = -1;
		return;
	}
	if( r == max )
		*h = ( g - b ) / delta;
	else if( g == max )
		*h = 2 + (b - r) / delta;
	else
		*h = 4 + (r - g) / delta;
	*h *= 60;
	if( *h < 0 )
		*h += 360;
}

@end

Result: (screenshot of the running demo omitted)
