The Camera System in Android


I. Introduction to the Camera in Android

1. Camera

1.1 Getting to Know the Camera
  • The camera module, full name Camera Compact Module (hereafter CCM), is an electronic component that is essential to image capture.


1.2 Camera Hardware Composition
  • CCM composition: a module typically consists of a lens, an IR-cut filter, an image sensor (CMOS or CCD), an image signal processor, and the supporting flexible PCB and connector.


1.3 How a Camera Works
  • Working principle: light gathered by the lens is converted from an optical signal into an electrical signal by a CMOS or CCD sensor, turned into a digital image signal by the internal image signal processor (ISP), and then handed to a digital signal processor (DSP) for further processing into standard image formats such as RGB or YUV.
1.4 Camera Image Formats
  • RGB format:
  • With this encoding each pixel is made up of the three primary colors R (red), G (green) and B (blue), and each color's intensity is represented by its own value.
  • YUV format:
  • "Y" stands for luminance (luma), i.e. the grayscale value, while "U" and "V" stand for chrominance (chroma), which describe the color and its saturation and specify the pixel's color.
  • RAW data format:
  • The raw record of the electrical levels produced when the CCD or CMOS sensor converts light into electricity: the sensor output is digitized as-is, without any further processing.
  • YCbCr format:
    In YCbCr, Y is the luma component, Cb the blue-difference chroma component, and Cr the red-difference chroma component. The human eye is more sensitive to the Y component, so the chroma components can be subsampled with no perceptible loss of image quality. The main subsampling schemes are YCbCr 4:2:0, YCbCr 4:2:2 and YCbCr 4:4:4; the sketch after this list makes the resulting savings concrete.
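
A quick way to make the subsampling trade-off concrete is to compute the size of one frame in each layout. The following sketch is illustrative only (it is not part of the original driver or application code):

#include <stdio.h>

/* Bytes per frame for several common pixel layouts. */
static unsigned long size_rgb24(unsigned w, unsigned h) { return (unsigned long)w * h * 3; }     /* packed RGB, 3 bytes/pixel */
static unsigned long size_yuyv(unsigned w, unsigned h)  { return (unsigned long)w * h * 2; }     /* 4:2:2 packed, 2 bytes/pixel */
static unsigned long size_yuv420(unsigned w, unsigned h){ return (unsigned long)w * h * 3 / 2; } /* full-res Y + quarter-res Cb,Cr */

int main(void)
{
	unsigned w = 640, h = 480; /* VGA */
	printf("RGB24 : %lu bytes\n", size_rgb24(w, h));  /* 921600 */
	printf("YUYV  : %lu bytes\n", size_yuyv(w, h));   /* 614400 */
	printf("YUV420: %lu bytes\n", size_yuv420(w, h)); /* 460800 */
	return 0;
}

Because the eye barely notices the reduced chroma resolution, 4:2:2 saves a third, and 4:2:0 half, of the memory and bandwidth of the same frame in RGB24.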
1.5 Camera Resolution
  • Resolution is the number of pixels in the captured image. Common examples include QCIF (176x144), CIF (352x288), VGA (640x480), 720p (1280x720) and 1080p (1920x1080).

1.6 Camera Data Rate
  • The data coming from the sensor is usually processed by a dedicated chip, and the processed data forms a video stream. There are many stream formats, e.g. MPEG (Motion Picture Experts Group), AVI (Audio Video Interleaved), MOV (the QuickTime movie format), ASF (Advanced Streaming Format), WMV (Windows Media Video), 3GP (a 3G streaming format), FLV (Flash Video), and RM/RMVB.
  • The speed at which the video stream is delivered is the transfer (frame) rate; it mainly affects burst shooting and video recording. In general, the higher the rate, the smoother the video. Common rates are 15 fps, 30 fps, 60 fps and 120 fps.
  • The achievable rate depends on the image resolution: the lower the resolution, the higher the rate. For example, a camera that delivers 30 fps at CIF (352x288) may only manage about 10 fps at VGA (640x480), so the rate is chosen together with the target resolution. For mobile-phone use, 30 fps is generally smooth enough. The sketch below shows why the raw bandwidth grows so quickly with resolution.
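
The trade-off in the last bullet follows directly from the raw data rate, width x height x bytes-per-pixel x fps. A minimal sketch (illustrative, assuming an uncompressed YUYV stream at 2 bytes per pixel):

#include <stdio.h>

/* Raw data rate in bytes per second for an uncompressed YUYV stream. */
static unsigned long raw_rate(unsigned w, unsigned h, unsigned fps)
{
	return (unsigned long)w * h * 2 * fps;
}

int main(void)
{
	/* VGA carries ~3x the data of CIF at the same frame rate, which is
	 * why a sensor/bus that sustains 30 fps at CIF may drop to ~10 fps
	 * at VGA. */
	printf("CIF 352x288 @30fps: %lu bytes/s\n", raw_rate(352, 288, 30)); /* ~6.1 MB/s  */
	printf("VGA 640x480 @30fps: %lu bytes/s\n", raw_rate(640, 480, 30)); /* ~18.4 MB/s */
	return 0;
}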

2. V4L2

2.1 The V4L2 Framework
  • V4L2 stands for Video for Linux Two. It is the Linux kernel API for video devices; it covers opening and closing such devices and capturing and processing the associated audio and video data.

  • Several V4L2 documents are essential reading:

    V4L2-framework.txt and the videobuf documentation under Documentation/video4linux, plus the official V4L2 API Specification;

    drivers/media/video/vivi.c (a virtual video driver that emulates a real video device through the V4L2 API).

2.2 V4L2 Interfaces

V4L2 supports many kinds of devices through the following interfaces (a small capability-probe sketch follows the list):

  • Video capture interface: the device can be a TV tuner or a camera; this is what V4L2 was originally designed for.
  • Video output interface: drives peripheral video output hardware, e.g. devices that emit a TV-signal format.
  • Video overlay interface: routes the signal from a capture device directly to an output device without going through the system CPU.
  • Vertical blanking interval interface (VBI interface): gives applications access to data transmitted during the vertical blanking interval.
  • Radio interface: handles audio streams received from AM or FM tuner devices.
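
Which of these interfaces a particular device actually implements can be probed from user space with the VIDIOC_QUERYCAP ioctl; the returned capability bits correspond to the list above. A minimal sketch (the device node /dev/video0 is an assumption):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_capability cap;
	int fd = open("/dev/video0", O_RDWR); /* assumed device node */

	if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) == -1) {
		perror("VIDIOC_QUERYCAP");
		return 1;
	}
	printf("driver: %s, card: %s\n", cap.driver, cap.card);
	if (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) printf("  video capture\n");
	if (cap.capabilities & V4L2_CAP_VIDEO_OUTPUT)  printf("  video output\n");
	if (cap.capabilities & V4L2_CAP_VIDEO_OVERLAY) printf("  video overlay\n");
	if (cap.capabilities & V4L2_CAP_VBI_CAPTURE)   printf("  VBI capture\n");
	if (cap.capabilities & V4L2_CAP_RADIO)         printf("  radio\n");
	close(fd);
	return 0;
}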

II. Analysis of a Virtual Camera Driver

1. The Virtual Camera Driver vivi.c

  • Build drivers/media/video/vivi.c to produce vivi.ko.

  • Load the driver with modprobe vivi (modprobe also loads the modules that vivi.ko depends on).

  • Test the resulting virtual video0 device with the xawtv tool.


2. vivi Source Code

/*
 * Virtual Video driver - This code emulates a real video device with v4l2 api
 *
 * Copyright (c) 2006 by:
 *      Mauro Carvalho Chehab <mchehab--a.t--infradead.org>
 *      Ted Walther <ted--a.t--enumera.com>
 *      John Sokol <sokol--a.t--videotechnology.com>
 *      http://v4l.videotechnology.com/
 *
 *      Conversion to videobuf2 by Pawel Osciak & Marek Szyprowski
 *      Copyright (c) 2010 Samsung Electronics
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the BSD Licence, GNU General Public License
 * as published by the Free Software Foundation; either version 2 of the
 * License, or (at your option) any later version
 */
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/font.h>
#include <linux/mutex.h>
#include <linux/videodev2.h>
#include <linux/kthread.h>
#include <linux/freezer.h>
#include <media/videobuf2-vmalloc.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-event.h>
#include <media/v4l2-common.h>

#define VIVI_MODULE_NAME "vivi"

/* Wake up at about 30 fps */
#define WAKE_NUMERATOR 30
#define WAKE_DENOMINATOR 1001
#define BUFFER_TIMEOUT     msecs_to_jiffies(500)  /* 0.5 seconds */

#define MAX_WIDTH 1920
#define MAX_HEIGHT 1200

#define VIVI_VERSION "0.8.1"

MODULE_DESCRIPTION("Video Technology Magazine Virtual Video Capture Board");
MODULE_AUTHOR("Mauro Carvalho Chehab, Ted Walther and John Sokol");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(VIVI_VERSION);

static unsigned video_nr = -1;
module_param(video_nr, uint, 0644);
MODULE_PARM_DESC(video_nr, "videoX start number, -1 is autodetect");

static unsigned n_devs = 1;
module_param(n_devs, uint, 0644);
MODULE_PARM_DESC(n_devs, "number of video devices to create");

static unsigned debug;
module_param(debug, uint, 0644);
MODULE_PARM_DESC(debug, "activates debug info");

static unsigned int vid_limit = 16;
module_param(vid_limit, uint, 0644);
MODULE_PARM_DESC(vid_limit, "capture memory limit in megabytes");

/* Global font descriptor */
static const u8 *font8x16;

#define dprintk(dev, level, fmt, arg...) \
	v4l2_dbg(level, debug, &dev->v4l2_dev, fmt, ## arg)

/* ------------------------------------------------------------------
	Basic structures
   ------------------------------------------------------------------*/

struct vivi_fmt {
	char  *name;
	u32   fourcc;          /* v4l2 format id */
	int   depth;
};

static struct vivi_fmt formats[] = {
	{
		.name     = "4:2:2, packed, YUYV",
		.fourcc   = V4L2_PIX_FMT_YUYV,
		.depth    = 16,
	},
	{
		.name     = "4:2:2, packed, UYVY",
		.fourcc   = V4L2_PIX_FMT_UYVY,
		.depth    = 16,
	},
	{
		.name     = "4:2:2, packed, YVYU",
		.fourcc   = V4L2_PIX_FMT_YVYU,
		.depth    = 16,
	},
	{
		.name     = "4:2:2, packed, VYUY",
		.fourcc   = V4L2_PIX_FMT_VYUY,
		.depth    = 16,
	},
	{
		.name     = "RGB565 (LE)",
		.fourcc   = V4L2_PIX_FMT_RGB565, /* gggbbbbb rrrrrggg */
		.depth    = 16,
	},
	{
		.name     = "RGB565 (BE)",
		.fourcc   = V4L2_PIX_FMT_RGB565X, /* rrrrrggg gggbbbbb */
		.depth    = 16,
	},
	{
		.name     = "RGB555 (LE)",
		.fourcc   = V4L2_PIX_FMT_RGB555, /* gggbbbbb arrrrrgg */
		.depth    = 16,
	},
	{
		.name     = "RGB555 (BE)",
		.fourcc   = V4L2_PIX_FMT_RGB555X, /* arrrrrgg gggbbbbb */
		.depth    = 16,
	},
	{
		.name     = "RGB24 (LE)",
		.fourcc   = V4L2_PIX_FMT_RGB24, /* rgb */
		.depth    = 24,
	},
	{
		.name     = "RGB24 (BE)",
		.fourcc   = V4L2_PIX_FMT_BGR24, /* bgr */
		.depth    = 24,
	},
	{
		.name     = "RGB32 (LE)",
		.fourcc   = V4L2_PIX_FMT_RGB32, /* argb */
		.depth    = 32,
	},
	{
		.name     = "RGB32 (BE)",
		.fourcc   = V4L2_PIX_FMT_BGR32, /* bgra */
		.depth    = 32,
	},
};
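
/*
 * Note: .depth above is bits per pixel; the driver derives
 * dev->pixelsize = depth / 8 when a format is set, and computes
 * bytesperline = width * depth / 8 in the g/try/s_fmt handlers below.
 */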

static struct vivi_fmt *get_format(struct v4l2_format *f)
{
	struct vivi_fmt *fmt;
	unsigned int k;

	for (k = 0; k < ARRAY_SIZE(formats); k++) {
		fmt = &formats[k];
		if (fmt->fourcc == f->fmt.pix.pixelformat)
			break;
	}

	if (k == ARRAY_SIZE(formats))
		return NULL;

	return &formats[k];
}

/* buffer for one video frame */
struct vivi_buffer {
	/* common v4l buffer stuff -- must be first */
	struct vb2_buffer	vb;
	struct list_head	list;
	struct vivi_fmt        *fmt;
};

struct vivi_dmaqueue {
	struct list_head       active;

	/* thread for generating video stream*/
	struct task_struct         *kthread;
	wait_queue_head_t          wq;
	/* Counters to control fps rate */
	int                        frame;
	int                        ini_jiffies;
};

static LIST_HEAD(vivi_devlist);

struct vivi_dev {
	struct list_head           vivi_devlist;
	struct v4l2_device 	   v4l2_dev;
	struct v4l2_ctrl_handler   ctrl_handler;

	/* controls */
	struct v4l2_ctrl	   *brightness;
	struct v4l2_ctrl	   *contrast;
	struct v4l2_ctrl	   *saturation;
	struct v4l2_ctrl	   *hue;
	struct {
		/* autogain/gain cluster */
		struct v4l2_ctrl	   *autogain;
		struct v4l2_ctrl	   *gain;
	};
	struct v4l2_ctrl	   *volume;
	struct v4l2_ctrl	   *alpha;
	struct v4l2_ctrl	   *button;
	struct v4l2_ctrl	   *boolean;
	struct v4l2_ctrl	   *int32;
	struct v4l2_ctrl	   *int64;
	struct v4l2_ctrl	   *menu;
	struct v4l2_ctrl	   *string;
	struct v4l2_ctrl	   *bitmask;
	struct v4l2_ctrl	   *int_menu;

	spinlock_t                 slock;
	struct mutex		   mutex;

	/* various device info */
	struct video_device        *vfd;

	struct vivi_dmaqueue       vidq;

	/* Several counters */
	unsigned 		   ms;
	unsigned long              jiffies;
	unsigned		   button_pressed;

	int			   mv_count;	/* Controls bars movement */

	/* Input Number */
	int			   input;

	/* video capture */
	struct vivi_fmt            *fmt;
	unsigned int               width, height;
	struct vb2_queue	   vb_vidq;
	enum v4l2_field		   field;
	unsigned int		   field_count;

	u8			   bars[9][3];
	u8			   line[MAX_WIDTH * 8];
	unsigned int		   pixelsize;
	u8			   alpha_component;
};

/* ------------------------------------------------------------------
	DMA and thread functions
   ------------------------------------------------------------------*/

/* Bars and Colors should match positions */

enum colors {
	WHITE,
	AMBER,
	CYAN,
	GREEN,
	MAGENTA,
	RED,
	BLUE,
	BLACK,
	TEXT_BLACK,
};

/* R   G   B */
#define COLOR_WHITE	{204, 204, 204}
#define COLOR_AMBER	{208, 208,   0}
#define COLOR_CYAN	{  0, 206, 206}
#define	COLOR_GREEN	{  0, 239,   0}
#define COLOR_MAGENTA	{239,   0, 239}
#define COLOR_RED	{205,   0,   0}
#define COLOR_BLUE	{  0,   0, 255}
#define COLOR_BLACK	{  0,   0,   0}

struct bar_std {
	u8 bar[9][3];
};

/* Maximum number of bars are 10 - otherwise, the input print code
   should be modified */
static struct bar_std bars[] = {
	{	/* Standard ITU-R color bar sequence */
		{ COLOR_WHITE, COLOR_AMBER, COLOR_CYAN, COLOR_GREEN,
		  COLOR_MAGENTA, COLOR_RED, COLOR_BLUE, COLOR_BLACK, COLOR_BLACK }
	}, {
		{ COLOR_WHITE, COLOR_AMBER, COLOR_BLACK, COLOR_WHITE,
		  COLOR_AMBER, COLOR_BLACK, COLOR_WHITE, COLOR_AMBER, COLOR_BLACK }
	}, {
		{ COLOR_WHITE, COLOR_CYAN, COLOR_BLACK, COLOR_WHITE,
		  COLOR_CYAN, COLOR_BLACK, COLOR_WHITE, COLOR_CYAN, COLOR_BLACK }
	}, {
		{ COLOR_WHITE, COLOR_GREEN, COLOR_BLACK, COLOR_WHITE,
		  COLOR_GREEN, COLOR_BLACK, COLOR_WHITE, COLOR_GREEN, COLOR_BLACK }
	},
};

#define NUM_INPUTS ARRAY_SIZE(bars)

#define TO_Y(r, g, b) \
	(((16829 * r + 33039 * g + 6416 * b  + 32768) >> 16) + 16)
/* RGB to  V(Cr) Color transform */
#define TO_V(r, g, b) \
	(((28784 * r - 24103 * g - 4681 * b  + 32768) >> 16) + 128)
/* RGB to  U(Cb) Color transform */
#define TO_U(r, g, b) \
	(((-9714 * r - 19070 * g + 28784 * b + 32768) >> 16) + 128)
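
/*
 * The three macros above are fixed-point (16.16) approximations of the
 * BT.601 studio-swing RGB->YCbCr transform: the +32768 term rounds the
 * 16-bit fraction, and +16 / +128 add the standard luma and chroma
 * offsets.
 */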

/* precalculate color bar values to speed up rendering */
static void precalculate_bars(struct vivi_dev *dev)
{
	u8 r, g, b;
	int k, is_yuv;

	for (k = 0; k < 9; k++) {
		r = bars[dev->input].bar[k][0];
		g = bars[dev->input].bar[k][1];
		b = bars[dev->input].bar[k][2];
		is_yuv = 0;

		switch (dev->fmt->fourcc) {
		case V4L2_PIX_FMT_YUYV:
		case V4L2_PIX_FMT_UYVY:
		case V4L2_PIX_FMT_YVYU:
		case V4L2_PIX_FMT_VYUY:
			is_yuv = 1;
			break;
		case V4L2_PIX_FMT_RGB565:
		case V4L2_PIX_FMT_RGB565X:
			r >>= 3;
			g >>= 2;
			b >>= 3;
			break;
		case V4L2_PIX_FMT_RGB555:
		case V4L2_PIX_FMT_RGB555X:
			r >>= 3;
			g >>= 3;
			b >>= 3;
			break;
		case V4L2_PIX_FMT_RGB24:
		case V4L2_PIX_FMT_BGR24:
		case V4L2_PIX_FMT_RGB32:
		case V4L2_PIX_FMT_BGR32:
			break;
		}

		if (is_yuv) {
			dev->bars[k][0] = TO_Y(r, g, b);	/* Luma */
			dev->bars[k][1] = TO_U(r, g, b);	/* Cb */
			dev->bars[k][2] = TO_V(r, g, b);	/* Cr */
		} else {
			dev->bars[k][0] = r;
			dev->bars[k][1] = g;
			dev->bars[k][2] = b;
		}
	}
}

#define TSTAMP_MIN_Y	24
#define TSTAMP_MAX_Y	(TSTAMP_MIN_Y + 15)
#define TSTAMP_INPUT_X	10
#define TSTAMP_MIN_X	(54 + TSTAMP_INPUT_X)

/* 'odd' is true for pixels 1, 3, 5, etc. and false for pixels 0, 2, 4, etc. */
static void gen_twopix(struct vivi_dev *dev, u8 *buf, int colorpos, bool odd)
{
	u8 r_y, g_u, b_v;
	u8 alpha = dev->alpha_component;
	int color;
	u8 *p;

	r_y = dev->bars[colorpos][0]; /* R or precalculated Y */
	g_u = dev->bars[colorpos][1]; /* G or precalculated U */
	b_v = dev->bars[colorpos][2]; /* B or precalculated V */

	for (color = 0; color < dev->pixelsize; color++) {
		p = buf + color;

		switch (dev->fmt->fourcc) {
		case V4L2_PIX_FMT_YUYV:
			switch (color) {
			case 0:
				*p = r_y;
				break;
			case 1:
				*p = odd ? b_v : g_u;
				break;
			}
			break;
		case V4L2_PIX_FMT_UYVY:
			switch (color) {
			case 0:
				*p = odd ? b_v : g_u;
				break;
			case 1:
				*p = r_y;
				break;
			}
			break;
		case V4L2_PIX_FMT_YVYU:
			switch (color) {
			case 0:
				*p = r_y;
				break;
			case 1:
				*p = odd ? g_u : b_v;
				break;
			}
			break;
		case V4L2_PIX_FMT_VYUY:
			switch (color) {
			case 0:
				*p = odd ? g_u : b_v;
				break;
			case 1:
				*p = r_y;
				break;
			}
			break;
		case V4L2_PIX_FMT_RGB565:
			switch (color) {
			case 0:
				*p = (g_u << 5) | b_v;
				break;
			case 1:
				*p = (r_y << 3) | (g_u >> 3);
				break;
			}
			break;
		case V4L2_PIX_FMT_RGB565X:
			switch (color) {
			case 0:
				*p = (r_y << 3) | (g_u >> 3);
				break;
			case 1:
				*p = (g_u << 5) | b_v;
				break;
			}
			break;
		case V4L2_PIX_FMT_RGB555:
			switch (color) {
			case 0:
				*p = (g_u << 5) | b_v;
				break;
			case 1:
				*p = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
				break;
			}
			break;
		case V4L2_PIX_FMT_RGB555X:
			switch (color) {
			case 0:
				*p = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
				break;
			case 1:
				*p = (g_u << 5) | b_v;
				break;
			}
			break;
		case V4L2_PIX_FMT_RGB24:
			switch (color) {
			case 0:
				*p = r_y;
				break;
			case 1:
				*p = g_u;
				break;
			case 2:
				*p = b_v;
				break;
			}
			break;
		case V4L2_PIX_FMT_BGR24:
			switch (color) {
			case 0:
				*p = b_v;
				break;
			case 1:
				*p = g_u;
				break;
			case 2:
				*p = r_y;
				break;
			}
			break;
		case V4L2_PIX_FMT_RGB32:
			switch (color) {
			case 0:
				*p = alpha;
				break;
			case 1:
				*p = r_y;
				break;
			case 2:
				*p = g_u;
				break;
			case 3:
				*p = b_v;
				break;
			}
			break;
		case V4L2_PIX_FMT_BGR32:
			switch (color) {
			case 0:
				*p = b_v;
				break;
			case 1:
				*p = g_u;
				break;
			case 2:
				*p = r_y;
				break;
			case 3:
				*p = alpha;
				break;
			}
			break;
		}
	}
}

static void precalculate_line(struct vivi_dev *dev)
{
	int w;

	for (w = 0; w < dev->width * 2; w++) {
		int colorpos = w / (dev->width / 8) % 8;

		gen_twopix(dev, dev->line + w * dev->pixelsize, colorpos, w & 1);
	}
}
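
/*
 * Note: the precalculated line is twice the frame width; vivi_fillbuff()
 * copies each row starting at an offset that advances with mv_count,
 * which is what makes the color bars scroll horizontally.
 */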

static void gen_text(struct vivi_dev *dev, char *basep,
					int y, int x, char *text)
{
	int line;

	/* Checks if it is possible to show string */
	if (y + 16 >= dev->height || x + strlen(text) * 8 >= dev->width)
		return;

	/* Print stream time */
	for (line = y; line < y + 16; line++) {
		int j = 0;
		char *pos = basep + line * dev->width * dev->pixelsize + x * dev->pixelsize;
		char *s;

		for (s = text; *s; s++) {
			u8 chr = font8x16[*s * 16 + line - y];
			int i;

			for (i = 0; i < 7; i++, j++) {
				/* Draw white font on black background */
				if (chr & (1 << (7 - i)))
					gen_twopix(dev, pos + j * dev->pixelsize, WHITE, (x+y) & 1);
				else
					gen_twopix(dev, pos + j * dev->pixelsize, TEXT_BLACK, (x+y) & 1);
			}
		}
	}
}

static void vivi_fillbuff(struct vivi_dev *dev, struct vivi_buffer *buf)
{
	int wmax = dev->width;
	int hmax = dev->height;
	struct timeval ts;
	void *vbuf = vb2_plane_vaddr(&buf->vb, 0);
	unsigned ms;
	char str[100];
	int h, line = 1;
	s32 gain;

	if (!vbuf)
		return;

	for (h = 0; h < hmax; h++)
		memcpy(vbuf + h * wmax * dev->pixelsize,
		       dev->line + (dev->mv_count % wmax) * dev->pixelsize,
		       wmax * dev->pixelsize);

	/* Updates stream time */

	dev->ms += jiffies_to_msecs(jiffies - dev->jiffies);
	dev->jiffies = jiffies;
	ms = dev->ms;
	snprintf(str, sizeof(str), " %02d:%02d:%02d:%03d ",
			(ms / (60 * 60 * 1000)) % 24,
			(ms / (60 * 1000)) % 60,
			(ms / 1000) % 60,
			ms % 1000);
	gen_text(dev, vbuf, line++ * 16, 16, str);
	snprintf(str, sizeof(str), " %dx%d, input %d ",
			dev->width, dev->height, dev->input);
	gen_text(dev, vbuf, line++ * 16, 16, str);

	gain = v4l2_ctrl_g_ctrl(dev->gain);
	mutex_lock(dev->ctrl_handler.lock);
	snprintf(str, sizeof(str), " brightness %3d, contrast %3d, saturation %3d, hue %d ",
			dev->brightness->cur.val,
			dev->contrast->cur.val,
			dev->saturation->cur.val,
			dev->hue->cur.val);
	gen_text(dev, vbuf, line++ * 16, 16, str);
	snprintf(str, sizeof(str), " autogain %d, gain %3d, volume %3d, alpha 0x%02x ",
			dev->autogain->cur.val, gain, dev->volume->cur.val,
			dev->alpha->cur.val);
	gen_text(dev, vbuf, line++ * 16, 16, str);
	snprintf(str, sizeof(str), " int32 %d, int64 %lld, bitmask %08x ",
			dev->int32->cur.val,
			dev->int64->cur.val64,
			dev->bitmask->cur.val);
	gen_text(dev, vbuf, line++ * 16, 16, str);
	snprintf(str, sizeof(str), " boolean %d, menu %s, string \"%s\" ",
			dev->boolean->cur.val,
			dev->menu->qmenu[dev->menu->cur.val],
			dev->string->cur.string);
	gen_text(dev, vbuf, line++ * 16, 16, str);
	snprintf(str, sizeof(str), " integer_menu %lld, value %d ",
			dev->int_menu->qmenu_int[dev->int_menu->cur.val],
			dev->int_menu->cur.val);
	gen_text(dev, vbuf, line++ * 16, 16, str);
	mutex_unlock(dev->ctrl_handler.lock);
	if (dev->button_pressed) {
		dev->button_pressed--;
		snprintf(str, sizeof(str), " button pressed!");
		gen_text(dev, vbuf, line++ * 16, 16, str);
	}

	dev->mv_count += 2;

	buf->vb.v4l2_buf.field = dev->field;
	dev->field_count++;
	buf->vb.v4l2_buf.sequence = dev->field_count >> 1;
	do_gettimeofday(&ts);
	buf->vb.v4l2_buf.timestamp = ts;
}

static void vivi_thread_tick(struct vivi_dev *dev)
{
	struct vivi_dmaqueue *dma_q = &dev->vidq;
	struct vivi_buffer *buf;
	unsigned long flags = 0;

	dprintk(dev, 1, "Thread tick\n");

	spin_lock_irqsave(&dev->slock, flags);
	if (list_empty(&dma_q->active)) {
		dprintk(dev, 1, "No active queue to serve\n");
		spin_unlock_irqrestore(&dev->slock, flags);
		return;
	}

	buf = list_entry(dma_q->active.next, struct vivi_buffer, list);
	list_del(&buf->list);
	spin_unlock_irqrestore(&dev->slock, flags);

	do_gettimeofday(&buf->vb.v4l2_buf.timestamp);

	/* Fill buffer */
	vivi_fillbuff(dev, buf);
	dprintk(dev, 1, "filled buffer %p\n", buf);

	vb2_buffer_done(&buf->vb, VB2_BUF_STATE_DONE);
	dprintk(dev, 2, "[%p/%d] done\n", buf, buf->vb.v4l2_buf.index);
}

#define frames_to_ms(frames)					\
	((frames * WAKE_NUMERATOR * 1000) / WAKE_DENOMINATOR)
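
/*
 * frames_to_ms(1) evaluates to 30000/1001, roughly 30 ms, so vivi_sleep()
 * below paces the generator thread at about one frame every 30 ms.
 */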

static void vivi_sleep(struct vivi_dev *dev)
{
	struct vivi_dmaqueue *dma_q = &dev->vidq;
	int timeout;
	DECLARE_WAITQUEUE(wait, current);

	dprintk(dev, 1, "%s dma_q=0x%08lx\n", __func__,
		(unsigned long)dma_q);

	add_wait_queue(&dma_q->wq, &wait);
	if (kthread_should_stop())
		goto stop_task;

	/* Calculate time to wake up */
	timeout = msecs_to_jiffies(frames_to_ms(1));

	vivi_thread_tick(dev);

	schedule_timeout_interruptible(timeout);

stop_task:
	remove_wait_queue(&dma_q->wq, &wait);
	try_to_freeze();
}

static int vivi_thread(void *data)
{
	struct vivi_dev *dev = data;

	dprintk(dev, 1, "thread started\n");

	set_freezable();

	for (;;) {
		vivi_sleep(dev);

		if (kthread_should_stop())
			break;
	}
	dprintk(dev, 1, "thread: exit\n");
	return 0;
}

static int vivi_start_generating(struct vivi_dev *dev)
{
	struct vivi_dmaqueue *dma_q = &dev->vidq;

	dprintk(dev, 1, "%s\n", __func__);

	/* Resets frame counters */
	dev->ms = 0;
	dev->mv_count = 0;
	dev->jiffies = jiffies;

	dma_q->frame = 0;
	dma_q->ini_jiffies = jiffies;
	dma_q->kthread = kthread_run(vivi_thread, dev, dev->v4l2_dev.name);

	if (IS_ERR(dma_q->kthread)) {
		v4l2_err(&dev->v4l2_dev, "kernel_thread() failed\n");
		return PTR_ERR(dma_q->kthread);
	}
	/* Wakes thread */
	wake_up_interruptible(&dma_q->wq);

	dprintk(dev, 1, "returning from %s\n", __func__);
	return 0;
}

static void vivi_stop_generating(struct vivi_dev *dev)
{
	struct vivi_dmaqueue *dma_q = &dev->vidq;

	dprintk(dev, 1, "%s\n", __func__);

	/* shutdown control thread */
	if (dma_q->kthread) {
		kthread_stop(dma_q->kthread);
		dma_q->kthread = NULL;
	}

	/*
	 * A typical driver might need to wait here until the dma engine stops.
	 * In this case we can abort immediately, so it's just a noop.
	 */

	/* Release all active buffers */
	while (!list_empty(&dma_q->active)) {
		struct vivi_buffer *buf;
		buf = list_entry(dma_q->active.next, struct vivi_buffer, list);
		list_del(&buf->list);
		vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
		dprintk(dev, 2, "[%p/%d] done\n", buf, buf->vb.v4l2_buf.index);
	}
}
/* ------------------------------------------------------------------
	Videobuf operations
   ------------------------------------------------------------------*/
static int queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
				unsigned int *nbuffers, unsigned int *nplanes,
				unsigned int sizes[], void *alloc_ctxs[])
{
	struct vivi_dev *dev = vb2_get_drv_priv(vq);
	unsigned long size;

	size = dev->width * dev->height * dev->pixelsize;

	if (0 == *nbuffers)
		*nbuffers = 32;

	while (size * *nbuffers > vid_limit * 1024 * 1024)
		(*nbuffers)--;

	*nplanes = 1;

	sizes[0] = size;

	/*
	 * videobuf2-vmalloc allocator is context-less so no need to set
	 * alloc_ctxs array.
	 */

	dprintk(dev, 1, "%s, count=%d, size=%ld\n", __func__,
		*nbuffers, size);

	return 0;
}

static int buffer_init(struct vb2_buffer *vb)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);

	BUG_ON(NULL == dev->fmt);

	/*
	 * This callback is called once per buffer, after its allocation.
	 *
	 * Vivi does not allow changing format during streaming, but it is
	 * possible to do so when streaming is paused (i.e. in streamoff state).
	 * Buffers however are not freed when going into streamoff and so
	 * buffer size verification has to be done in buffer_prepare, on each
	 * qbuf.
	 * It would be best to move verification code here to buf_init and
	 * s_fmt though.
	 */

	return 0;
}

static int buffer_prepare(struct vb2_buffer *vb)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
	struct vivi_buffer *buf = container_of(vb, struct vivi_buffer, vb);
	unsigned long size;

	dprintk(dev, 1, "%s, field=%d\n", __func__, vb->v4l2_buf.field);

	BUG_ON(NULL == dev->fmt);

	/*
	 * These properties only change when the queue is idle; see s_fmt.
	 * The below checks should not be performed here, on each
	 * buffer_prepare (i.e. on each qbuf). Most of the code in this function
	 * should thus be moved to buffer_init and s_fmt.
	 */
	if (dev->width  < 48 || dev->width  > MAX_WIDTH ||
	    dev->height < 32 || dev->height > MAX_HEIGHT)
		return -EINVAL;

	size = dev->width * dev->height * dev->pixelsize;
	if (vb2_plane_size(vb, 0) < size) {
		dprintk(dev, 1, "%s data will not fit into plane (%lu < %lu)\n",
				__func__, vb2_plane_size(vb, 0), size);
		return -EINVAL;
	}

	vb2_set_plane_payload(&buf->vb, 0, size);

	buf->fmt = dev->fmt;

	precalculate_bars(dev);
	precalculate_line(dev);

	return 0;
}

static int buffer_finish(struct vb2_buffer *vb)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
	dprintk(dev, 1, "%s\n", __func__);
	return 0;
}

static void buffer_cleanup(struct vb2_buffer *vb)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
	dprintk(dev, 1, "%s\n", __func__);

}

static void buffer_queue(struct vb2_buffer *vb)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
	struct vivi_buffer *buf = container_of(vb, struct vivi_buffer, vb);
	struct vivi_dmaqueue *vidq = &dev->vidq;
	unsigned long flags = 0;

	dprintk(dev, 1, "%s\n", __func__);

	spin_lock_irqsave(&dev->slock, flags);
	list_add_tail(&buf->list, &vidq->active);
	spin_unlock_irqrestore(&dev->slock, flags);
}

static int start_streaming(struct vb2_queue *vq, unsigned int count)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vq);
	dprintk(dev, 1, "%s\n", __func__);
	return vivi_start_generating(dev);
}

/* abort streaming and wait for last buffer */
static int stop_streaming(struct vb2_queue *vq)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vq);
	dprintk(dev, 1, "%s\n", __func__);
	vivi_stop_generating(dev);
	return 0;
}

static void vivi_lock(struct vb2_queue *vq)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vq);
	mutex_lock(&dev->mutex);
}

static void vivi_unlock(struct vb2_queue *vq)
{
	struct vivi_dev *dev = vb2_get_drv_priv(vq);
	mutex_unlock(&dev->mutex);
}


static struct vb2_ops vivi_video_qops = {
	.queue_setup		= queue_setup,
	.buf_init		= buffer_init,
	.buf_prepare		= buffer_prepare,
	.buf_finish		= buffer_finish,
	.buf_cleanup		= buffer_cleanup,
	.buf_queue		= buffer_queue,
	.start_streaming	= start_streaming,
	.stop_streaming		= stop_streaming,
	.wait_prepare		= vivi_unlock,
	.wait_finish		= vivi_lock,
};
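
/*
 * vb2 callback flow for this driver: queue_setup() negotiates buffer
 * count/size (capping total memory at vid_limit MB), buf_prepare()
 * validates each buffer on qbuf, buf_queue() adds it to the driver's
 * active list, and start_streaming()/stop_streaming() start and stop
 * the generator thread. wait_prepare/wait_finish drop and retake the
 * driver mutex while videobuf2 sleeps, so other file operations are
 * not blocked.
 */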

/* ------------------------------------------------------------------
	IOCTL vidioc handling
   ------------------------------------------------------------------*/
static int vidioc_querycap(struct file *file, void  *priv,
					struct v4l2_capability *cap)
{
	struct vivi_dev *dev = video_drvdata(file);

	strcpy(cap->driver, "vivi");
	strcpy(cap->card, "vivi");
	strlcpy(cap->bus_info, dev->v4l2_dev.name, sizeof(cap->bus_info));
	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING |
			    V4L2_CAP_READWRITE;
	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
	return 0;
}

static int vidioc_enum_fmt_vid_cap(struct file *file, void  *priv,
					struct v4l2_fmtdesc *f)
{
	struct vivi_fmt *fmt;

	if (f->index >= ARRAY_SIZE(formats))
		return -EINVAL;

	fmt = &formats[f->index];

	strlcpy(f->description, fmt->name, sizeof(f->description));
	f->pixelformat = fmt->fourcc;
	return 0;
}

static int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
					struct v4l2_format *f)
{
	struct vivi_dev *dev = video_drvdata(file);

	f->fmt.pix.width        = dev->width;
	f->fmt.pix.height       = dev->height;
	f->fmt.pix.field        = dev->field;
	f->fmt.pix.pixelformat  = dev->fmt->fourcc;
	f->fmt.pix.bytesperline =
		(f->fmt.pix.width * dev->fmt->depth) >> 3;
	f->fmt.pix.sizeimage =
		f->fmt.pix.height * f->fmt.pix.bytesperline;
	if (dev->fmt->fourcc == V4L2_PIX_FMT_YUYV ||
	    dev->fmt->fourcc == V4L2_PIX_FMT_UYVY)
		f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
	else
		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
	return 0;
}

static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
			struct v4l2_format *f)
{
	struct vivi_dev *dev = video_drvdata(file);
	struct vivi_fmt *fmt;
	enum v4l2_field field;

	fmt = get_format(f);
	if (!fmt) {
		dprintk(dev, 1, "Fourcc format (0x%08x) invalid.\n",
			f->fmt.pix.pixelformat);
		return -EINVAL;
	}

	field = f->fmt.pix.field;

	if (field == V4L2_FIELD_ANY) {
		field = V4L2_FIELD_INTERLACED;
	} else if (V4L2_FIELD_INTERLACED != field) {
		dprintk(dev, 1, "Field type invalid.\n");
		return -EINVAL;
	}

	f->fmt.pix.field = field;
	v4l_bound_align_image(&f->fmt.pix.width, 48, MAX_WIDTH, 2,
			      &f->fmt.pix.height, 32, MAX_HEIGHT, 0, 0);
	f->fmt.pix.bytesperline =
		(f->fmt.pix.width * fmt->depth) >> 3;
	f->fmt.pix.sizeimage =
		f->fmt.pix.height * f->fmt.pix.bytesperline;
	if (fmt->fourcc == V4L2_PIX_FMT_YUYV ||
	    fmt->fourcc == V4L2_PIX_FMT_UYVY)
		f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
	else
		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
	return 0;
}

static int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
					struct v4l2_format *f)
{
	struct vivi_dev *dev = video_drvdata(file);
	struct vb2_queue *q = &dev->vb_vidq;

	int ret = vidioc_try_fmt_vid_cap(file, priv, f);
	if (ret < 0)
		return ret;

	if (vb2_is_streaming(q)) {
		dprintk(dev, 1, "%s device busy\n", __func__);
		return -EBUSY;
	}

	dev->fmt = get_format(f);
	dev->pixelsize = dev->fmt->depth / 8;
	dev->width = f->fmt.pix.width;
	dev->height = f->fmt.pix.height;
	dev->field = f->fmt.pix.field;

	return 0;
}

static int vidioc_reqbufs(struct file *file, void *priv,
			  struct v4l2_requestbuffers *p)
{
	struct vivi_dev *dev = video_drvdata(file);
	return vb2_reqbufs(&dev->vb_vidq, p);
}

static int vidioc_querybuf(struct file *file, void *priv, struct v4l2_buffer *p)
{
	struct vivi_dev *dev = video_drvdata(file);
	return vb2_querybuf(&dev->vb_vidq, p);
}

static int vidioc_qbuf(struct file *file, void *priv, struct v4l2_buffer *p)
{
	struct vivi_dev *dev = video_drvdata(file);
	return vb2_qbuf(&dev->vb_vidq, p);
}

static int vidioc_dqbuf(struct file *file, void *priv, struct v4l2_buffer *p)
{
	struct vivi_dev *dev = video_drvdata(file);
	return vb2_dqbuf(&dev->vb_vidq, p, file->f_flags & O_NONBLOCK);
}

static int vidioc_streamon(struct file *file, void *priv, enum v4l2_buf_type i)
{
	struct vivi_dev *dev = video_drvdata(file);
	return vb2_streamon(&dev->vb_vidq, i);
}

static int vidioc_streamoff(struct file *file, void *priv, enum v4l2_buf_type i)
{
	struct vivi_dev *dev = video_drvdata(file);
	return vb2_streamoff(&dev->vb_vidq, i);
}

static int vidioc_s_std(struct file *file, void *priv, v4l2_std_id *i)
{
	return 0;
}

/* only one input in this sample driver */
static int vidioc_enum_input(struct file *file, void *priv,
				struct v4l2_input *inp)
{
	if (inp->index >= NUM_INPUTS)
		return -EINVAL;

	inp->type = V4L2_INPUT_TYPE_CAMERA;
	inp->std = V4L2_STD_525_60;
	sprintf(inp->name, "Camera %u", inp->index);
	return 0;
}

static int vidioc_g_input(struct file *file, void *priv, unsigned int *i)
{
	struct vivi_dev *dev = video_drvdata(file);

	*i = dev->input;
	return 0;
}

static int vidioc_s_input(struct file *file, void *priv, unsigned int i)
{
	struct vivi_dev *dev = video_drvdata(file);

	if (i >= NUM_INPUTS)
		return -EINVAL;

	if (i == dev->input)
		return 0;

	dev->input = i;
	precalculate_bars(dev);
	precalculate_line(dev);
	return 0;
}

/* --- controls ---------------------------------------------- */

static int vivi_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
{
	struct vivi_dev *dev = container_of(ctrl->handler, struct vivi_dev, ctrl_handler);

	if (ctrl == dev->autogain)
		dev->gain->val = jiffies & 0xff;
	return 0;
}

static int vivi_s_ctrl(struct v4l2_ctrl *ctrl)
{
	struct vivi_dev *dev = container_of(ctrl->handler, struct vivi_dev, ctrl_handler);

	switch (ctrl->id) {
	case V4L2_CID_ALPHA_COMPONENT:
		dev->alpha_component = ctrl->val;
		break;
	default:
		if (ctrl == dev->button)
			dev->button_pressed = 30;
		break;
	}
	return 0;
}

/* ------------------------------------------------------------------
	File operations for the device
   ------------------------------------------------------------------*/

static ssize_t
vivi_read(struct file *file, char __user *data, size_t count, loff_t *ppos)
{
	struct vivi_dev *dev = video_drvdata(file);
	int err;

	dprintk(dev, 1, "read called\n");
	mutex_lock(&dev->mutex);
	err = vb2_read(&dev->vb_vidq, data, count, ppos,
		       file->f_flags & O_NONBLOCK);
	mutex_unlock(&dev->mutex);
	return err;
}

static unsigned int
vivi_poll(struct file *file, struct poll_table_struct *wait)
{
	struct vivi_dev *dev = video_drvdata(file);
	struct vb2_queue *q = &dev->vb_vidq;

	dprintk(dev, 1, "%s\n", __func__);
	return vb2_poll(q, file, wait);
}

static int vivi_close(struct file *file)
{
	struct video_device  *vdev = video_devdata(file);
	struct vivi_dev *dev = video_drvdata(file);

	dprintk(dev, 1, "close called (dev=%s), file %p\n",
		video_device_node_name(vdev), file);

	if (v4l2_fh_is_singular_file(file))
		vb2_queue_release(&dev->vb_vidq);
	return v4l2_fh_release(file);
}

static int vivi_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct vivi_dev *dev = video_drvdata(file);
	int ret;

	dprintk(dev, 1, "mmap called, vma=0x%08lx\n", (unsigned long)vma);

	ret = vb2_mmap(&dev->vb_vidq, vma);
	dprintk(dev, 1, "vma start=0x%08lx, size=%ld, ret=%d\n",
		(unsigned long)vma->vm_start,
		(unsigned long)vma->vm_end - (unsigned long)vma->vm_start,
		ret);
	return ret;
}

static const struct v4l2_ctrl_ops vivi_ctrl_ops = {
	.g_volatile_ctrl = vivi_g_volatile_ctrl,
	.s_ctrl = vivi_s_ctrl,
};

#define VIVI_CID_CUSTOM_BASE	(V4L2_CID_USER_BASE | 0xf000)

static const struct v4l2_ctrl_config vivi_ctrl_button = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 0,
	.name = "Button",
	.type = V4L2_CTRL_TYPE_BUTTON,
};

static const struct v4l2_ctrl_config vivi_ctrl_boolean = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 1,
	.name = "Boolean",
	.type = V4L2_CTRL_TYPE_BOOLEAN,
	.min = 0,
	.max = 1,
	.step = 1,
	.def = 1,
};

static const struct v4l2_ctrl_config vivi_ctrl_int32 = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 2,
	.name = "Integer 32 Bits",
	.type = V4L2_CTRL_TYPE_INTEGER,
	.min = 0x80000000,
	.max = 0x7fffffff,
	.step = 1,
};

static const struct v4l2_ctrl_config vivi_ctrl_int64 = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 3,
	.name = "Integer 64 Bits",
	.type = V4L2_CTRL_TYPE_INTEGER64,
};

static const char * const vivi_ctrl_menu_strings[] = {
	"Menu Item 0 (Skipped)",
	"Menu Item 1",
	"Menu Item 2 (Skipped)",
	"Menu Item 3",
	"Menu Item 4",
	"Menu Item 5 (Skipped)",
	NULL,
};

static const struct v4l2_ctrl_config vivi_ctrl_menu = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 4,
	.name = "Menu",
	.type = V4L2_CTRL_TYPE_MENU,
	.min = 1,
	.max = 4,
	.def = 3,
	.menu_skip_mask = 0x04,
	.qmenu = vivi_ctrl_menu_strings,
};

static const struct v4l2_ctrl_config vivi_ctrl_string = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 5,
	.name = "String",
	.type = V4L2_CTRL_TYPE_STRING,
	.min = 2,
	.max = 4,
	.step = 1,
};

static const struct v4l2_ctrl_config vivi_ctrl_bitmask = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 6,
	.name = "Bitmask",
	.type = V4L2_CTRL_TYPE_BITMASK,
	.def = 0x80002000,
	.min = 0,
	.max = 0x80402010,
	.step = 0,
};

static const s64 vivi_ctrl_int_menu_values[] = {
	1, 1, 2, 3, 5, 8, 13, 21, 42,
};

static const struct v4l2_ctrl_config vivi_ctrl_int_menu = {
	.ops = &vivi_ctrl_ops,
	.id = VIVI_CID_CUSTOM_BASE + 7,
	.name = "Integer menu",
	.type = V4L2_CTRL_TYPE_INTEGER_MENU,
	.min = 1,
	.max = 8,
	.def = 4,
	.menu_skip_mask = 0x02,
	.qmenu_int = vivi_ctrl_int_menu_values,
};

static const struct v4l2_file_operations vivi_fops = {
	.owner		= THIS_MODULE,
	.open           = v4l2_fh_open,
	.release        = vivi_close,
	.read           = vivi_read,
	.poll		= vivi_poll,
	.unlocked_ioctl = video_ioctl2, /* V4L2 ioctl handler */
	.mmap           = vivi_mmap,
};

static const struct v4l2_ioctl_ops vivi_ioctl_ops = {
	.vidioc_querycap      = vidioc_querycap,
	.vidioc_enum_fmt_vid_cap  = vidioc_enum_fmt_vid_cap,
	.vidioc_g_fmt_vid_cap     = vidioc_g_fmt_vid_cap,
	.vidioc_try_fmt_vid_cap   = vidioc_try_fmt_vid_cap,
	.vidioc_s_fmt_vid_cap     = vidioc_s_fmt_vid_cap,
	.vidioc_reqbufs       = vidioc_reqbufs,
	.vidioc_querybuf      = vidioc_querybuf,
	.vidioc_qbuf          = vidioc_qbuf,
	.vidioc_dqbuf         = vidioc_dqbuf,
	.vidioc_s_std         = vidioc_s_std,
	.vidioc_enum_input    = vidioc_enum_input,
	.vidioc_g_input       = vidioc_g_input,
	.vidioc_s_input       = vidioc_s_input,
	.vidioc_streamon      = vidioc_streamon,
	.vidioc_streamoff     = vidioc_streamoff,
	.vidioc_log_status    = v4l2_ctrl_log_status,
	.vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
	.vidioc_unsubscribe_event = v4l2_event_unsubscribe,
};

static struct video_device vivi_template = {
	.name		= "vivi",
	.fops           = &vivi_fops,
	.ioctl_ops 	= &vivi_ioctl_ops,
	.release	= video_device_release,

	.tvnorms              = V4L2_STD_525_60,
	.current_norm         = V4L2_STD_NTSC_M,
};

/* -----------------------------------------------------------------
	Initialization and module stuff
   ------------------------------------------------------------------*/

static int vivi_release(void)
{
	struct vivi_dev *dev;
	struct list_head *list;

	while (!list_empty(&vivi_devlist)) {
		list = vivi_devlist.next;
		list_del(list);
		dev = list_entry(list, struct vivi_dev, vivi_devlist);

		v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
			video_device_node_name(dev->vfd));
		video_unregister_device(dev->vfd);
		v4l2_device_unregister(&dev->v4l2_dev);
		v4l2_ctrl_handler_free(&dev->ctrl_handler);
		kfree(dev);
	}

	return 0;
}

static int __init vivi_create_instance(int inst)
{
	struct vivi_dev *dev;
	struct video_device *vfd;
	struct v4l2_ctrl_handler *hdl;
	struct vb2_queue *q;
	int ret;

	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
	if (!dev)
		return -ENOMEM;

	snprintf(dev->v4l2_dev.name, sizeof(dev->v4l2_dev.name),
			"%s-%03d", VIVI_MODULE_NAME, inst);
	ret = v4l2_device_register(NULL, &dev->v4l2_dev);
	if (ret)
		goto free_dev;

	dev->fmt = &formats[0];
	dev->width = 640;
	dev->height = 480;
	dev->pixelsize = dev->fmt->depth / 8;
	hdl = &dev->ctrl_handler;
	v4l2_ctrl_handler_init(hdl, 11);
	dev->volume = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_AUDIO_VOLUME, 0, 255, 1, 200);
	dev->brightness = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_BRIGHTNESS, 0, 255, 1, 127);
	dev->contrast = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_CONTRAST, 0, 255, 1, 16);
	dev->saturation = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_SATURATION, 0, 255, 1, 127);
	dev->hue = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_HUE, -128, 127, 1, 0);
	dev->autogain = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_AUTOGAIN, 0, 1, 1, 1);
	dev->gain = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_GAIN, 0, 255, 1, 100);
	dev->alpha = v4l2_ctrl_new_std(hdl, &vivi_ctrl_ops,
			V4L2_CID_ALPHA_COMPONENT, 0, 255, 1, 0);
	dev->button = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_button, NULL);
	dev->int32 = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_int32, NULL);
	dev->int64 = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_int64, NULL);
	dev->boolean = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_boolean, NULL);
	dev->menu = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_menu, NULL);
	dev->string = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_string, NULL);
	dev->bitmask = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_bitmask, NULL);
	dev->int_menu = v4l2_ctrl_new_custom(hdl, &vivi_ctrl_int_menu, NULL);
	if (hdl->error) {
		ret = hdl->error;
		goto unreg_dev;
	}
	v4l2_ctrl_auto_cluster(2, &dev->autogain, 0, true);
	dev->v4l2_dev.ctrl_handler = hdl;

	/* initialize locks */
	spin_lock_init(&dev->slock);

	/* initialize queue */
	q = &dev->vb_vidq;
	memset(q, 0, sizeof(dev->vb_vidq));
	q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_READ;
	q->drv_priv = dev;
	q->buf_struct_size = sizeof(struct vivi_buffer);
	q->ops = &vivi_video_qops;
	q->mem_ops = &vb2_vmalloc_memops;

	vb2_queue_init(q);

	mutex_init(&dev->mutex);

	/* init video dma queues */
	INIT_LIST_HEAD(&dev->vidq.active);
	init_waitqueue_head(&dev->vidq.wq);

	ret = -ENOMEM;
	vfd = video_device_alloc();
	if (!vfd)
		goto unreg_dev;

	*vfd = vivi_template;
	vfd->debug = debug;
	vfd->v4l2_dev = &dev->v4l2_dev;
	set_bit(V4L2_FL_USE_FH_PRIO, &vfd->flags);

	/*
	 * Provide a mutex to v4l2 core. It will be used to protect
	 * all fops and v4l2 ioctls.
	 */
	vfd->lock = &dev->mutex;

	ret = video_register_device(vfd, VFL_TYPE_GRABBER, video_nr);
	if (ret < 0)
		goto rel_vdev;

	video_set_drvdata(vfd, dev);

	/* Now that everything is fine, let's add it to device list */
	list_add_tail(&dev->vivi_devlist, &vivi_devlist);

	if (video_nr != -1)
		video_nr++;

	dev->vfd = vfd;
	v4l2_info(&dev->v4l2_dev, "V4L2 device registered as %s\n",
		  video_device_node_name(vfd));
	return 0;

rel_vdev:
	video_device_release(vfd);
unreg_dev:
	v4l2_ctrl_handler_free(hdl);
	v4l2_device_unregister(&dev->v4l2_dev);
free_dev:
	kfree(dev);
	return ret;
}

/* This routine allocates from 1 to n_devs virtual drivers.

   The real maximum number of virtual drivers will depend on how many drivers
   will succeed. This is limited to the maximum number of devices that
   videodev supports, which is equal to VIDEO_NUM_DEVICES.
 */
static int __init vivi_init(void)
{
	const struct font_desc *font = find_font("VGA8x16");
	int ret = 0, i;

	if (font == NULL) {
		printk(KERN_ERR "vivi: could not find font\n");
		return -ENODEV;
	}
	font8x16 = font->data;

	if (n_devs <= 0)
		n_devs = 1;

	for (i = 0; i < n_devs; i++) {
		ret = vivi_create_instance(i);
		if (ret) {
			/* If some instantiations succeeded, keep driver */
			if (i)
				ret = 0;
			break;
		}
	}

	if (ret < 0) {
		printk(KERN_ERR "vivi: error %d while loading driver\n", ret);
		return ret;
	}

	printk(KERN_INFO "Video Technology Magazine Virtual Video "
			"Capture Board ver %s successfully loaded.\n",
			VIVI_VERSION);

	/* n_devs will reflect the actual number of allocated devices */
	n_devs = i;

	return ret;
}

static void __exit vivi_exit(void)
{
	vivi_release();
}

module_init(vivi_init);
module_exit(vivi_exit);

III. Writing a Camera Application

1. Program Flow

The application below follows the standard V4L2 mmap capture flow: open the device; query its capabilities (VIDIOC_QUERYCAP); set the capture format (VIDIOC_S_FMT); request kernel buffers, map them into user space and queue them (VIDIOC_REQBUFS / VIDIOC_QUERYBUF / mmap / VIDIOC_QBUF); start streaming (VIDIOC_STREAMON); wait for a frame with select() and dequeue it (VIDIOC_DQBUF); save the frame and re-queue the buffer; finally stop streaming (VIDIOC_STREAMOFF) and release the resources.

2. Application Source Code

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/select.h>
#include <string.h>
/* According to earlier standards */
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>


struct v4l2_capability cap;
struct v4l2_format fmt;
struct v4l2_requestbuffers req_buf;
struct v4l2_buffer kbuf;
enum v4l2_buf_type buf_type;
int i;
struct video_buffer{
	void * start;
	int length;
};
struct video_buffer *buffer;

int open_camera(void)
{
	int fd;
	fd = open("/dev/video0",O_RDWR);
	if(fd < 0){
		perror("open /dev/video0 is fail");
		return -1;
	}
	return fd;
}

int check_device(int fd)
{
	if(ioctl(fd,VIDIOC_QUERYCAP,&cap) == -1){
		perror("query device is fail");
		return -1;
	}
	if(!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)){
		return -1;
	}

	return 0;
}

int set_camera_fmt(int fd)
{
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = 720;
	fmt.fmt.pix.height = 576;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
	fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;

	if(ioctl(fd,VIDIOC_S_FMT,&fmt) == -1){
		perror("set camera fmt is fail");
		return -1;
	}
	
	return 0;
}

int init_buffer(int fd)
{
	/* 1. Request buffers from the driver */
	req_buf.count = 4;
	req_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req_buf.memory = V4L2_MEMORY_MMAP;
	
	if(ioctl(fd,VIDIOC_REQBUFS,&req_buf) == -1){
		perror("VIDIOC_REQBUFS failed");
		return -1;
	}
	
	/* 2. Query each buffer's offset, map it into user space and queue it */
	buffer = calloc(req_buf.count,sizeof(*buffer));
	if(buffer == NULL){
		perror("calloc failed");
		return -1;
	}
	for(i=0; i<req_buf.count; i++){
		memset(&kbuf,0,sizeof(kbuf));
		kbuf.index = i;
		kbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		kbuf.memory = V4L2_MEMORY_MMAP;
		if(ioctl(fd,VIDIOC_QUERYBUF,&kbuf) == -1){
			perror("VIDIOC_QUERYBUF failed");
			return -1;
		}
		buffer[i].length = kbuf.length;
		buffer[i].start = mmap(NULL,kbuf.length, PROT_READ|PROT_WRITE, MAP_SHARED,fd,kbuf.m.offset);
		if(buffer[i].start == MAP_FAILED){
			perror("mmap failed");
			return -1;
		}
		if(ioctl(fd,VIDIOC_QBUF,&kbuf) == -1){
			perror("VIDIOC_QBUF failed");
			return -1;
		}
	}
	return 0;
}

int start_camera(int fd)
{
	buf_type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if(ioctl(fd,VIDIOC_STREAMON,&buf_type) == -1){
		perror("set VIDIOC_STREAMON is fail");
		return -1;
	}
	return 0;

}

int stop_camera(int fd)
{
	buf_type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if(ioctl(fd,VIDIOC_STREAMOFF,&buf_type) == -1){
		perror("set VIDIOC_STREAMON is fail");
		return -1;
	}
	return 0;

}


int build_picture(int fd)
{
	FILE * fp;
	memset(&kbuf,0,sizeof(kbuf));
	kbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	kbuf.memory = V4L2_MEMORY_MMAP;
	if(ioctl(fd,VIDIOC_DQBUF,&kbuf) == -1){
		perror("VIDIOC_DQBUF failed");
		return -1;
	}

	/* Write the raw YUYV frame to a file */
	fp = fopen("picture.yuv","wb");
	if(fp == NULL){
		perror("fopen picture.yuv failed");
		return -1;
	}
	fwrite(buffer[kbuf.index].start,1,buffer[kbuf.index].length,fp);
	fclose(fp);

	/* Give the buffer back to the driver */
	if(ioctl(fd,VIDIOC_QBUF,&kbuf) == -1){
			perror("VIDIOC_QBUF failed");
			return -1;
	}
	return 0;
}

int camera_read_data(int fd)
{
	int ret;
	fd_set rfds;
	struct timeval tim;
	tim.tv_sec = 2;
	tim.tv_usec = 0;

	FD_ZERO(&rfds);
	FD_SET(fd, &rfds);
	ret = select(fd+1, &rfds,NULL,NULL, &tim);
	if(ret == -1){
		perror("select failed");
		return -1;
	}else if(ret == 0){
		fprintf(stderr,"select timeout\n");
		return -1;
	}else{
		build_picture(fd);
	}
	return 0;
}

int data_free(int fd)
{
	for(i=0; i<req_buf.count; i++){
		munmap(buffer[i].start,buffer[i].length);
	}
	free(buffer);
	close(fd);
	return 0;
}

int main(int argc, const char *argv[])
{
	int fd,ret;
	//1. Open the device
	fd = open_camera();
	if(fd == -1){
		return -1;
	}

	//2. Check that this is a video-capture device
	ret = check_device(fd);
	if(ret == -1){
		return -1;
	}

	//3. Set the capture format
	ret = set_camera_fmt(fd);
	if(ret != 0){
		return -1;
	}

	//4. Set up the buffers
	init_buffer(fd);
	
	//5. Start capturing
	start_camera(fd);

	//6. Read one frame
	camera_read_data(fd);

	//7. Stop capturing
	stop_camera(fd);
	//8. Release the buffers
	data_free(fd);

	return 0;

}

IV. Camera Service Registration, Service Acquisition, and the HAL Layer

1. /frameworks/av/camera/cameraserver/main_cameraserver.cpp

/*
 * Copyright (C) 2015 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#define LOG_TAG "cameraserver"
//#define LOG_NDEBUG 0

#include "CameraService.h"
#include <hidl/HidlTransportSupport.h>

using namespace android;

int main(int argc __unused, char** argv __unused)
{
    signal(SIGPIPE, SIG_IGN);

    // Set 3 threads for HIDL calls
    hardware::configureRpcThreadpool(3, /*willjoin*/ false);

    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    CameraService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
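
The key line here is CameraService::instantiate(). CameraService derives from the BinderService<> template, whose instantiate()/publish() helpers call defaultServiceManager()->addService() with the service's name ("media.camera"), registering the camera service with ServiceManager so that client processes can look it up over Binder. The thread-pool calls that follow keep the process alive to serve incoming Binder and HIDL transactions.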

2. /frameworks/base/core/java/android/hardware/Camera.java

   public static Camera open() {
        int numberOfCameras = getNumberOfCameras();
        CameraInfo cameraInfo = new CameraInfo();
        for (int i = 0; i < numberOfCameras; i++) {
            getCameraInfo(i, cameraInfo);
            if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
                return new Camera(i);
            }
        }
        return null;
    }
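
This is the client half of the story: Camera.open() picks the first back-facing camera and constructs a Java Camera object, whose constructor ends up in the native_setup() JNI method of android_hardware_Camera.cpp (next file). That native code calls Camera::connect(), which looks up the "media.camera" service registered above and obtains a camera client from CameraService that is ultimately backed by the camera HAL.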

Camera.java source code

3. /frameworks/base/core/jni/android_hardware_Camera.cpp

/*
**
** Copyright 2008, The Android Open Source Project
**
** Licensed under the Apache License, Version 2.0 (the "License");
** you may not use this file except in compliance with the License.
** You may obtain a copy of the License at
**
**     http://www.apache.org/licenses/LICENSE-2.0
**
** Unless required by applicable law or agreed to in writing, software
** distributed under the License is distributed on an "AS IS" BASIS,
** WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
** See the License for the specific language governing permissions and
** limitations under the License.
*/

//#define LOG_NDEBUG 0
#define LOG_TAG "Camera-JNI"
#include <utils/Log.h>

#include "jni.h"
#include <nativehelper/JNIHelp.h>
#include "core_jni_helpers.h"
#include <android_runtime/android_graphics_SurfaceTexture.h>
#include <android_runtime/android_view_Surface.h>

#include <cutils/properties.h>
#include <utils/Vector.h>
#include <utils/Errors.h>

#include <gui/GLConsumer.h>
#include <gui/Surface.h>
#include <camera/Camera.h>
#include <binder/IMemory.h>

using namespace android;

enum {
    // Keep up to date with Camera.java
    CAMERA_HAL_API_VERSION_NORMAL_CONNECT = -2,
};

struct fields_t {
    jfieldID    context;
    jfieldID    facing;
    jfieldID    orientation;
    jfieldID    canDisableShutterSound;
    jfieldID    face_rect;
    jfieldID    face_score;
    jfieldID    face_id;
    jfieldID    face_left_eye;
    jfieldID    face_right_eye;
    jfieldID    face_mouth;
    jfieldID    rect_left;
    jfieldID    rect_top;
    jfieldID    rect_right;
    jfieldID    rect_bottom;
    jfieldID    point_x;
    jfieldID    point_y;
    jmethodID   post_event;
    jmethodID   rect_constructor;
    jmethodID   face_constructor;
    jmethodID   point_constructor;
};

static fields_t fields;
static Mutex sLock;

// provides persistent context for calls from native code to Java
class JNICameraContext: public CameraListener
{
public:
    JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz, const sp<Camera>& camera);
    ~JNICameraContext() { release(); }
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata);
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr);
    virtual void postRecordingFrameHandleTimestamp(nsecs_t timestamp, native_handle_t* handle);
    virtual void postRecordingFrameHandleTimestampBatch(
            const std::vector<nsecs_t>& timestamps,
            const std::vector<native_handle_t*>& handles);
    void postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata);
    void addCallbackBuffer(JNIEnv *env, jbyteArray cbb, int msgType);
    void setCallbackMode(JNIEnv *env, bool installed, bool manualMode);
    sp<Camera> getCamera() { Mutex::Autolock _l(mLock); return mCamera; }
    bool isRawImageCallbackBufferAvailable() const;
    void release();

private:
    void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);
    void clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers);
    void clearCallbackBuffers_l(JNIEnv *env);
    jbyteArray getCallbackBuffer(JNIEnv *env, Vector<jbyteArray> *buffers, size_t bufferSize);

    jobject     mCameraJObjectWeak;     // weak reference to java object
    jclass      mCameraJClass;          // strong reference to java class
    sp<Camera>  mCamera;                // strong reference to native object
    jclass      mFaceClass;  // strong reference to Face class
    jclass      mRectClass;  // strong reference to Rect class
    jclass      mPointClass;  // strong reference to Point class
    Mutex       mLock;

    /*
     * Global reference application-managed raw image buffer queue.
     *
     * Manual-only mode is supported for raw image callbacks, which is
     * set whenever method addCallbackBuffer() with msgType =
     * CAMERA_MSG_RAW_IMAGE is called; otherwise, null is returned
     * with raw image callbacks.
     */
    Vector<jbyteArray> mRawImageCallbackBuffers;

    /*
     * Application-managed preview buffer queue and the flags
     * associated with the usage of the preview buffer callback.
     */
    Vector<jbyteArray> mCallbackBuffers; // Global reference application managed byte[]
    bool mManualBufferMode;              // Whether to use application managed buffers.
    bool mManualCameraCallbackSet;       // Whether the callback has been set, used to
                                         // reduce unnecessary calls to set the callback.
};

bool JNICameraContext::isRawImageCallbackBufferAvailable() const
{
    return !mRawImageCallbackBuffers.isEmpty();
}

sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
{
    sp<Camera> camera;
    Mutex::Autolock _l(sLock);
    JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
    if (context != NULL) {
        camera = context->getCamera();
    }
    ALOGV("get_native_camera: context=%p, camera=%p", context, camera.get());
    if (camera == 0) {
        jniThrowRuntimeException(env,
                "Camera is being used after Camera.release() was called");
    }

    if (pContext != NULL) *pContext = context;
    return camera;
}

JNICameraContext::JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz, const sp<Camera>& camera)
{
    mCameraJObjectWeak = env->NewGlobalRef(weak_this);
    mCameraJClass = (jclass)env->NewGlobalRef(clazz);
    mCamera = camera;

    jclass faceClazz = env->FindClass("android/hardware/Camera$Face");
    mFaceClass = (jclass) env->NewGlobalRef(faceClazz);

    jclass rectClazz = env->FindClass("android/graphics/Rect");
    mRectClass = (jclass) env->NewGlobalRef(rectClazz);

    jclass pointClazz = env->FindClass("android/graphics/Point");
    mPointClass = (jclass) env->NewGlobalRef(pointClazz);

    mManualBufferMode = false;
    mManualCameraCallbackSet = false;
}

void JNICameraContext::release()
{
    ALOGV("release");
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();

    if (mCameraJObjectWeak != NULL) {
        env->DeleteGlobalRef(mCameraJObjectWeak);
        mCameraJObjectWeak = NULL;
    }
    if (mCameraJClass != NULL) {
        env->DeleteGlobalRef(mCameraJClass);
        mCameraJClass = NULL;
    }
    if (mFaceClass != NULL) {
        env->DeleteGlobalRef(mFaceClass);
        mFaceClass = NULL;
    }
    if (mRectClass != NULL) {
        env->DeleteGlobalRef(mRectClass);
        mRectClass = NULL;
    }
    if (mPointClass != NULL) {
        env->DeleteGlobalRef(mPointClass);
        mPointClass = NULL;
    }
    clearCallbackBuffers_l(env);
    mCamera.clear();
}

void JNICameraContext::notify(int32_t msgType, int32_t ext1, int32_t ext2)
{
    ALOGV("notify");

    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    if (mCameraJObjectWeak == NULL) {
        ALOGW("callback on dead camera object");
        return;
    }
    JNIEnv *env = AndroidRuntime::getJNIEnv();

    /*
     * If the notification or msgType is CAMERA_MSG_RAW_IMAGE_NOTIFY, change it
     * to CAMERA_MSG_RAW_IMAGE since CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed
     * to the Java app.
     */
    if (msgType == CAMERA_MSG_RAW_IMAGE_NOTIFY) {
        msgType = CAMERA_MSG_RAW_IMAGE;
    }

    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, ext1, ext2, NULL);
}

jbyteArray JNICameraContext::getCallbackBuffer(
        JNIEnv* env, Vector<jbyteArray>* buffers, size_t bufferSize)
{
    jbyteArray obj = NULL;

    // Vector access should be protected by lock in postData()
    if (!buffers->isEmpty()) {
        ALOGV("Using callback buffer from queue of length %zu", buffers->size());
        jbyteArray globalBuffer = buffers->itemAt(0);
        buffers->removeAt(0);

        obj = (jbyteArray)env->NewLocalRef(globalBuffer);
        env->DeleteGlobalRef(globalBuffer);

        if (obj != NULL) {
            jsize bufferLength = env->GetArrayLength(obj);
            if ((int)bufferLength < (int)bufferSize) {
                ALOGE("Callback buffer was too small! Expected %zu bytes, but got %d bytes!",
                    bufferSize, bufferLength);
                env->DeleteLocalRef(obj);
                return NULL;
            }
        }
    }

    return obj;
}

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        ALOGV("copyAndPost: off=%zd, size=%zu", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    ALOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                ALOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                ALOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            ALOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        ALOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            ALOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}
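
/*
 * Note (editorial, not part of the AOSP file): a single callback may carry
 * both pixel data and preview metadata. For example, with
 *   msgType = CAMERA_MSG_PREVIEW_FRAME | CAMERA_MSG_PREVIEW_METADATA,
 * dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA leaves just
 * CAMERA_MSG_PREVIEW_FRAME for the switch above, while the metadata bit is
 * then handled separately by postMetadata().
 */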

void JNICameraContext::postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr)
{
    // TODO: plumb up to Java. For now, just drop the timestamp
    postData(msgType, dataPtr, NULL);
}

void JNICameraContext::postRecordingFrameHandleTimestamp(nsecs_t, native_handle_t* handle) {
    // Video buffers are not needed at app layer so just return the video buffers here.
    // This may be called when stagefright just releases camera but there are still outstanding
    // video buffers.
    if (mCamera != nullptr) {
        mCamera->releaseRecordingFrameHandle(handle);
    } else {
        native_handle_close(handle);
        native_handle_delete(handle);
    }
}

void JNICameraContext::postRecordingFrameHandleTimestampBatch(
        const std::vector<nsecs_t>&,
        const std::vector<native_handle_t*>& handles) {
    // Video buffers are not needed at app layer so just return the video buffers here.
    // This may be called when stagefright just releases camera but there are still outstanding
    // video buffers.
    if (mCamera != nullptr) {
        mCamera->releaseRecordingFrameHandleBatch(handles);
    } else {
        for (auto& handle : handles) {
            native_handle_close(handle);
            native_handle_delete(handle);
        }
    }
}

void JNICameraContext::postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata)
{
    jobjectArray obj = NULL;
    obj = (jobjectArray) env->NewObjectArray(metadata->number_of_faces,
                                             mFaceClass, NULL);
    if (obj == NULL) {
        ALOGE("Couldn't allocate face metadata array");
        return;
    }

    for (int i = 0; i < metadata->number_of_faces; i++) {
        jobject face = env->NewObject(mFaceClass, fields.face_constructor);
        env->SetObjectArrayElement(obj, i, face);

        jobject rect = env->NewObject(mRectClass, fields.rect_constructor);
        env->SetIntField(rect, fields.rect_left, metadata->faces[i].rect[0]);
        env->SetIntField(rect, fields.rect_top, metadata->faces[i].rect[1]);
        env->SetIntField(rect, fields.rect_right, metadata->faces[i].rect[2]);
        env->SetIntField(rect, fields.rect_bottom, metadata->faces[i].rect[3]);

        env->SetObjectField(face, fields.face_rect, rect);
        env->SetIntField(face, fields.face_score, metadata->faces[i].score);

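        // Optional face fields are reported as a set: eye and mouth
        // coordinates use (-2000, -2000) as the "not supported" sentinel, so
        // they are only forwarded to Java when all of them are valid.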
        bool optionalFields = metadata->faces[i].id != 0
            && metadata->faces[i].left_eye[0] != -2000 && metadata->faces[i].left_eye[1] != -2000
            && metadata->faces[i].right_eye[0] != -2000 && metadata->faces[i].right_eye[1] != -2000
            && metadata->faces[i].mouth[0] != -2000 && metadata->faces[i].mouth[1] != -2000;
        if (optionalFields) {
            int32_t id = metadata->faces[i].id;
            env->SetIntField(face, fields.face_id, id);

            jobject leftEye = env->NewObject(mPointClass, fields.point_constructor);
            env->SetIntField(leftEye, fields.point_x, metadata->faces[i].left_eye[0]);
            env->SetIntField(leftEye, fields.point_y, metadata->faces[i].left_eye[1]);
            env->SetObjectField(face, fields.face_left_eye, leftEye);
            env->DeleteLocalRef(leftEye);

            jobject rightEye = env->NewObject(mPointClass, fields.point_constructor);
            env->SetIntField(rightEye, fields.point_x, metadata->faces[i].right_eye[0]);
            env->SetIntField(rightEye, fields.point_y, metadata->faces[i].right_eye[1]);
            env->SetObjectField(face, fields.face_right_eye, rightEye);
            env->DeleteLocalRef(rightEye);

            jobject mouth = env->NewObject(mPointClass, fields.point_constructor);
            env->SetIntField(mouth, fields.point_x, metadata->faces[i].mouth[0]);
            env->SetIntField(mouth, fields.point_y, metadata->faces[i].mouth[1]);
            env->SetObjectField(face, fields.face_mouth, mouth);
            env->DeleteLocalRef(mouth);
        }

        env->DeleteLocalRef(face);
        env->DeleteLocalRef(rect);
    }
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    env->DeleteLocalRef(obj);
}

void JNICameraContext::setCallbackMode(JNIEnv *env, bool installed, bool manualMode)
{
    Mutex::Autolock _l(mLock);
    mManualBufferMode = manualMode;
    mManualCameraCallbackSet = false;

    // In order to limit the over usage of binder threads, all non-manual buffer
    // callbacks use CAMERA_FRAME_CALLBACK_FLAG_BARCODE_SCANNER mode now.
    //
    // Continuous callbacks will have the callback re-registered from handleMessage.
    // Manual buffer mode will operate as fast as possible, relying on the finite supply
    // of buffers for throttling.

    if (!installed) {
        mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
        clearCallbackBuffers_l(env, &mCallbackBuffers);
    } else if (mManualBufferMode) {
        if (!mCallbackBuffers.isEmpty()) {
            mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_CAMERA);
            mManualCameraCallbackSet = true;
        }
    } else {
        mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_BARCODE_SCANNER);
        clearCallbackBuffers_l(env, &mCallbackBuffers);
    }
}

void JNICameraContext::addCallbackBuffer(
        JNIEnv *env, jbyteArray cbb, int msgType)
{
    ALOGV("addCallbackBuffer: 0x%x", msgType);
    if (cbb != NULL) {
        Mutex::Autolock _l(mLock);
        switch (msgType) {
            case CAMERA_MSG_PREVIEW_FRAME: {
                jbyteArray callbackBuffer = (jbyteArray)env->NewGlobalRef(cbb);
                mCallbackBuffers.push(callbackBuffer);

                ALOGV("Adding callback buffer to queue, %zu total",
                        mCallbackBuffers.size());

                // We want to make sure the camera knows we're ready for the
                // next frame. This may have come unset had we not had a
                // callbackbuffer ready for it last time.
                if (mManualBufferMode && !mManualCameraCallbackSet) {
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_CAMERA);
                    mManualCameraCallbackSet = true;
                }
                break;
            }
            case CAMERA_MSG_RAW_IMAGE: {
                jbyteArray callbackBuffer = (jbyteArray)env->NewGlobalRef(cbb);
                mRawImageCallbackBuffers.push(callbackBuffer);
                break;
            }
            default: {
                jniThrowException(env,
                        "java/lang/IllegalArgumentException",
                        "Unsupported message type");
                return;
            }
        }
    } else {
       ALOGE("Null byte array!");
    }
}
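
/*
 * Note (editorial summary, not part of the AOSP file): getCallbackBuffer(),
 * setCallbackMode() and addCallbackBuffer() together form a simple
 * producer/consumer throttle for manual buffer mode:
 *   1. The app donates byte[] buffers; each is pinned with NewGlobalRef()
 *      and queued in mCallbackBuffers.
 *   2. Each preview frame consumes one buffer; when the queue drains,
 *      copyAndPost() switches the callback to CAMERA_FRAME_CALLBACK_FLAG_NOOP
 *      and clears mManualCameraCallbackSet.
 *   3. The next addCallbackBuffer() re-arms the callback with
 *      CAMERA_FRAME_CALLBACK_FLAG_CAMERA, so frame delivery resumes.
 * The finite supply of buffers is what rate-limits preview frame copies.
 */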

void JNICameraContext::clearCallbackBuffers_l(JNIEnv *env)
{
    clearCallbackBuffers_l(env, &mCallbackBuffers);
    clearCallbackBuffers_l(env, &mRawImageCallbackBuffers);
}

void JNICameraContext::clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers) {
    ALOGV("Clearing callback buffers, %zu remained", buffers->size());
    while (!buffers->isEmpty()) {
        env->DeleteGlobalRef(buffers->top());
        buffers->pop();
    }
}

static jint android_hardware_Camera_getNumberOfCameras(JNIEnv *env, jobject thiz)
{
    return Camera::getNumberOfCameras();
}

static void android_hardware_Camera_getCameraInfo(JNIEnv *env, jobject thiz,
    jint cameraId, jobject info_obj)
{
    CameraInfo cameraInfo;
    if (cameraId >= Camera::getNumberOfCameras() || cameraId < 0) {
        ALOGE("%s: Unknown camera ID %d", __FUNCTION__, cameraId);
        jniThrowRuntimeException(env, "Unknown camera ID");
        return;
    }

    status_t rc = Camera::getCameraInfo(cameraId, &cameraInfo);
    if (rc != NO_ERROR) {
        jniThrowRuntimeException(env, "Fail to get camera info");
        return;
    }
    env->SetIntField(info_obj, fields.facing, cameraInfo.facing);
    env->SetIntField(info_obj, fields.orientation, cameraInfo.orientation);

    char value[PROPERTY_VALUE_MAX];
    property_get("ro.camera.sound.forced", value, "0");
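    // ro.camera.sound.forced == "0" means the device does not force the
    // shutter sound on, so the app may disable it; strncmp with n == 2 also
    // compares the terminator, matching only the exact string "0".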
    jboolean canDisableShutterSound = (strncmp(value, "0", 2) == 0);
    env->SetBooleanField(info_obj, fields.canDisableShutterSound,
            canDisableShutterSound);
}

// connect to camera service
static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
{
    // Convert jstring to String16
    const char16_t *rawClientName = reinterpret_cast<const char16_t*>(
        env->GetStringChars(clientPackageName, NULL));
    jsize rawClientNameLen = env->GetStringLength(clientPackageName);
    String16 clientName(rawClientName, rawClientNameLen);
    env->ReleaseStringChars(clientPackageName,
                            reinterpret_cast<const jchar*>(rawClientName));

    sp<Camera> camera;
    if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
        // Default path: hal version is don't care, do normal camera connect.
        camera = Camera::connect(cameraId, clientName,
                Camera::USE_CALLING_UID, Camera::USE_CALLING_PID);
    } else {
        jint status = Camera::connectLegacy(cameraId, halVersion, clientName,
                Camera::USE_CALLING_UID, camera);
        if (status != NO_ERROR) {
            return status;
        }
    }

    if (camera == NULL) {
        return -EACCES;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        return NO_INIT;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        // This should never happen
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return INVALID_OPERATION;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong((void*)android_hardware_Camera_native_setup);
    camera->setListener(context);

    // save context in opaque field
    env->SetLongField(thiz, fields.context, (jlong)context.get());

    // Update default display orientation in case the sensor is reverse-landscape
    CameraInfo cameraInfo;
    status_t rc = Camera::getCameraInfo(cameraId, &cameraInfo);
    if (rc != NO_ERROR) {
        ALOGE("%s: getCameraInfo error: %d", __FUNCTION__, rc);
        return rc;
    }
    int defaultOrientation = 0;
    switch (cameraInfo.orientation) {
        case 0:
            break;
        case 90:
            if (cameraInfo.facing == CAMERA_FACING_FRONT) {
                defaultOrientation = 180;
            }
            break;
        case 180:
            defaultOrientation = 180;
            break;
        case 270:
            if (cameraInfo.facing != CAMERA_FACING_FRONT) {
                defaultOrientation = 180;
            }
            break;
        default:
            ALOGE("Unexpected camera orientation %d!", cameraInfo.orientation);
            break;
    }
    if (defaultOrientation != 0) {
        ALOGV("Setting default display orientation to %d", defaultOrientation);
        rc = camera->sendCommand(CAMERA_CMD_SET_DISPLAY_ORIENTATION,
                defaultOrientation, 0);
        if (rc != NO_ERROR) {
            ALOGE("Unable to update default orientation: %s (%d)",
                    strerror(-rc), rc);
            return rc;
        }
    }

    return NO_ERROR;
}
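
/*
 * Editorial sketch (not part of the AOSP file): the orientation switch above,
 * rewritten as a pure function to make the mapping easier to read. A sensor
 * mounted reverse-landscape relative to its facing gets a default display
 * rotation of 180 so the preview starts upright.
 */
static int defaultDisplayOrientationSketch(int sensorOrientation, bool facingFront)
{
    switch (sensorOrientation) {
        case 90:  return facingFront ? 180 : 0;   // front sensor at 90 is flipped
        case 270: return facingFront ? 0 : 180;   // rear sensor at 270 is flipped
        case 180: return 180;                     // flipped regardless of facing
        default:  return 0;                       // 0 (or unexpected): unrotated
    }
}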

// disconnect from camera service
// It's okay to call this when the native camera context is already null.
// This handles the case where the user has called release() and the
// finalizer is invoked later.
static void android_hardware_Camera_release(JNIEnv *env, jobject thiz)
{
    ALOGV("release camera");
    JNICameraContext* context = NULL;
    sp<Camera> camera;
    {
        Mutex::Autolock _l(sLock);
        context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));

        // Make sure we do not attempt to callback on a deleted Java object.
        env->SetLongField(thiz, fields.context, 0);
    }

    // clean up if release has not been called before
    if (context != NULL) {
        camera = context->getCamera();
        context->release();
        ALOGV("native_release: context=%p camera=%p", context, camera.get());

        // clear callbacks
        if (camera != NULL) {
            camera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
            camera->disconnect();
        }

        // remove context to prevent further Java access
        context->decStrong((void*)android_hardware_Camera_native_setup);
    }
}

static void android_hardware_Camera_setPreviewSurface(JNIEnv *env, jobject thiz, jobject jSurface)
{
    ALOGV("setPreviewSurface");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    sp<IGraphicBufferProducer> gbp;
    sp<Surface> surface;
    if (jSurface) {
        surface = android_view_Surface_getSurface(env, jSurface);
        if (surface != NULL) {
            gbp = surface->getIGraphicBufferProducer();
        }
    }

    if (camera->setPreviewTarget(gbp) != NO_ERROR) {
        jniThrowException(env, "java/io/IOException", "setPreviewTexture failed");
    }
}

static void android_hardware_Camera_setPreviewTexture(JNIEnv *env,
        jobject thiz, jobject jSurfaceTexture)
{
    ALOGV("setPreviewTexture");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    sp<IGraphicBufferProducer> producer = NULL;
    if (jSurfaceTexture != NULL) {
        producer = SurfaceTexture_getProducer(env, jSurfaceTexture);
        if (producer == NULL) {
            jniThrowException(env, "java/lang/IllegalArgumentException",
                    "SurfaceTexture already released in setPreviewTexture");
            return;
        }

    }

    if (camera->setPreviewTarget(producer) != NO_ERROR) {
        jniThrowException(env, "java/io/IOException",
                "setPreviewTexture failed");
    }
}

static void android_hardware_Camera_setPreviewCallbackSurface(JNIEnv *env,
        jobject thiz, jobject jSurface)
{
    ALOGV("setPreviewCallbackSurface");
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    sp<IGraphicBufferProducer> gbp;
    sp<Surface> surface;
    if (jSurface) {
        surface = android_view_Surface_getSurface(env, jSurface);
        if (surface != NULL) {
            gbp = surface->getIGraphicBufferProducer();
        }
    }
    // Clear out normal preview callbacks
    context->setCallbackMode(env, false, false);
    // Then set up callback surface
    if (camera->setPreviewCallbackTarget(gbp) != NO_ERROR) {
        jniThrowException(env, "java/io/IOException", "setPreviewCallbackTarget failed");
    }
}

static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
{
    ALOGV("startPreview");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->startPreview() != NO_ERROR) {
        jniThrowRuntimeException(env, "startPreview failed");
        return;
    }
}

static void android_hardware_Camera_stopPreview(JNIEnv *env, jobject thiz)
{
    ALOGV("stopPreview");
    sp<Camera> c = get_native_camera(env, thiz, NULL);
    if (c == 0) return;

    c->stopPreview();
}

static jboolean android_hardware_Camera_previewEnabled(JNIEnv *env, jobject thiz)
{
    ALOGV("previewEnabled");
    sp<Camera> c = get_native_camera(env, thiz, NULL);
    if (c == 0) return JNI_FALSE;

    return c->previewEnabled() ? JNI_TRUE : JNI_FALSE;
}

static void android_hardware_Camera_setHasPreviewCallback(JNIEnv *env, jobject thiz, jboolean installed, jboolean manualBuffer)
{
    ALOGV("setHasPreviewCallback: installed:%d, manualBuffer:%d", (int)installed, (int)manualBuffer);
    // Important: Only install preview_callback if the Java code has called
    // setPreviewCallback() with a non-null value, otherwise we'd pay to memcpy
    // each preview frame for nothing.
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    // setCallbackMode will take care of setting the context flags and calling
    // camera->setPreviewCallbackFlags within a mutex for us.
    context->setCallbackMode(env, installed, manualBuffer);
}

static void android_hardware_Camera_addCallbackBuffer(JNIEnv *env, jobject thiz, jbyteArray bytes, jint msgType) {
    ALOGV("addCallbackBuffer: 0x%x", msgType);

    JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));

    if (context != NULL) {
        context->addCallbackBuffer(env, bytes, msgType);
    }
}

static void android_hardware_Camera_autoFocus(JNIEnv *env, jobject thiz)
{
    ALOGV("autoFocus");
    JNICameraContext* context;
    sp<Camera> c = get_native_camera(env, thiz, &context);
    if (c == 0) return;

    if (c->autoFocus() != NO_ERROR) {
        jniThrowRuntimeException(env, "autoFocus failed");
    }
}

static void android_hardware_Camera_cancelAutoFocus(JNIEnv *env, jobject thiz)
{
    ALOGV("cancelAutoFocus");
    JNICameraContext* context;
    sp<Camera> c = get_native_camera(env, thiz, &context);
    if (c == 0) return;

    if (c->cancelAutoFocus() != NO_ERROR) {
        jniThrowRuntimeException(env, "cancelAutoFocus failed");
    }
}

static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz, jint msgType)
{
    ALOGV("takePicture");
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    /*
     * When CAMERA_MSG_RAW_IMAGE is requested, if the raw image callback
     * buffer is available, CAMERA_MSG_RAW_IMAGE is enabled to get the
     * notification _and_ the data; otherwise, CAMERA_MSG_RAW_IMAGE_NOTIFY
     * is enabled to receive the callback notification but no data.
     *
     * Note that CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed to the
     * Java application.
     */
    if (msgType & CAMERA_MSG_RAW_IMAGE) {
        ALOGV("Enable raw image callback buffer");
        if (!context->isRawImageCallbackBufferAvailable()) {
            ALOGV("Enable raw image notification, since no callback buffer exists");
            msgType &= ~CAMERA_MSG_RAW_IMAGE;
            msgType |= CAMERA_MSG_RAW_IMAGE_NOTIFY;
        }
    }

    if (camera->takePicture(msgType) != NO_ERROR) {
        jniThrowRuntimeException(env, "takePicture failed");
        return;
    }
}

static void android_hardware_Camera_setParameters(JNIEnv *env, jobject thiz, jstring params)
{
    ALOGV("setParameters");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    const jchar* str = env->GetStringCritical(params, 0);
    String8 params8;
    if (params) {
        params8 = String8(reinterpret_cast<const char16_t*>(str),
                          env->GetStringLength(params));
        env->ReleaseStringCritical(params, str);
    }
    if (camera->setParameters(params8) != NO_ERROR) {
        jniThrowRuntimeException(env, "setParameters failed");
        return;
    }
}

static jstring android_hardware_Camera_getParameters(JNIEnv *env, jobject thiz)
{
    ALOGV("getParameters");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return 0;

    String8 params8 = camera->getParameters();
    if (params8.isEmpty()) {
        jniThrowRuntimeException(env, "getParameters failed (empty parameters)");
        return 0;
    }
    return env->NewStringUTF(params8.string());
}

static void android_hardware_Camera_reconnect(JNIEnv *env, jobject thiz)
{
    ALOGV("reconnect");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->reconnect() != NO_ERROR) {
        jniThrowException(env, "java/io/IOException", "reconnect failed");
        return;
    }
}

static void android_hardware_Camera_lock(JNIEnv *env, jobject thiz)
{
    ALOGV("lock");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->lock() != NO_ERROR) {
        jniThrowRuntimeException(env, "lock failed");
    }
}

static void android_hardware_Camera_unlock(JNIEnv *env, jobject thiz)
{
    ALOGV("unlock");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->unlock() != NO_ERROR) {
        jniThrowRuntimeException(env, "unlock failed");
    }
}

static void android_hardware_Camera_startSmoothZoom(JNIEnv *env, jobject thiz, jint value)
{
    ALOGV("startSmoothZoom");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    status_t rc = camera->sendCommand(CAMERA_CMD_START_SMOOTH_ZOOM, value, 0);
    if (rc == BAD_VALUE) {
        char msg[64];
        sprintf(msg, "invalid zoom value=%d", value);
        jniThrowException(env, "java/lang/IllegalArgumentException", msg);
    } else if (rc != NO_ERROR) {
        jniThrowRuntimeException(env, "start smooth zoom failed");
    }
}

static void android_hardware_Camera_stopSmoothZoom(JNIEnv *env, jobject thiz)
{
    ALOGV("stopSmoothZoom");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->sendCommand(CAMERA_CMD_STOP_SMOOTH_ZOOM, 0, 0) != NO_ERROR) {
        jniThrowRuntimeException(env, "stop smooth zoom failed");
    }
}

static void android_hardware_Camera_setDisplayOrientation(JNIEnv *env, jobject thiz,
        jint value)
{
    ALOGV("setDisplayOrientation");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->sendCommand(CAMERA_CMD_SET_DISPLAY_ORIENTATION, value, 0) != NO_ERROR) {
        jniThrowRuntimeException(env, "set display orientation failed");
    }
}

static jboolean android_hardware_Camera_enableShutterSound(JNIEnv *env, jobject thiz,
        jboolean enabled)
{
    ALOGV("enableShutterSound");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return JNI_FALSE;

    int32_t value = (enabled == JNI_TRUE) ? 1 : 0;
    status_t rc = camera->sendCommand(CAMERA_CMD_ENABLE_SHUTTER_SOUND, value, 0);
    if (rc == NO_ERROR) {
        return JNI_TRUE;
    } else if (rc == PERMISSION_DENIED) {
        return JNI_FALSE;
    } else {
        jniThrowRuntimeException(env, "enable shutter sound failed");
        return JNI_FALSE;
    }
}

static void android_hardware_Camera_startFaceDetection(JNIEnv *env, jobject thiz,
        jint type)
{
    ALOGV("startFaceDetection");
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    status_t rc = camera->sendCommand(CAMERA_CMD_START_FACE_DETECTION, type, 0);
    if (rc == BAD_VALUE) {
        char msg[64];
        snprintf(msg, sizeof(msg), "invalid face detection type=%d", type);
        jniThrowException(env, "java/lang/IllegalArgumentException", msg);
    } else if (rc != NO_ERROR) {
        jniThrowRuntimeException(env, "start face detection failed");
    }
}

static void android_hardware_Camera_stopFaceDetection(JNIEnv *env, jobject thiz)
{
    ALOGV("stopFaceDetection");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->sendCommand(CAMERA_CMD_STOP_FACE_DETECTION, 0, 0) != NO_ERROR) {
        jniThrowRuntimeException(env, "stop face detection failed");
    }
}

static void android_hardware_Camera_enableFocusMoveCallback(JNIEnv *env, jobject thiz, jint enable)
{
    ALOGV("enableFocusMoveCallback");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->sendCommand(CAMERA_CMD_ENABLE_FOCUS_MOVE_MSG, enable, 0) != NO_ERROR) {
        jniThrowRuntimeException(env, "enable focus move callback failed");
    }
}

//-------------------------------------------------

static const JNINativeMethod camMethods[] = {
  { "getNumberOfCameras",
    "()I",
    (void *)android_hardware_Camera_getNumberOfCameras },
  { "_getCameraInfo",
    "(ILandroid/hardware/Camera$CameraInfo;)V",
    (void*)android_hardware_Camera_getCameraInfo },
  { "native_setup",
    "(Ljava/lang/Object;IILjava/lang/String;)I",
    (void*)android_hardware_Camera_native_setup },
  { "native_release",
    "()V",
    (void*)android_hardware_Camera_release },
  { "setPreviewSurface",
    "(Landroid/view/Surface;)V",
    (void *)android_hardware_Camera_setPreviewSurface },
  { "setPreviewTexture",
    "(Landroid/graphics/SurfaceTexture;)V",
    (void *)android_hardware_Camera_setPreviewTexture },
  { "setPreviewCallbackSurface",
    "(Landroid/view/Surface;)V",
    (void *)android_hardware_Camera_setPreviewCallbackSurface },
  { "startPreview",
    "()V",
    (void *)android_hardware_Camera_startPreview },
  { "_stopPreview",
    "()V",
    (void *)android_hardware_Camera_stopPreview },
  { "previewEnabled",
    "()Z",
    (void *)android_hardware_Camera_previewEnabled },
  { "setHasPreviewCallback",
    "(ZZ)V",
    (void *)android_hardware_Camera_setHasPreviewCallback },
  { "_addCallbackBuffer",
    "([BI)V",
    (void *)android_hardware_Camera_addCallbackBuffer },
  { "native_autoFocus",
    "()V",
    (void *)android_hardware_Camera_autoFocus },
  { "native_cancelAutoFocus",
    "()V",
    (void *)android_hardware_Camera_cancelAutoFocus },
  { "native_takePicture",
    "(I)V",
    (void *)android_hardware_Camera_takePicture },
  { "native_setParameters",
    "(Ljava/lang/String;)V",
    (void *)android_hardware_Camera_setParameters },
  { "native_getParameters",
    "()Ljava/lang/String;",
    (void *)android_hardware_Camera_getParameters },
  { "reconnect",
    "()V",
    (void*)android_hardware_Camera_reconnect },
  { "lock",
    "()V",
    (void*)android_hardware_Camera_lock },
  { "unlock",
    "()V",
    (void*)android_hardware_Camera_unlock },
  { "startSmoothZoom",
    "(I)V",
    (void *)android_hardware_Camera_startSmoothZoom },
  { "stopSmoothZoom",
    "()V",
    (void *)android_hardware_Camera_stopSmoothZoom },
  { "setDisplayOrientation",
    "(I)V",
    (void *)android_hardware_Camera_setDisplayOrientation },
  { "_enableShutterSound",
    "(Z)Z",
    (void *)android_hardware_Camera_enableShutterSound },
  { "_startFaceDetection",
    "(I)V",
    (void *)android_hardware_Camera_startFaceDetection },
  { "_stopFaceDetection",
    "()V",
    (void *)android_hardware_Camera_stopFaceDetection},
  { "enableFocusMoveCallback",
    "(I)V",
    (void *)android_hardware_Camera_enableFocusMoveCallback},
};

struct field {
    const char *class_name;
    const char *field_name;
    const char *field_type;
    jfieldID   *jfield;
};

static void find_fields(JNIEnv *env, field *fields, int count)
{
    for (int i = 0; i < count; i++) {
        field *f = &fields[i];
        jclass clazz = FindClassOrDie(env, f->class_name);
        jfieldID field = GetFieldIDOrDie(env, clazz, f->field_name, f->field_type);
        *(f->jfield) = field;
    }
}

// Get all the required offsets in java class and register native functions
int register_android_hardware_Camera(JNIEnv *env)
{
    field fields_to_find[] = {
        { "android/hardware/Camera", "mNativeContext",   "J", &fields.context },
        { "android/hardware/Camera$CameraInfo", "facing",   "I", &fields.facing },
        { "android/hardware/Camera$CameraInfo", "orientation",   "I", &fields.orientation },
        { "android/hardware/Camera$CameraInfo", "canDisableShutterSound",   "Z",
          &fields.canDisableShutterSound },
        { "android/hardware/Camera$Face", "rect", "Landroid/graphics/Rect;", &fields.face_rect },
        { "android/hardware/Camera$Face", "leftEye", "Landroid/graphics/Point;", &fields.face_left_eye},
        { "android/hardware/Camera$Face", "rightEye", "Landroid/graphics/Point;", &fields.face_right_eye},
        { "android/hardware/Camera$Face", "mouth", "Landroid/graphics/Point;", &fields.face_mouth},
        { "android/hardware/Camera$Face", "score", "I", &fields.face_score },
        { "android/hardware/Camera$Face", "id", "I", &fields.face_id},
        { "android/graphics/Rect", "left", "I", &fields.rect_left },
        { "android/graphics/Rect", "top", "I", &fields.rect_top },
        { "android/graphics/Rect", "right", "I", &fields.rect_right },
        { "android/graphics/Rect", "bottom", "I", &fields.rect_bottom },
        { "android/graphics/Point", "x", "I", &fields.point_x},
        { "android/graphics/Point", "y", "I", &fields.point_y},
    };

    find_fields(env, fields_to_find, NELEM(fields_to_find));

    jclass clazz = FindClassOrDie(env, "android/hardware/Camera");
    fields.post_event = GetStaticMethodIDOrDie(env, clazz, "postEventFromNative",
                                               "(Ljava/lang/Object;IIILjava/lang/Object;)V");

    clazz = FindClassOrDie(env, "android/graphics/Rect");
    fields.rect_constructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");

    clazz = FindClassOrDie(env, "android/hardware/Camera$Face");
    fields.face_constructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");

    clazz = env->FindClass("android/graphics/Point");
    fields.point_constructor = env->GetMethodID(clazz, "<init>", "()V");
    if (fields.point_constructor == NULL) {
        ALOGE("Can't find android/graphics/Point()");
        return -1;
    }

    // Register native functions
    return RegisterMethodsOrDie(env, "android/hardware/Camera", camMethods, NELEM(camMethods));
}
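
A note on the registration step: RegisterMethodsOrDie() is a convenience helper (from core_jni_helpers.h), and the registration ultimately reduces to a single JNI call. An illustrative sketch of the equivalent, reusing the camMethods table above:

// Editorial sketch: roughly what RegisterMethodsOrDie() reduces to. Each
// {java name, signature, native fn} triple in camMethods is bound to the
// matching method of android.hardware.Camera; the "OrDie" variant aborts
// if a name or signature does not match.
static int register_methods_sketch(JNIEnv *env)
{
    jclass clazz = env->FindClass("android/hardware/Camera");
    if (clazz == NULL) {
        return JNI_ERR;   // class not found: ClassNotFoundException pending
    }
    return env->RegisterNatives(clazz, camMethods,
                                sizeof(camMethods) / sizeof(camMethods[0]));
}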

4./hardware/libhardware/modules/camera/CameraHAL.cpp

/*
 * Copyright (C) 2012 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <cstdlib>
#include <hardware/camera_common.h>
#include <hardware/hardware.h>
#include "ExampleCamera.h"
#include "VendorTags.h"

//#define LOG_NDEBUG 0
#define LOG_TAG "DefaultCameraHAL"
#include <cutils/log.h>

#define ATRACE_TAG (ATRACE_TAG_CAMERA | ATRACE_TAG_HAL)
#include <cutils/trace.h>

#include "CameraHAL.h"

/*
 * This file serves as the entry point to the HAL.  It contains the module
 * structure and functions used by the framework to load and interface to this
 * HAL, as well as the handles to the individual camera devices.
 */

namespace default_camera_hal {

// Default Camera HAL has 2 cameras, front and rear.
static CameraHAL gCameraHAL(2);
// Handle containing vendor tag functionality
static VendorTags gVendorTags;

CameraHAL::CameraHAL(int num_cameras)
  : mNumberOfCameras(num_cameras),
    mCallbacks(NULL)
{
    // Allocate camera array and instantiate camera devices
    mCameras = new Camera*[mNumberOfCameras];
    // Rear camera
    mCameras[0] = new ExampleCamera(0);
    // Front camera
    mCameras[1] = new ExampleCamera(1);
}

CameraHAL::~CameraHAL()
{
    for (int i = 0; i < mNumberOfCameras; i++) {
        delete mCameras[i];
    }
    delete [] mCameras;
}

int CameraHAL::getNumberOfCameras()
{
    ALOGV("%s: %d", __func__, mNumberOfCameras);
    return mNumberOfCameras;
}

int CameraHAL::getCameraInfo(int id, struct camera_info* info)
{
    ALOGV("%s: camera id %d: info=%p", __func__, id, info);
    if (id < 0 || id >= mNumberOfCameras) {
        ALOGE("%s: Invalid camera id %d", __func__, id);
        return -ENODEV;
    }
    // TODO: return device-specific static metadata
    return mCameras[id]->getInfo(info);
}

int CameraHAL::setCallbacks(const camera_module_callbacks_t *callbacks)
{
    ALOGV("%s : callbacks=%p", __func__, callbacks);
    mCallbacks = callbacks;
    return 0;
}

int CameraHAL::open(const hw_module_t* mod, const char* name, hw_device_t** dev)
{
    int id;
    char *nameEnd;

    ALOGV("%s: module=%p, name=%s, device=%p", __func__, mod, name, dev);
    if (*name == '\0') {
        ALOGE("%s: Invalid camera id name is NULL", __func__);
        return -EINVAL;
    }
    id = strtol(name, &nameEnd, 10);
    if (*nameEnd != '\0') {
        ALOGE("%s: Invalid camera id name %s", __func__, name);
        return -EINVAL;
    } else if (id < 0 || id >= mNumberOfCameras) {
        ALOGE("%s: Invalid camera id %d", __func__, id);
        return -ENODEV;
    }
    return mCameras[id]->open(mod, dev);
}

extern "C" {

static int get_number_of_cameras()
{
    return gCameraHAL.getNumberOfCameras();
}

static int get_camera_info(int id, struct camera_info* info)
{
    return gCameraHAL.getCameraInfo(id, info);
}

static int set_callbacks(const camera_module_callbacks_t *callbacks)
{
    return gCameraHAL.setCallbacks(callbacks);
}

static int get_tag_count(const vendor_tag_ops_t* ops)
{
    return gVendorTags.getTagCount(ops);
}

static void get_all_tags(const vendor_tag_ops_t* ops, uint32_t* tag_array)
{
    gVendorTags.getAllTags(ops, tag_array);
}

static const char* get_section_name(const vendor_tag_ops_t* ops, uint32_t tag)
{
    return gVendorTags.getSectionName(ops, tag);
}

static const char* get_tag_name(const vendor_tag_ops_t* ops, uint32_t tag)
{
    return gVendorTags.getTagName(ops, tag);
}

static int get_tag_type(const vendor_tag_ops_t* ops, uint32_t tag)
{
    return gVendorTags.getTagType(ops, tag);
}

static void get_vendor_tag_ops(vendor_tag_ops_t* ops)
{
    ALOGV("%s : ops=%p", __func__, ops);
    ops->get_tag_count      = get_tag_count;
    ops->get_all_tags       = get_all_tags;
    ops->get_section_name   = get_section_name;
    ops->get_tag_name       = get_tag_name;
    ops->get_tag_type       = get_tag_type;
}

static int open_dev(const hw_module_t* mod, const char* name, hw_device_t** dev)
{
    return gCameraHAL.open(mod, name, dev);
}

static hw_module_methods_t gCameraModuleMethods = {
    open : open_dev
};

camera_module_t HAL_MODULE_INFO_SYM __attribute__ ((visibility("default"))) = {
    common : {
        tag                : HARDWARE_MODULE_TAG,
        module_api_version : CAMERA_MODULE_API_VERSION_2_2,
        hal_api_version    : HARDWARE_HAL_API_VERSION,
        id                 : CAMERA_HARDWARE_MODULE_ID,
        name               : "Default Camera HAL",
        author             : "The Android Open Source Project",
        methods            : &gCameraModuleMethods,
        dso                : NULL,
        reserved           : {0},
    },
    get_number_of_cameras : get_number_of_cameras,
    get_camera_info       : get_camera_info,
    set_callbacks         : set_callbacks,
    get_vendor_tag_ops    : get_vendor_tag_ops,
    open_legacy           : NULL,
    set_torch_mode        : NULL,
    init                  : NULL,
    reserved              : {0},
};
} // extern "C"

} // namespace default_camera_hal
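
For context, the framework reaches this module through libhardware: hw_get_module() loads the HAL library and returns the HAL_MODULE_INFO_SYM structure defined above, and the module's open() lands in CameraHAL::open(). An illustrative sketch of that loading path (error handling trimmed; the device name "0" selects the rear camera):

#include <errno.h>
#include <hardware/hardware.h>
#include <hardware/camera_common.h>

// Editorial sketch: how a libhardware client reaches the HAL above.
// hw_get_module() dlopen()s the camera HAL library and returns its
// HAL_MODULE_INFO_SYM; methods->open() forwards to CameraHAL::open().
int open_rear_camera_sketch(hw_device_t **dev)
{
    const hw_module_t *mod = NULL;
    int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID, &mod);
    if (err != 0) {
        return err;
    }
    const camera_module_t *cam =
            reinterpret_cast<const camera_module_t *>(mod);
    if (cam->get_number_of_cameras() <= 0) {
        return -ENODEV;
    }
    // The name string is parsed back into a camera id by CameraHAL::open().
    return mod->methods->open(mod, "0", dev);
}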