Calling the System Camera

We often need to invoke the system camera, for tasks such as setting an avatar or uploading an image. Let's look at how to call the camera.

The code is simple:

/**
 * Invokes the system camera; the result is delivered back to this Activity in onActivityResult().
 */
private void startCamera() {
	Intent camera = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
	startActivityForResult(camera, CAMERA);
}
Here CAMERA can be any integer; it is the request code we get back when the photo has been taken and the camera returns to our screen. We then pick up the result in the Activity's onActivityResult(int requestCode, int resultCode, Intent data):

/**
 * Handles the return from the system apps:
 * requestCode == CAMERA      -> system camera
 * requestCode == CAMERA + 1  -> system crop
 */
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
	super.onActivityResult(requestCode, resultCode, data);
	File file = new File(PicUtils.PIC_PATH + getUid() + "tmp.png"); // temporary image file
	if (requestCode == CAMERA && resultCode == Activity.RESULT_OK) {
		Bundle bundle = data.getExtras();
		Bitmap bitmap = (Bitmap) bundle.get("data"); // the (small) image data returned by the camera
	}
}

Here requestCode is the request code passed to startActivityForResult, and Activity.RESULT_OK is the integer result code. The code above takes the data returned after the user confirms the photo and converts it into a Bitmap.

Note that this approach only yields a small result. When the image is large, you will not get the camera's photo this way; in that case we need a Uri to pass the photo:

/**
 * Invokes the system camera; the photo file is written to the path that imageUri points to.
 */
private void startCamera() {
	Intent camera = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
	camera.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
	startActivityForResult(camera, CAMERA);
}
The captured photo is then stored in the file that imageUri points to; imageUri can be defined as imageUri = Uri.parse(PIC_PATH). After that we can work with the resulting Bitmap.
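
For reference, here is a minimal sketch of the full round trip with EXTRA_OUTPUT. The file name and the use of Uri.fromFile are assumptions for illustration (any file:// Uri pointing at a writable location works on these older API levels); the full-size photo is decoded from the file rather than from the returned Intent:

private Uri imageUri;

private void startCamera() {
	File photoFile = new File(PicUtils.PIC_PATH, "camera_tmp.jpg"); // assumed: PIC_PATH is a writable directory
	imageUri = Uri.fromFile(photoFile); // builds a file:// Uri
	Intent camera = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
	camera.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
	startActivityForResult(camera, CAMERA);
}

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
	super.onActivityResult(requestCode, resultCode, data);
	if (requestCode == CAMERA && resultCode == Activity.RESULT_OK) {
		// With EXTRA_OUTPUT set, the photo is in the file, not in data.getExtras()
		Bitmap bitmap = BitmapFactory.decodeFile(imageUri.getPath());
		// ... use the bitmap (display it, upload it, pass it to the crop intent, etc.)
	}
}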

After taking the photo, some apps also need to call the system's crop function. The code is as follows:

	/**
	 * @author ZYJ
	 * Invokes the system crop and starts cropping the specified image;
	 * the cropped image is written back under PicUtils.PIC_PATH.
	 * @param uri
	 *            the uri of the image to be cropped
	 */
	public void startImgResize(Uri uri) {
		Intent intent = new Intent("com.android.camera.action.CROP"); // crop intent
		intent.setDataAndType(uri, "image/*"); // MIME type
		intent.putExtra("crop", "true"); // tell the activity we want to crop
		intent.putExtra("aspectX", 4); // aspect ratio, X
		intent.putExtra("aspectY", 3); // aspect ratio, Y
		intent.putExtra("outputX", 1600); // width of the crop output
		intent.putExtra("outputY", 1200); // height of the crop output
		intent.putExtra("scale", false); // whether to scale the cropped image
		intent.putExtra(MediaStore.EXTRA_OUTPUT, uri); // write the result to the file:/// Uri
		intent.putExtra("return-data", false); // do not return the data as a Bitmap in the result Intent
		intent.putExtra("outputFormat", Bitmap.CompressFormat.JPEG.toString());
		intent.putExtra("noFaceDetection", true); // no face detection
		intent.putExtra("circleCrop", false); // circular crop region
		startActivityForResult(intent, CAMERA + 1);
	}
The comments in this code should make it clear. A Uri is used to pass the image data because the image is large; the system crop requires Android 2.3 or above, and startActivityForResult(intent, int) works the same way as when calling the camera.
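
When the crop activity finishes, its result comes back through the same onActivityResult with request code CAMERA + 1. A minimal sketch of that branch, assuming the crop was started as above with return-data set to false and croppedUri being the Uri that was passed to startImgResize():

	// inside onActivityResult(), next to the CAMERA branch shown earlier
	else if (requestCode == CAMERA + 1 && resultCode == Activity.RESULT_OK) {
		// return-data is false, so the cropped image was written to the Uri we passed as EXTRA_OUTPUT
		Bitmap cropped = BitmapFactory.decodeFile(croppedUri.getPath());
		// ... use the cropped bitmap
	}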

That covers the system crop. But what if there is no system crop, for example on Android 2.2 and below? We can write the cropping ourselves. The code is as follows:

public class MyView extends View {

	private Context context;
	private Bitmap controlBmp;
	private RectF mainBmp;
	private int mainBmpWidth, mainBmpHeight, controlBmpWidth, controlBmpHeight;
	private Matrix matrix;
	private float[] srcPs, dstPs;
	private RectF srcRect, dstRect;
	private Paint paint, paintRect, paintFrame;
	private float deltaX = 0, deltaY = 0; // translation offsets
	private float scaleValue = 1; // scale factor
	private Point lastPoint;
	private Point prePivot, lastPivot;
	private float preDegree, lastDegree;
	private Point symmetricPoint = new Point(); // point symmetric to the control point being dragged (scale pivot)

	/**
	 * Operation types:
	 * OPER_DEFAULT default, OPER_TRANSLATE move, OPER_SCALE scale,
	 * OPER_ROTATE rotate, OPER_SELECTED selected
	 */
	public static final int OPER_DEFAULT = -1;
	public static final int OPER_TRANSLATE = 0;
	public static final int OPER_SCALE = 1;
	public static final int OPER_ROTATE = 2;
	public static final int OPER_SELECTED = 3;
	public int lastOper = OPER_DEFAULT;

	/*
	 * Control points on the image:
	 * 0---1---2
	 * |       |
	 * 7   8   3
	 * |       |
	 * 6---5---4
	 */
	public static final int CTR_NONE = -1;
	public static final int CTR_LEFT_TOP = 0;
	public static final int CTR_MID_TOP = 1;
	public static final int CTR_RIGHT_TOP = 2;
	public static final int CTR_RIGHT_MID = 3;
	public static final int CTR_RIGHT_BOTTOM = 4;
	public static final int CTR_MID_BOTTOM = 5;
	public static final int CTR_LEFT_BOTTOM = 6;
	public static final int CTR_LEFT_MID = 7;
	public static final int CTR_MID_MID = 8;
	public int current_ctr = CTR_NONE;

	public MyView(Context context) {
		super(context);
		this.context = context;
		initData();
	}

	public MyView(Context context, AttributeSet attrs) {
		super(context, attrs);
		this.context = context;
		initData();
	}

	/**
	 * @author ZYJ
	 * Initializes the view's data.
	 */
	private void initData() {
		// mainBmp = BitmapFactory.decodeResource(this.context.getResources(),
		// R.drawable.bg1); // original: main image loaded as a Bitmap
		mainBmp = new RectF();
		mainBmp.set(0, 0, 300, 200); // placeholder rect for the main image area
		controlBmp = BitmapFactory.decodeResource(this.context.getResources(),
				R.drawable.dian); // control-point handle image
		// mainBmpWidth = mainBmp.getWidth();
		// mainBmpHeight = mainBmp.getHeight();
		mainBmpWidth = 200;
		mainBmpHeight = 200;
		controlBmpWidth = controlBmp.getWidth();
		controlBmpHeight = controlBmp.getHeight();

		// initial control-point coordinates
		srcPs = new float[] {
				0, 0, mainBmpWidth / 2, 0, mainBmpWidth, 0, mainBmpWidth,
				mainBmpHeight / 2, mainBmpWidth, mainBmpHeight,
				mainBmpWidth / 2, mainBmpHeight, 0, mainBmpHeight, 0,
				mainBmpHeight / 2, mainBmpWidth / 2, mainBmpHeight / 2
		};
		dstPs = srcPs.clone();
		srcRect = new RectF(0, 0, mainBmpWidth, mainBmpHeight);
		dstRect = new RectF();

		matrix = new Matrix();

		prePivot = new Point(mainBmpWidth / 2, mainBmpHeight / 2);
		lastPivot = new Point(mainBmpWidth / 2, mainBmpHeight / 2);

		lastPoint = new Point(0, 0);

		paint = new Paint();

		paintRect = new Paint();
		paintRect.setColor(Color.RED);
		paintRect.setAlpha(100);
		paintRect.setAntiAlias(true);

		paintFrame = new Paint();
		paintFrame.setColor(Color.GREEN);
		paintFrame.setAntiAlias(true);

		setMatrix(OPER_DEFAULT);
	}

	/**
	 * @author ZYJ
	 * Applies a matrix transform so the image is translated, scaled or rotated.
	 */
	private void setMatrix(int operationType) {
		switch (operationType) {
			case OPER_TRANSLATE:
				matrix.postTranslate(deltaX, deltaY);
				break;
			case OPER_SCALE:
				matrix.postScale(scaleValue, scaleValue, symmetricPoint.x,
						symmetricPoint.y);
				break;
			case OPER_ROTATE:
				matrix.postRotate(preDegree - lastDegree,
						dstPs[CTR_MID_MID * 2], dstPs[CTR_MID_MID * 2 + 1]);
				break;
		}

		matrix.mapPoints(dstPs, srcPs); // after the transform, map srcPs through the matrix into dstPs
		matrix.mapRect(dstRect, srcRect); // likewise for the bounding rect
	}

	/**
	 * @author ZYJ
	 * Checks whether the given point lies inside the transformed image rectangle.
	 * @param x
	 * @param y
	 * @return true if the point is inside, false otherwise
	 */
	private boolean isOnPic(int x, int y) {
		return dstRect.contains(x, y);
	}

	/**
	 * @author ZYJ
	 * Decides, from the touch event, which kind of transform should be applied to the image.
	 * @param event
	 * @return the id of the operation type
	 */
	private int getOperationType(MotionEvent event) {

		int evX = (int) event.getX();
		int evY = (int) event.getY();
		int curOper = lastOper;
		switch (event.getAction()) {
			case MotionEvent.ACTION_DOWN:
				current_ctr = isOnCP(evX, evY);
				Log.i("img", "current_ctr is " + current_ctr);
				if (current_ctr != CTR_NONE || isOnPic(evX, evY)) {
					curOper = OPER_SELECTED;
				}
				break;
			case MotionEvent.ACTION_MOVE:
				if (current_ctr > CTR_NONE && current_ctr < CTR_MID_MID) { // on an edge or corner handle: scale
					curOper = OPER_SCALE;
				} else if (current_ctr == CTR_MID_MID) { // on the center handle: rotate
					curOper = OPER_ROTATE;
				} else if (lastOper == OPER_SELECTED) { // not on a handle: translate
					curOper = OPER_TRANSLATE;
				}
				break;
			case MotionEvent.ACTION_UP:
				curOper = OPER_SELECTED;
				break;
			default:
				break;
		}
		Log.d("img", "curOper is " + curOper);
		return curOper;

	}

	/**
	 * @author ZYJ
	 * Finds which control point, if any, the touch landed on.
	 * @param evx
	 *            x coordinate of the touch
	 * @param evy
	 *            y coordinate of the touch
	 * @return the id of the control point that was hit, or CTR_NONE if none
	 */
	private int isOnCP(int evx, int evy) {
		Rect rect = new Rect(evx - controlBmpWidth / 2, evy - controlBmpHeight
				/ 2, evx + controlBmpWidth / 2, evy + controlBmpHeight / 2);
		int res = 0;
		for (int i = 0; i < dstPs.length; i += 2) {
			if (rect.contains((int) dstPs[i], (int) dstPs[i + 1])) {
				return res;
			}
			++res;
		}
		return CTR_NONE;
	}

	public boolean onTouchEvent(MotionEvent event) {
		dispatchTouchEvent(event);
		return true;
	}

	@Override
	public boolean dispatchTouchEvent(MotionEvent event) { // touch dispatcher
		int evX = (int) event.getX();
		int evY = (int) event.getY();

		int operType = OPER_DEFAULT;
		// int operType = 0;
		operType = getOperationType(event);

		switch (operType) {
			case OPER_TRANSLATE:
				translate(evX, evY);
				break;
			case OPER_SCALE:
				scale(event);
				break;
			case OPER_ROTATE:
				rotate(event);
				break;
		}

		lastPoint.x = evX;
		lastPoint.y = evY;

		lastOper = operType;
		invalidate(); // redraw
		return true;
	}

	/**
	 * @author ZYJ
	 * Translates (moves) the image.
	 * @param evx
	 * @param evy
	 */
	private void translate(int evx, int evy) {

		// **************************************************************
		// Work out how far the touch moved and update the pivot accordingly
		// **************************************************************
		prePivot.x += evx - lastPoint.x;
		prePivot.y += evy - lastPoint.y;

		deltaX = prePivot.x - lastPivot.x;
		deltaY = prePivot.y - lastPivot.y;

		lastPivot.x = prePivot.x;
		lastPivot.y = prePivot.y;

		setMatrix(OPER_TRANSLATE); // apply the translation to the matrix

	}

	/**
	 * Scales the image around the control point opposite the one being dragged.
	 * Handle layout:
	 * 0---1---2
	 * |       |
	 * 7   8   3
	 * |       |
	 * 6---5---4
	 *
	 * @param event
	 */
	private void scale(MotionEvent event) {

		int pointIndex = current_ctr * 2; // index of the dragged control point in dstPs

		float px = dstPs[pointIndex];
		float py = dstPs[pointIndex + 1];

		float evx = event.getX();
		float evy = event.getY();

		float oppositeX = 0;
		float oppositeY = 0;
		if (current_ctr < 4 && current_ctr >= 0) {
			oppositeX = dstPs[pointIndex + 8];
			oppositeY = dstPs[pointIndex + 9];
		} else if (current_ctr >= 4) {
			oppositeX = dstPs[pointIndex - 8];
			oppositeY = dstPs[pointIndex - 7];
		}
		float temp1 = getDistanceOfTwoPoints(px, py, oppositeX, oppositeY);
		float temp2 = getDistanceOfTwoPoints(evx, evy, oppositeX, oppositeY);

		this.scaleValue = temp2 / temp1; // scale factor
		symmetricPoint.x = (int) oppositeX;
		symmetricPoint.y = (int) oppositeY;

		Log.i("img", "scaleValue is " + scaleValue);
		setMatrix(OPER_SCALE);
	}

	/**
	 * Rotates the image around its center point (handle 8 in the layout above).
	 *
	 * @param event
	 */
	private void rotate(MotionEvent event) {

		if (event.getPointerCount() == 2) { // two fingers on screen: use the angle between them
			preDegree = computeDegree(new Point((int) event.getX(0),
					(int) event.getY(0)), new Point((int) event.getX(1),
					(int) event.getY(1)));
		} else {
			preDegree = computeDegree(
					new Point((int) event.getX(), (int) event.getY()),
					new Point((int) dstPs[16], (int) dstPs[17]));
		}
		setMatrix(OPER_ROTATE);
		lastDegree = preDegree;
	}

	/**
	 * Computes the angle between the line p1-p2 and the vertical axis.
	 *
	 * @param p1
	 * @param p2
	 * @return
	 */
	public float computeDegree(Point p1, Point p2) {
		float tran_x = p1.x - p2.x;
		float tran_y = p1.y - p2.y;
		float degree = 0.0f;
		float angle = (float) (Math.asin(tran_x
				/ Math.sqrt(tran_x * tran_x + tran_y * tran_y)) * 180 / Math.PI);
		if (!Float.isNaN(angle)) {
			if (tran_x >= 0 && tran_y <= 0) { // first quadrant
				degree = angle;
			} else if (tran_x <= 0 && tran_y <= 0) { // second quadrant
				degree = angle;
			} else if (tran_x <= 0 && tran_y >= 0) { // third quadrant
				degree = -180 - angle;
			} else if (tran_x >= 0 && tran_y >= 0) { // fourth quadrant
				degree = 180 - angle;
			}
		}
		return degree;
	}

	/**
	 * @author ZYJ
	 * Distance between two points.
	 * @param x1
	 * @param y1
	 * @param x2
	 * @param y2
	 * @return
	 */
	private float getDistanceOfTwoPoints(float x1, float y1, float x2, float y2) {
		return (float) (Math
				.sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2)));
	}

	public void onDraw(Canvas canvas) {
		drawBackground(canvas); // draw the mapped rect so it can be checked visually
		// canvas.drawBitmap(mainBmp, matrix, paint); // draw the main image
		drawControlPoints(canvas); // draw the control-point handles
		zyj(canvas,
				BitmapFactory.decodeResource(getResources(), (R.drawable.bg1))); // draw the clipped preview (decoding inside onDraw is only acceptable for a demo)
		drawFrame(canvas); // draw the frame so the mapped points can be checked visually
	}

	private void zyj(Canvas canvas, Bitmap bitmap) {
		Matrix m = new Matrix();
		float beishu = (float) this.getWidth() / bitmap.getWidth(); // scale factor that fits the bitmap to the view width
		m.postScale(beishu, beishu);
		Rect r = new Rect();
		r.left = (int) dstPs[0];
		r.right = (int) (dstPs[4]);
		r.top = (int) dstPs[1];
		r.bottom = (int) (dstPs[13]);
		canvas.clipRect(r);
		canvas.drawBitmap(bitmap, m, paint);
	}

	private void drawBackground(Canvas canvas) {
		canvas.drawRect(dstRect, paintRect);
	}

	private void drawFrame(Canvas canvas) {
		canvas.drawLine(dstPs[0], dstPs[1], dstPs[4], dstPs[5], paintFrame);
		canvas.drawLine(dstPs[4], dstPs[5], dstPs[8], dstPs[9], paintFrame);
		canvas.drawLine(dstPs[8], dstPs[9], dstPs[12], dstPs[13], paintFrame);
		canvas.drawLine(dstPs[0], dstPs[1], dstPs[12], dstPs[13], paintFrame);
		canvas.drawPoint(dstPs[16], dstPs[17], paintFrame);
	}

	private void drawControlPoints(Canvas canvas) {
		for (int i = 0; i < dstPs.length; i += 2) {
			if (i == 16) {
				continue;
			}
			canvas.drawBitmap(controlBmp, dstPs[i] - controlBmpWidth / 2,
					dstPs[i + 1] - controlBmpHeight / 2, paint);
		}
	}

	public Bitmap startCrop() {
		BitmapFactory.Options opts = new BitmapFactory.Options();
		opts.inDensity = getResources().getDisplayMetrics().densityDpi;
		opts.inTargetDensity = getResources().getDisplayMetrics().densityDpi;
		Bitmap tmp = BitmapFactory.decodeResource(getResources(),
				R.drawable.bg1, opts);
		float multiple = ((float) (tmp.getWidth()) / this.getWidth());
		int x = (int) (dstPs[0] * multiple);
		int y = (int) (dstPs[1] * multiple);
		if (x < 0) {
			x = 0;
		}
		if (y < 0) {
			y = 0;
		}
		// (debug output could be printed here)
		Bitmap result = Bitmap.createBitmap(tmp, x, y, tmp.getWidth() - x,
				tmp.getHeight() - y);
		return result;
	}
}

The code is a bit long. It implements scaling, rotating and moving the image, and at its core all of this is done with a Matrix. The startCrop() method then crops the image according to the frame the user has positioned, giving us the image we need.
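
A minimal usage sketch (the Activity name and the button hookup here are assumptions): put MyView on screen, let the user position the green frame, then call startCrop() to obtain the cropped Bitmap.

public class CropActivity extends Activity {

	private MyView cropView;

	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		cropView = new MyView(this); // the custom crop view shown above
		setContentView(cropView);
	}

	// called, for example, from a "Done" button or an options menu item
	private void onCropConfirmed() {
		Bitmap cropped = cropView.startCrop(); // crop according to the frame the user positioned
		// ... display, save or upload the cropped bitmap
	}
}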

Notes:

1. You may find that you invoke the camera but never get its result back. There are two likely causes: first, the image is too large and something goes wrong while passing the data, in which case you need to pass the photo via a Uri; second, the Activity's launch mode is misconfigured. To receive the camera result, the launch mode needs to be the default standard; I once set it to singleInstance and could never get the result. So check whether your Activity's launch mode is the problem, and if in doubt, try each of the modes.

2. The custom cropping code I posted is also adapted from earlier work by others, with a few small changes of my own. It is based on a plain View, so you have to give it a background image before you can start cropping.

OK, that covers calling the system camera, the system crop, and a custom crop of our own.





