A while back I built a small project that required a fully custom camera with all sorts of features. Thanks to the official API documentation and various other resources, I eventually got it working using the AVFoundation framework. But then I noticed that captured photos came out rotated 90°, and the image extracted by the crop frame was positioned incorrectly. After most of a day of digging, I found that the image produced by the system's AVCaptureStillImageOutput carries an orientation flag that is never baked into the pixel data, so you have to rotate it yourself.

Since I've been developing in Swift since the start of this year, I won't post Objective-C code here; what follows is Swift 2.3. If you're on Swift 3.0, fix up the compiler errors yourself as you go (3.0 really did change a lot).
Here is the helper method I wrapped up:
```swift
// MARK: - Fix image orientation
func fixOrientation(aImage: UIImage) -> UIImage {
    // Nothing to do if the pixels already match the display orientation
    if aImage.imageOrientation == .Up {
        return aImage
    }

    var transform = CGAffineTransformIdentity

    // Step 1: rotate the coordinate system to undo the EXIF rotation
    switch aImage.imageOrientation {
    case .Down, .DownMirrored:
        transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height)
        transform = CGAffineTransformRotate(transform, CGFloat(M_PI))
    case .Left, .LeftMirrored:
        transform = CGAffineTransformTranslate(transform, aImage.size.width, 0)
        transform = CGAffineTransformRotate(transform, CGFloat(M_PI_2))
    case .Right, .RightMirrored:
        transform = CGAffineTransformTranslate(transform, 0, aImage.size.height)
        transform = CGAffineTransformRotate(transform, CGFloat(-M_PI_2))
    default:
        break
    }

    // Step 2: undo any mirroring
    switch aImage.imageOrientation {
    case .UpMirrored, .DownMirrored:
        transform = CGAffineTransformTranslate(transform, aImage.size.width, 0)
        transform = CGAffineTransformScale(transform, -1, 1)
    case .LeftMirrored, .RightMirrored:
        transform = CGAffineTransformTranslate(transform, aImage.size.height, 0)
        transform = CGAffineTransformScale(transform, -1, 1)
    default:
        break
    }

    // Step 3: redraw the pixels into a bitmap context with the corrective transform applied
    let ctx = CGBitmapContextCreate(nil,
                                    Int(aImage.size.width),
                                    Int(aImage.size.height),
                                    CGImageGetBitsPerComponent(aImage.CGImage),
                                    0,
                                    CGImageGetColorSpace(aImage.CGImage),
                                    CGImageGetBitmapInfo(aImage.CGImage).rawValue)!
    CGContextConcatCTM(ctx, transform)

    switch aImage.imageOrientation {
    case .Left, .LeftMirrored, .Right, .RightMirrored:
        // Width and height are swapped for 90° rotations
        CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.height, aImage.size.width), aImage.CGImage)
    default:
        CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.width, aImage.size.height), aImage.CGImage)
    }

    let cgimg = CGBitmapContextCreateImage(ctx)!
    return UIImage(CGImage: cgimg)
}
```
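To show where this fits in the capture flow, here is a minimal sketch (Swift 2.3) of grabbing a still frame and running it through the helper. `stillImageOutput` is assumed to be an AVCaptureStillImageOutput already attached to a running AVCaptureSession; the function name `capturePhoto` is my own, not from the project:

```swift
import AVFoundation
import UIKit

// Hypothetical wrapper: capture one still frame and return it with the
// orientation flag baked into the pixel data.
func capturePhoto(stillImageOutput: AVCaptureStillImageOutput,
                  completion: (UIImage?) -> Void) {
    guard let connection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) else {
        completion(nil)
        return
    }
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(connection) { buffer, error in
        guard let sampleBuffer = buffer where error == nil else {
            completion(nil)
            return
        }
        let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
        guard let raw = UIImage(data: data) else {
            completion(nil)
            return
        }
        // The decoded JPEG still carries the EXIF orientation flag;
        // fixOrientation rotates the actual pixels to match.
        completion(fixOrientation(raw))
    }
}
```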
When you need a crop frame to cut out part of an image, all you're really doing is specifying a region relative to the original image and generating a new image from it. In my testing, I found this a fairly simple way to implement cropping:
```swift
// MARK: - Crop image
func cutImageFromAlbum(cutImage: UIImage) -> UIImage {
    // The crop region, expressed relative to the original image.
    // Screen-point coordinates must be multiplied by the screen scale,
    // because CGImage works in pixels, not points.
    let subImageRect = CGRectMake((SCREENWIDTH / 6) * kDeviceScale,
                                  (SCREENHEIGHT / 6) * kDeviceScale,
                                  quJingWidth * kDeviceScale,
                                  quJingHeight * kDeviceScale)
    let imageRef: CGImageRef = cutImage.CGImage!
    let subImageRef: CGImageRef = CGImageCreateWithImageInRect(imageRef, subImageRect)!
    // Wrap the cropped CGImage back into a UIImage at the device scale.
    // (The original version also drew into a UIGraphicsBeginImageContext
    // context, but that result was never used, so it's removed here.)
    let subImage = UIImage(CGImage: subImageRef, scale: kDeviceScale,
                           orientation: cutImage.imageOrientation)
    // Return the cropped portion of the image
    return subImage
}
```
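Putting the two helpers together looks like this. This is a hypothetical usage snippet; `rawImage` is assumed to be the UIImage decoded from the camera's JPEG data, and `imageView` is whatever view displays the result:

```swift
// Bake the orientation into the pixels first, then crop —
// cropping a still-rotated image would cut the wrong region.
let upright = fixOrientation(rawImage)
let cropped = cutImageFromAlbum(upright)
imageView.image = cropped
```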
Note:
kDeviceScale is simply UIScreen.mainScreen().scale, i.e. the screen's point-to-pixel ratio: on a 640x960 screen UIScreen.mainScreen().scale = 2, and on a 320x480 screen it is 1. When computing the crop region's position above, you absolutely must multiply by kDeviceScale, or the region you crop won't be anywhere near the region you intended. Remember this pitfall: I found it through painful firsthand experience. What a lesson!
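For reference, the globals used above could be defined like this. Only kDeviceScale is spelled out in the post; the SCREENWIDTH/SCREENHEIGHT definitions and the crop-frame dimensions (quJingWidth/quJingHeight) are my assumptions, so substitute your own viewfinder values:

```swift
import UIKit

let SCREENWIDTH  = UIScreen.mainScreen().bounds.size.width
let SCREENHEIGHT = UIScreen.mainScreen().bounds.size.height
let kDeviceScale = UIScreen.mainScreen().scale      // point-to-pixel ratio

// Crop-frame (viewfinder) size in points — example values only
let quJingWidth: CGFloat  = 300
let quJingHeight: CGFloat = 300
```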