Preface
Using the DeepLabV3 image segmentation model to add, remove, and replace image backgrounds in an iOS app.
Core ML is Apple's mobile machine learning framework; it allows models to be deployed, run, and retrained on the device.
As a first learning project, this article uses portrait recognition to implement one of the most popular machine learning use cases: image segmentation in an iOS app.
一. Obtaining the DeepLab Core ML Model
1. The Core ML model
On the Apple Developer site you can find Create ML, an overview and explanation of Core ML, its converters, and a set of pre-trained models, to learn more about the technology.
2. Importing the Core ML model
Start a new Xcode project with Swift as the user interface, and drag and drop the downloaded Core ML file into the project.
二. Face Detection
1. Reading the photo library and importing an image
@IBAction func getImage(_ sender: UIButton) {
    // Present an action sheet to choose between the camera and the photo library
    let actionSheetController = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
    let cancelAction = UIAlertAction(title: "Cancel", style: .cancel) { _ in
        print("Tapped Cancel")
    }
    let takingPicturesAction = UIAlertAction(title: "Take Photo", style: .default) { _ in
        self.getImageGo(type: 1)
    }
    let photoAlbumAction = UIAlertAction(title: "Photo Library", style: .default) { _ in
        self.getImageGo(type: 2)
    }
    actionSheetController.addAction(cancelAction)
    actionSheetController.addAction(takingPicturesAction)
    actionSheetController.addAction(photoAlbumAction)
    self.present(actionSheetController, animated: true, completion: nil)
}
Tapping "Photo Library" imports an image. "Take Photo" only works on a physical iPhone; it is not available in the Simulator.
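To avoid problems when the camera is unavailable (as in the Simulator), the source type can be checked before presenting the picker. A minimal sketch, using the real UIKit API `UIImagePickerController.isSourceTypeAvailable(_:)`; the wrapper name and the fall-back behavior are assumptions, not part of the original code:

```swift
// Sketch: guard against an unavailable camera (e.g. on the Simulator).
// `safeGetImageGo` is a hypothetical wrapper around the getImageGo(type:)
// method defined later in this section.
func safeGetImageGo(type: Int) {
    if type == 1 && !UIImagePickerController.isSourceTypeAvailable(.camera) {
        // No camera on this device: fall back to the photo library (assumption)
        getImageGo(type: 2)
        return
    }
    getImageGo(type: type)
}
```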
2. Detecting whether the image contains a face, and warning when it does not
func detect() {
    let person = CIImage(cgImage: image.image!.cgImage!)
    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: person)
    if faces?.first is CIFaceFeature {
        print("Face detected!")
    } else {
        let alert = UIAlertController(title: "Notice", message: "No face detected", preferredStyle: .alert)
        let alertAction = UIAlertAction(title: "OK", style: .default, handler: nil)
        alert.addAction(alertAction)
        self.present(alert, animated: true, completion: nil)
    }
}
func getImageGo(type: Int) {
    takingPicture = UIImagePickerController.init()
    if type == 1 {
        takingPicture.sourceType = .camera
    } else if type == 2 {
        takingPicture.sourceType = .photoLibrary
    }
    // Whether to allow editing; set to true to crop the picked image to a square
    takingPicture.allowsEditing = false
    takingPicture.delegate = self
    present(takingPicture, animated: true, completion: nil)
}
// The image returned from the camera or the photo library
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    takingPicture.dismiss(animated: true, completion: nil)
    if takingPicture.allowsEditing == false {
        // Original image
        image.image = info[UIImagePickerController.InfoKey.originalImage] as? UIImage
    } else {
        // Edited (cropped) image
        image.image = info[UIImagePickerController.InfoKey.editedImage] as? UIImage
    }
    self.detect()
}
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
    dismiss(animated: true, completion: nil)
}
三. Running Image Segmentation with a Vision Request
1. Set up a Vision request to run the DeepLabV3 image segmentation model:
func runVisionRequest() {
    guard let model = try? VNCoreMLModel(for: DeepLabV3(configuration: .init()).model)
    else { return }
    let request = VNCoreMLRequest(model: model, completionHandler: visionRequestDidComplete)
    request.imageCropAndScaleOption = .scaleFill
    DispatchQueue.global().async { [self] in
        let handler = VNImageRequestHandler(cgImage: inputimage.cgImage!, options: [:])
        do {
            try handler.perform([request])
        } catch {
            print(error)
        }
    }
}
- As of iOS 14, Core ML deprecates the default initializer (DeepLabV3()), so the model is created with init(configuration:) instead.
- VNCoreMLModel is a container for a Core ML model. The model must be wrapped in this type in order to run it through a VNCoreMLRequest.
- When the VNCoreMLRequest finishes, it triggers the completion handler defined in the visionRequestDidComplete function.
- VNImageRequestHandler is where the Vision request is actually triggered. It receives the input image (which Vision preprocesses to match the model's input size), and the handler performs the VNCoreMLRequest.
2. Portrait segmentation details: retrieving the segmentation mask from the output
Once the VNImageRequest completes, the results can be processed in the visionRequestDidComplete completion handler defined above.
The code is as follows (example):
func visionRequestDidComplete(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        if let observations = request.results as? [VNCoreMLFeatureValueObservation],
           let segmentationmap = observations.first?.featureValue.multiArrayValue {
            // image(min:max:) and resizedImage(for:) are helper extensions
            // (e.g. from the CoreMLHelpers project), not part of the SDK
            let segmentationMask = segmentationmap.image(min: 0, max: 1)
            self.image.image = segmentationMask!.resizedImage(for: self.image.image!.size)!
            self.maskInputImage()
        }
    }
}
The original image and the resulting mask look like this:
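Since `image(min:max:)` comes from helper extensions rather than the SDK, it helps to see roughly what such a conversion does. A minimal sketch that maps a 2-D MLMultiArray of class labels to grayscale byte values; the array shape and element type are assumptions:

```swift
import CoreML

// Sketch: turn a (height x width) MLMultiArray of segmentation labels into
// grayscale bytes (0 = background, 255 = any other class).
// Assumes a 2-D Int32 array; a real helper also builds a CGImage from the bytes.
func maskBytes(from array: MLMultiArray) -> [UInt8] {
    let height = array.shape[0].intValue
    let width = array.shape[1].intValue
    var bytes = [UInt8](repeating: 0, count: width * height)
    for y in 0..<height {
        for x in 0..<width {
            let label = array[[y, x] as [NSNumber]].int32Value
            bytes[y * width + x] = label == 0 ? 0 : 255
        }
    }
    return bytes
}
```

The resulting byte buffer can then be rendered into a CGImage (for example via a grayscale CGContext) to obtain the mask image shown above.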
3. Adding a new background
Pass the segmentation mask to the maskInputImage function, which uses the mask to crop the portrait out of the original image and blends it with a new background. The background can be another image or a solid color; this example uses the second option.
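`UIImage.imageFromColor` used below is not an SDK method but a small custom extension. A minimal sketch, assuming it simply renders a solid color at the requested size and scale:

```swift
import UIKit

extension UIImage {
    // Sketch of the custom helper used by maskInputImage:
    // renders a solid-color image of the given size and scale (assumed behavior).
    static func imageFromColor(color: UIColor, size: CGSize, scale: CGFloat) -> UIImage? {
        let format = UIGraphicsImageRendererFormat()
        format.scale = scale
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { context in
            color.setFill()
            context.fill(CGRect(origin: .zero, size: size))
        }
    }
}
```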
func maskInputImage() {
    let bgImage = UIImage.imageFromColor(color: .darkGray, size: self.image.image!.size, scale: self.image.image!.scale)
    let beginImage = CIImage(cgImage: inputimage.cgImage!)
    let background = CIImage(cgImage: bgImage!.cgImage!)
    let mask = CIImage(cgImage: image.image!.cgImage!)
    if let compositeImage = CIFilter(name: "CIBlendWithMask", parameters: [kCIInputImageKey: beginImage,
                                                                           kCIInputBackgroundImageKey: background,
                                                                           kCIInputMaskImageKey: mask])?.outputImage {
        let ciContext = CIContext(options: nil)
        let filteredImageRef = ciContext.createCGImage(compositeImage, from: compositeImage.extent)
        self.image.image = UIImage(cgImage: filteredImageRef!)
        let testImage = UIImageView.init(frame: CGRect(x: 50, y: 100, width: 200, height: 200))
        testImage.image = UIImage(cgImage: filteredImageRef!)
    }
}
It returns the following result:
Note that if you tap the compose button without importing a portrait first, the app crashes because no corresponding image is found.
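That crash can be avoided by bailing out early when no image has been picked. A minimal sketch; `composeTapped` is a hypothetical action name and the alert text is an assumption:

```swift
// Sketch: guard against composing before a portrait has been imported.
// `composeTapped` is a hypothetical name for the compose button's action.
@IBAction func composeTapped(_ sender: UIButton) {
    guard image.image != nil else {
        let alert = UIAlertController(title: "Notice", message: "Please import a portrait first", preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        present(alert, animated: true)
        return
    }
    runVisionRequest()
}
```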
四. Saving the Image, and Dragging the Imported Portrait as a Sticker
1. Saving the mask-blended image at 2048 × 3072 pixels
import UIKit
import Photos

class SaveImageViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
    }
    func loadImage(image: UIImage) {
        UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(image:didFinishSavingWithError:contextInfo:)), nil)
    }
    @objc func image(image: UIImage, didFinishSavingWithError error: NSError?, contextInfo: AnyObject) {
        if let error = error {
            let ac = UIAlertController(title: "Save error", message: error.localizedDescription, preferredStyle: .alert)
            ac.addAction(UIAlertAction(title: "OK", style: .default))
            present(ac, animated: true)
        } else {
            let ac = UIAlertController(title: "Saved!", message: "Your altered image has been saved to your photos", preferredStyle: .alert)
            ac.addAction(UIAlertAction(title: "OK", style: .default))
            present(ac, animated: true)
        }
    }
}

// In ViewController:
@IBAction func SaveImage(_ sender: UIButton) {
    let reSize = CGSize(width: 2048, height: 3072)
    image.image = image.image!.reSizeImage(reSize: reSize)
    SaveImageViewController().loadImage(image: self.image.image!)
}
After tapping the save button, the saved image can be viewed in the photo library.
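`reSizeImage(reSize:)` is likewise a custom extension rather than an SDK method. A minimal sketch that redraws the image at the target pixel size:

```swift
import UIKit

extension UIImage {
    // Sketch of the custom resize helper used by the save action:
    // redraws the image into the target size.
    func reSizeImage(reSize: CGSize) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1 // treat reSize as pixels rather than points (assumption)
        let renderer = UIGraphicsImageRenderer(size: reSize, format: format)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: reSize))
        }
    }
}
```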
2. Implementing the portrait sticker
The sticker feature is implemented in StickerView.
import UIKit

@objc protocol StickerViewDelegate {
    @objc func stickerViewDidBeginMoving(_ stickerView: StickerView)
    @objc func stickerViewDidChangeMoving(_ stickerView: StickerView)
    @objc func stickerViewDidEndMoving(_ stickerView: StickerView)
    @objc func stickerViewDidTap(_ stickerView: StickerView)
}

class StickerView: UIView {
    var delegate: StickerViewDelegate!
    var contentView: UIView!

    init(contentView: UIView) {
        self.defaultInset = 11
        self.defaultMinimumSize = 4 * self.defaultInset
        var frame = contentView.frame
        frame = CGRect(x: 0, y: 0, width: frame.size.width + CGFloat(self.defaultInset) * 2, height: frame.size.height + CGFloat(self.defaultInset) * 2)
        super.init(frame: frame)
        //self.backgroundColor = UIColor.clear
        self.addGestureRecognizer(self.moveGesture)
        self.addGestureRecognizer(self.tapGesture)
        // Set up the content view
        self.contentView = contentView
        //self.contentView.center = CGRectGetCenter(self.bounds)
        self.contentView.isUserInteractionEnabled = false
        self.contentView.layer.allowsEdgeAntialiasing = true
        self.addSubview(self.contentView)
        //self.minimumSize = self.defaultMinimumSize
        //self.outlineBorderColor = .brown
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private var defaultInset: NSInteger
    private var defaultMinimumSize: NSInteger
    // Variables for moving the view
    private var beginningPoint = CGPoint.zero
    private var beginningCenter = CGPoint.zero
    private lazy var moveGesture = {
        return UIPanGestureRecognizer(target: self, action: #selector(handleMoveGesture(_:)))
    }()
    private lazy var tapGesture = {
        return UITapGestureRecognizer(target: self, action: #selector(handleTapGesture(_:)))
    }()

    // MARK: - Gesture Handlers
    @objc func handleMoveGesture(_ recognizer: UIPanGestureRecognizer) {
        let touchLocation = recognizer.location(in: self.superview)
        switch recognizer.state {
        case .began:
            self.beginningPoint = touchLocation
            self.beginningCenter = self.center
            self.delegate?.stickerViewDidBeginMoving(self)
        case .changed:
            self.center = CGPoint(x: self.beginningCenter.x + (touchLocation.x - self.beginningPoint.x), y: self.beginningCenter.y + (touchLocation.y - self.beginningPoint.y))
            self.delegate?.stickerViewDidChangeMoving(self)
        case .ended:
            self.center = CGPoint(x: self.beginningCenter.x + (touchLocation.x - self.beginningPoint.x), y: self.beginningCenter.y + (touchLocation.y - self.beginningPoint.y))
            self.delegate?.stickerViewDidEndMoving(self)
        default:
            break
        }
    }

    @objc func handleTapGesture(_ recognizer: UITapGestureRecognizer) {
        self.delegate?.stickerViewDidTap(self)
    }
}

extension StickerView: UIGestureRecognizerDelegate {
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldReceive touch: UITouch) -> Bool {
        return true
    }
}
Create the sticker from the maskInputImage function in ViewController:
let stickView = StickerView.init(contentView: testImage)
stickView.center = CGPoint.init(x: 150, y: 150)
stickView.delegate = self
self.view.addSubview(stickView)
The StickerView delegate:
@available(iOS 15.0, *)
extension ViewController: StickerViewDelegate {
    func stickerViewDidBeginMoving(_ stickerView: StickerView) {
        //self.selectedStickerView = stickerView
    }
    func stickerViewDidChangeMoving(_ stickerView: StickerView) {
    }
    func stickerViewDidEndMoving(_ stickerView: StickerView) {
    }
    func stickerViewDidTap(_ stickerView: StickerView) {
    }
}
The result looks like this:
Shortcoming: the image can be moved around as a sticker, but the sticker contains the whole composited image, background included, rather than just the portrait as intended. This still needs further study and improvement.
Summary
This project covered face detection in images, portrait segmentation, and compositing and saving a new background. The sticker feature is still rough, and deeper, more thorough study is needed there. As a junior iOS developer, building more projects and practicing more is the way to discover one's weaknesses and improve on them.