Technical sharing is welcome; if you repost or copy, please credit the source. Blog author: Jack__0023.
Thanks to the sources of some of the theory and key code (links below).
The result looks like this:
We swap the female face from the middle, red-tinted image (hereafter face C) onto the male face in the first image (hereafter face D); the swapped result is the third image (hereafter face E).
1. Overview
In short, there are four steps.
The notes below mix in my own understanding; you can also look up the corresponding theory online.
1-1. Use the dlib detection module to find the 68 facial landmarks (since dlib is open source, I never have to worry about a third-party service starting to charge later);
1-2. Compute the convex hull of those 68 landmarks and keep the hull points (this step matters: without it, even after steps 1-3 and 1-4 the swap easily degenerates into something awful, such as two half-faces stitched together);
1-3. Perform Delaunay triangulation and affine warping (this cuts face C into triangles and maps each one onto face D via the landmark coordinates);
1-4. Finish with seamless cloning (this blends the warped face into the surrounding color so the result does not look jarring and fake).
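The four steps above map directly onto dlib/OpenCV calls used in the code later in this post. As a condensed pseudocode outline (not runnable, just the call flow):

```text
landmarks1 = FaceDet.detect(bitmapD)          // step 1: 68 landmarks via dlib
landmarks2 = FaceDet.detect(bitmapC)
hullIndex  = Imgproc.convexHull(landmarks2)   // step 2: hull indices, applied
hull1, hull2 = landmarks[hullIndex]           //         to BOTH landmark sets
triangles  = Subdiv2D.getTriangleList(hull2)  // step 3: Delaunay triangulation
for each triangle:
    warpAffine(srcTriangle -> dstTriangle)    //         piecewise affine warp
result = Photo.seamlessClone(warped, target,  // step 4: seamless blend
                             mask, center, Photo.NORMAL_CLONE)
```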
2. Caveat (image resolution)
Pick reasonably large images for the swap, and keep the two resolutions close to each other, so as to avoid severe downsampling; otherwise, at the triangulation-and-warp stage you will see thin black seams. Something like 1080*1920 is a good size.
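One way to quantify the mismatch before swapping is to compare the sizes of the two face bounding boxes and rescale the source so the faces end up comparable. The helper below is a minimal, OpenCV-free illustration of that idea (the class and method names are my own invention; the actual resize would then be done with `Imgproc.resize`):

```java
public class ScaleHelper {
    /**
     * Scale factor that makes the source face roughly the same size as the
     * target face, based on their bounding-box dimensions.
     * A result far from 1.0 means heavy resampling and visible artifacts.
     */
    public static double matchScale(int srcW, int srcH, int dstW, int dstH) {
        double sx = (double) dstW / srcW;
        double sy = (double) dstH / srcH;
        // Use the larger factor so the scaled source still covers the target box.
        return Math.max(sx, sy);
    }

    public static void main(String[] args) {
        // A 200x240 source face mapped onto a 400x480 target face: scale by 2.
        System.out.println(matchScale(200, 240, 400, 480)); // prints 2.0
        // The reverse direction would downscale by half.
        System.out.println(matchScale(400, 480, 200, 240)); // prints 0.5
    }
}
```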
3. Prerequisites
3-1. An OpenCV environment
If you have not set it up yet, see my earlier post, which walks through face detection with OpenCV.
3-2. The dlib module
This one took me quite a while to find. If you think this post is useful and you have credits to spare (it only costs two), you can download my packaged copy here; otherwise, download it from the dlib link in the GitHub repo I recommended; it is exactly the same thing.
You also need the file shape_predictor_68_face_landmarks.dat, downloadable from the dlib website; it is the model for the 68 facial landmarks.
4. Code (all in one piece this time, not split up)
This code comes from a separate test project of mine, so it contains a lot of my experiments and guesses and is fairly messy; if I have time later I will clean it up and repost it. A few imports are missing because I deleted my own utility classes (MgThread, FileUtils, FileConts and so on); they are custom classes that are trivial to rewrite yourself.
import android.annotation.SuppressLint;
import android.app.Activity;
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Point;
import android.graphics.drawable.BitmapDrawable;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;
import com.tzutalin.dlib.Constants;
import com.tzutalin.dlib.FaceDet;
import com.tzutalin.dlib.VisionDetRet;
import org.opencv.android.OpenCVLoader;
import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfFloat6;
import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Subdiv2D;
import org.opencv.photo.Photo;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
public class FaceOffActivity extends Activity {
private final String TAG = "yaoxumin33";
private FaceDet mFaceDet;
List<VisionDetRet> results;
private Paint mFaceLandmardkPaint;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
MgThread.exceute(new Runnable() {
@Override
public void run() {
FileUtils.delAllFile(FileConts.PATH_CUT_IMAGE);
FileUtils.exitOrCreatePath(FileConts.PATH_CUT_IMAGE);
}
});
}
@SuppressLint("ResourceType")
@Override
public void onResume() {
super.onResume();
if (!OpenCVLoader.initDebug()) {
Log.e("yaoxumin33", "OpenCV init error");
} else {
Log.d("yaoxumin33", "OpenCV library found inside package. Using it!");
}
initSetting();
// Load the first image resource and detect its face landmarks
Resources r = getApplicationContext().getResources();
InputStream is = r.openRawResource(R.drawable.test8);
BitmapDrawable bmpDraw = new BitmapDrawable(is);
Bitmap bitmap = bmpDraw.getBitmap();
// Detect the face landmarks (should be 68 of them)
List<Point> points1 = checkFace(bitmap);
// Load the second image resource and detect its face landmarks
is = r.openRawResource(R.drawable.test7);
bmpDraw = new BitmapDrawable(is);
Bitmap bitmap2 = bmpDraw.getBitmap();
// Detect the face landmarks
List<Point> points2 = checkFace(bitmap2);
// Convert to Mat for easier computation
Mat mat1 = new Mat();
Utils.bitmapToMat(bitmap, mat1, true);
Mat mat2 = new Mat();
Utils.bitmapToMat(bitmap2, mat2, true);
Imgproc.cvtColor(mat1, mat1, Imgproc.COLOR_RGBA2BGR);
Imgproc.cvtColor(mat2, mat2, Imgproc.COLOR_RGBA2BGR);
Log.d("yaoxumin33", "CV_8UC3 : " + CvType.CV_8UC3 + ",CvType.CV_8UC1 : " + CvType.CV_8UC1);
Log.d("yaoxumin33", "mat1 : " + mat1.type());
// Imgproc.GaussianBlur(mat1, mat1, new Size(3, 3), 0, 0);
// Imgproc.bilateralFilter(mat1, mat3, 3, 50, 50);
// Imgproc.GaussianBlur(mat3, mat3, new Size(3, 3), 0, 0);
/*Mat mat3 = beautiful(mat1);
Mat dest = new Mat(new Size(mat3.cols(), mat1.rows()), mat1.type());
Mat temp2 = dest.colRange(0, dest.cols());
mat3.copyTo(temp2);
String filename = FileConts.PATH_CUT_IMAGE + "dest.jpg";
Imgcodecs.imwrite(filename, dest);
Log.d("yaoxumin33", "dest");*/
/*filename = FileConts.PATH_CUT_IMAGE + "done-1.jpg";
Imgcodecs.imwrite(filename, brightness(dest, 1.0f, 98));
Log.d("yaoxumin33", "done-1");
filename = FileConts.PATH_CUT_IMAGE + "done-2.jpg";
Imgcodecs.imwrite(filename, brightness(dest, 1.15f, 98));
Log.d("yaoxumin33", "done-2");
filename = FileConts.PATH_CUT_IMAGE + "done-3.jpg";
Imgcodecs.imwrite(filename, brightness(dest, 1.0f, 118));
Log.d("yaoxumin33", "done-3");
filename = FileConts.PATH_CUT_IMAGE + "done-4.jpg";
Imgcodecs.imwrite(filename, brightness(dest, 1.15f, 118));
Log.d("yaoxumin33", "done-4");*/
//adjust the brightness and contrast of dest
/*Mat dest2 = brightness(dest, 1.0f, 88);
Imgproc.GaussianBlur(dest2, dest2, new Size(9, 9), 0, 0);*/
/*Mat result = new Mat();
int bilateralFilterVal = 30; // bilateral filter strength
Imgproc.bilateralFilter(mat1, result, bilateralFilterVal, // overall skin smoothing
bilateralFilterVal * 2, bilateralFilterVal / 2);
Mat matFinal = new Mat();
Imgproc.GaussianBlur(result, matFinal, new Size(0, 0), 9);
Core.addWeighted(result, 1.5, matFinal, -0.5, 0, matFinal);
String filename = FileConts.PATH_CUT_IMAGE + "matFinal.jpg";
Imgcodecs.imwrite(filename, matFinal);*/
Log.d("yaoxumin33", "beautify done, starting face swap");
faceOff(points1, points2, mat1, mat2);
}
/**
* @Description Whitening test method #1: a bit faster, but the result is not as good as I wanted. Based on a common beautify formula.
* @author 姚旭民
* @date 2020/1/13 14:44
*
* @param image the image resource to beautify
* @return the beautified Mat
*/
public Mat beautiful(Mat image) {
Mat dst = new Mat();
// int value1 = 3, value2 = 1; smoothing strength vs. detail retention
int value1 = 3, value2 = 50;
int dx = value1 * 5; // bilateral filter parameter
double fc = value1 * 25; // bilateral filter parameter
double p = 0.1f; // opacity
Mat temp1 = new Mat(), temp2 = new Mat(), temp3 = new Mat(), temp4 = new Mat();
// bilateral filtering
Imgproc.bilateralFilter(image, temp1, dx, fc, fc);
// temp2 = (temp1 - image + 128);
Mat temp22 = new Mat();
Core.subtract(temp1, image, temp22);
// Core.subtract(temp22, new Scalar(128), temp2);
Core.add(temp22, new Scalar(128, 128, 128, 128), temp2);
// Gaussian blur
Imgproc.GaussianBlur(temp2, temp3, new Size(2 * value2 - 1, 2 * value2 - 1), 0, 0);
// temp4 = image + 2 * temp3 - 255;
Mat temp44 = new Mat();
temp3.convertTo(temp44, temp3.type(), 2, -255);
Core.add(image, temp44, temp4);
// dst = image*p + temp4*(1 - p);
Core.addWeighted(image, p, temp4, 1 - p, 0.0, dst);
Core.add(dst, new Scalar(10, 10, 10), dst);
return dst;
}
/**
* @Description Raises brightness and contrast, i.e. "whitening" for short; the underlying formula is the standard linear transform dst = alpha*src + beta
* @author 姚旭民
* @date 2020/1/13 14:47
*/
protected Mat brightness(Mat src, float alpha, float beta) {
Mat result = Mat.zeros(src.size(), src.type());
int width = src.cols();
int height = src.rows();
double[] value;
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
double[] pixel = src.get(i, j);
value = result.get(i, j);
// dst = alpha * src + beta, clamped to [0, 255]
value[0] = saturateCast(alpha * pixel[0] + beta);
value[1] = saturateCast(alpha * pixel[1] + beta);
value[2] = saturateCast(alpha * pixel[2] + beta);
result.put(i, j, value);
}
}
return result;
}
protected double saturateCast(double value) {
if (value < 0) {
value = 0;
} else if (value > 255) {
value = 255;
}
return value;
}
@Override
protected void onDestroy() {
super.onDestroy();
if (mFaceDet != null) {
mFaceDet.release();
}
}
public void initSetting() {
if (mFaceDet == null) {
mFaceDet = new FaceDet(Constants.getFaceShapeModelPath());
}
mFaceLandmardkPaint = new Paint();
mFaceLandmardkPaint.setColor(Color.GREEN);
mFaceLandmardkPaint.setStrokeWidth(2);
mFaceLandmardkPaint.setStyle(Paint.Style.STROKE);
}
/**
* @Description Detects the face in the given bitmap; if one is found, returns the set of its landmark points
* @author 姚旭民
* @date 2020/1/13 14:48
*
* @param bitmap an image resource that should contain a face
* @return the face landmark coordinates, or null if no face is found
*/
public List<Point> checkFace(Bitmap bitmap) {
results = mFaceDet.detect(bitmap);
Log.d("yaoxumin33", "checkFace| results : " + results);
if (results != null) {
for (final VisionDetRet ret : results) {
//the 68 detected face landmarks
float resizeRatio = 1.0f;
Bitmap temp = bitmap.copy(bitmap.getConfig(), true);
android.graphics.Rect bounds = new android.graphics.Rect();
bounds.left = (int) (ret.getLeft() * resizeRatio);
bounds.top = (int) (ret.getTop() * resizeRatio);
bounds.right = (int) (ret.getRight() * resizeRatio);
bounds.bottom = (int) (ret.getBottom() * resizeRatio);
Canvas canvas = new Canvas(temp);
canvas.drawRect(bounds, mFaceLandmardkPaint);
// Draw landmark
ArrayList<Point> landmarks = ret.getFaceLandmarks();
Log.d("yaoxumin33", "landmarks found : " + landmarks.size());
// StringBuilder str = new StringBuilder();
for (Point point : landmarks) {
int pointX = (int) (point.x * resizeRatio);
int pointY = (int) (point.y * resizeRatio);
canvas.drawCircle(pointX, pointY, 2, mFaceLandmardkPaint);
// str.append("[").append(pointX).append(",").append(pointY).append("],");
}
// Log.d("yaoxumin33", "str : " + str);
saveBitmap(temp, FileConts.PATH_CUT_IMAGE + System.currentTimeMillis() + "-temp.png");
return landmarks;
}
}
return null;
}
/**
* @Description Saves a bitmap to local storage; you can safely ignore this helper
* @author 姚旭民
* @date 2020/1/13 14:49
*/
public static void saveBitmap(Bitmap bitmap, String path) {
String savePath;
File filePic;
if (Environment.getExternalStorageState().equals(Environment.MEDIA_MOUNTED)) {
savePath = path;
} else {
Log.d("yaoxumin33", "saveBitmap failure : sdcard not mounted");
return;
}
// Log.d(TAG, "saveBitmap savePath : " + savePath);
try {
filePic = new File(savePath);
if (!filePic.exists()) {
filePic.getParentFile().mkdirs();
filePic.createNewFile();
}
FileOutputStream fos = new FileOutputStream(filePic);
bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
fos.flush();
fos.close();
} catch (IOException e) {
Log.d("yaoxumin33", "saveBitmap: " + e.getMessage());
return;
}
Log.d("yaoxumin33", "saveBitmap success: " + filePic.getAbsolutePath());
}
/**
* @Description Core face-swap method: convex hull computation, Delaunay triangulation with affine warping, and seamless cloning all happen here
* @author 姚旭民
* @date 2020/1/13 14:50
*/
public void faceOff(List<Point> hull1s, List<Point> hull2s, Mat imgCV1, Mat imgCV2) {
org.opencv.core.Point[] points1 = pointToOpencvPoint(hull1s);
org.opencv.core.Point[] points2 = pointToOpencvPoint(hull2s);
Mat imgCV1Warped = imgCV2.clone();
imgCV1.convertTo(imgCV1, CvType.CV_32F);
imgCV1Warped.convertTo(imgCV1Warped, CvType.CV_32F);
//Find the convex hull points
MatOfInt hullIndex = new MatOfInt();
Imgproc.convexHull(new MatOfPoint(points2), hullIndex, true);
int[] hullIndexArray = hullIndex.toArray();
int hullIndexLen = hullIndexArray.length;
//Containers for the convex hull points
List<org.opencv.core.Point> hull1 = new LinkedList<>();
List<org.opencv.core.Point> hull2 = new LinkedList<>();
// Keep the landmarks that form the hull (the same indices are used for both faces)
for (int i = 0; i < hullIndexLen; i++) {
hull1.add(points1[hullIndexArray[i]]);
hull2.add(points2[hullIndexArray[i]]);
}
// Delaunay triangulation and affine warping
Rect rect = new Rect(0, 0, imgCV1Warped.cols(), imgCV1Warped.rows());
List<Correspondens> delaunayTri = delaunayTriangulation(hull2, rect);
Log.d("yaoxumin33", "delaunayTri.size : " + delaunayTri.size());
for (int i = 0; i < delaunayTri.size(); ++i) {
List<org.opencv.core.Point> t1 = new LinkedList<>();
List<org.opencv.core.Point> t2 = new LinkedList<>();
Correspondens corpd = delaunayTri.get(i);
for (int j = 0; j < 3; j++) {
t1.add(hull1.get(corpd.getIndex().get(j)));
t2.add(hull2.get(corpd.getIndex().get(j)));
}
imgCV1Warped = warpTriangle(imgCV1, imgCV1Warped, list2MP(t1), list2MP(t2), i);
/*String tempI = i > 9 ? i + "" : "0" + i;
String filename = FileConts.PATH_CUT_IMAGE + tempI + "-imgCV1Warped.jpg";
Imgcodecs.imwrite(filename, imgCV1Warped);
Log.d("yaoxumin33", filename);*/
}
// Seamless cloning
List<org.opencv.core.Point> hull8U = new LinkedList<>();
for (int i = 0; i < hull2.size(); ++i) {
org.opencv.core.Point pt = new org.opencv.core.Point(hull2.get(i).x, hull2.get(i).y);
hull8U.add(pt);
}
Mat mask = Mat.zeros(imgCV2.rows(), imgCV2.cols(), imgCV2.depth());
Imgproc.fillConvexPoly(mask, list2MP(hull8U), new Scalar(255, 255, 255));
Log.d("yaoxumin33", "mask");
String filename = FileConts.PATH_CUT_IMAGE + "mask2.jpg";
Imgcodecs.imwrite(filename, mask);
Rect r = Imgproc.boundingRect(list2MP(hull2));
double x = (r.tl().x + r.br().x) / 2;
double y = (r.tl().y + r.br().y) / 2;
Log.d("yaoxumin33", "r.tl().x : " + r.tl().x + ",r.tl().y : " + r.tl().y);
Log.d("yaoxumin33", "r.br().x : " + r.br().x + ",r.br().y : " + r.br().y);
Log.d("yaoxumin33", "x : " + x + ",y : " + y);
org.opencv.core.Point center = new org.opencv.core.Point(x, y);
Mat result = new Mat();
imgCV1Warped.convertTo(imgCV1Warped, CvType.CV_8UC3);
filename = FileConts.PATH_CUT_IMAGE + "imgCV1Warped.jpg";
Imgcodecs.imwrite(filename, imgCV1Warped);
Log.d("yaoxumin33", "imgCV1Warped");
filename = FileConts.PATH_CUT_IMAGE + "imgCV2.jpg";
Imgcodecs.imwrite(filename, imgCV2);
Log.d("yaoxumin33", "imgCV2");
// Photo.colorChange(imgCV1Warped, imgCV2, result);
Photo.seamlessClone(imgCV1Warped, imgCV2, mask, center, result, Photo.NORMAL_CLONE);
/*int bilateralFilterVal = 30;
Mat result = new Mat();*/
// Imgproc.GaussianBlur(output, output, new Size(3, 3), 0, 0);
filename = FileConts.PATH_CUT_IMAGE + System.currentTimeMillis() + "-result.jpg";
Imgcodecs.imwrite(filename, result);
Log.d("yaoxumin33", "done");
}
/**
* @Description Delaunay triangulation
* @author 姚旭民
* @date 2020/1/13 14:51
*/
public static List<Correspondens> delaunayTriangulation(List<org.opencv.core.Point> hull, Rect rect) {
Subdiv2D subdiv = new Subdiv2D(rect);
for (int i = 0; i < hull.size(); i++) {
subdiv.insert(hull.get(i));
}
MatOfFloat6 triangles = new MatOfFloat6();
subdiv.getTriangleList(triangles);
int cnt = triangles.rows();
float buff[] = new float[cnt * 6];
triangles.get(0, 0, buff);
List<Correspondens> delaunayTri = new LinkedList<>();
for (int i = 0; i < cnt; ++i) {
List<org.opencv.core.Point> points = new LinkedList<>();
points.add(new org.opencv.core.Point(buff[6 * i + 0], buff[6 * i + 1]));
points.add(new org.opencv.core.Point(buff[6 * i + 2], buff[6 * i + 3]));
points.add(new org.opencv.core.Point(buff[6 * i + 4], buff[6 * i + 5]));
Correspondens ind = new Correspondens();
if (rect.contains(points.get(0)) && rect.contains(points.get(1)) && rect.contains(points.get(2))) {
int count = 0;
for (int j = 0; j < 3; j++) {
for (int k = 0; k < hull.size(); k++) {
if (Math.abs(points.get(j).x - hull.get(k).x) < 1.0 && Math.abs(points.get(j).y - hull.get(k).y) < 1.0) {
ind.add(k);
count++;
}
}
}
if (count == 3)
delaunayTri.add(ind);
}
}
return delaunayTri;
}
/**
* @Description Point type conversion, used later for the convex hull computation and similar steps
* @author 姚旭民
* @date 2020/1/13 14:51
*/
public org.opencv.core.Point[] pointToOpencvPoint(List<Point> points) {
org.opencv.core.Point[] result = new org.opencv.core.Point[points.size()];
org.opencv.core.Point temp;
StringBuilder str = new StringBuilder();
int index = 0;
for (Point point : points) {
temp = new org.opencv.core.Point(point.x, point.y);
// result.add(temp);
result[index] = temp;
index++;
str.append("[").append(point.x).append(",").append(point.y).append("]-");
}
return result;
}
/**
* @Description Warps one triangle of the face image
* @author 姚旭民
* @date 2020/1/13 14:52
*/
public Mat warpTriangle(Mat img1, Mat img2, MatOfPoint t1, MatOfPoint t2, int z) {
Rect r1 = Imgproc.boundingRect(t1);
Rect r2 = Imgproc.boundingRect(t2);
org.opencv.core.Point[] t1Points = t1.toArray();
org.opencv.core.Point[] t2Points = t2.toArray();
List<org.opencv.core.Point> t1Rect = new LinkedList<>();
List<org.opencv.core.Point> t2Rect = new LinkedList<>();
List<org.opencv.core.Point> t2RectInt = new LinkedList<>();
for (int i = 0; i < 3; i++) {
t1Rect.add(new org.opencv.core.Point(t1Points[i].x - r1.x, t1Points[i].y - r1.y));
t2Rect.add(new org.opencv.core.Point(t2Points[i].x - r2.x, t2Points[i].y - r2.y));
t2RectInt.add(new org.opencv.core.Point(t2Points[i].x - r2.x, t2Points[i].y - r2.y));
}
// mask: a black rectangle covering the three triangle vertices of the target image
Mat mask = Mat.zeros(r2.height, r2.width, CvType.CV_32FC3);
Imgproc.fillConvexPoly(mask, list2MP(t2RectInt), new Scalar(1.0, 1.0, 1.0), 16, 0);
Mat img1Rect = new Mat();
img1.submat(r1).copyTo(img1Rect);
// img2Rect: the source patch resized and repositioned to fit the mask
Mat img2Rect = Mat.zeros(r2.height, r2.width, img1Rect.type());
img2Rect = applyAffineTransform(img2Rect, img1Rect, t1Rect, t2Rect);
// Log.d("yaoxumin33", "img2Rect : " + img2Rect.channels() + ",img2Rect.size() : " + img2Rect.size() + ",img2Rect.type() : " + img2Rect.type());
// Log.d("yaoxumin33", "mask : " + mask.channels() + ",mask.size() : " + mask.size() + ",mask.type() : " + mask.type());
Core.multiply(img2Rect, mask, img2Rect); // keep only the part of img2Rect inside the mask's triangle
Mat dst = new Mat();
Core.subtract(mask, new Scalar(1.0, 1.0, 1.0), dst);
Core.multiply(img2.submat(r2), dst, img2.submat(r2));
Core.absdiff(img2.submat(r2), img2Rect, img2.submat(r2));
return img2;
}
/**
* @Description Affine transform
* @author 姚旭民
* @date 2020/1/13 14:52
*/
public Mat applyAffineTransform(Mat warpImage, Mat src, List<org.opencv.core.Point> srcTri, List<org.opencv.core.Point> dstTri) {
Mat warpMat = Imgproc.getAffineTransform(list2MP2(srcTri), list2MP2(dstTri));
Imgproc.warpAffine(src, warpImage, warpMat, warpImage.size(), Imgproc.INTER_LINEAR);
return warpImage;
}
/**
* List exchange to MatOfPoint
*
* @param points
* @return
*/
public static MatOfPoint list2MP(List<org.opencv.core.Point> points) {
org.opencv.core.Point[] t = points.toArray(new org.opencv.core.Point[points.size()]);
return new MatOfPoint(t);
}
/**
* List exchange to MatOfPoint2f
*
* @param points
* @return
*/
public static MatOfPoint2f list2MP2(List<org.opencv.core.Point> points) {
org.opencv.core.Point[] t = points.toArray(new org.opencv.core.Point[points.size()]);
return new MatOfPoint2f(t);
}
}
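The whitening method above boils down to the linear transform dst = alpha*src + beta, clamped to the 8-bit range. A standalone, OpenCV-free sketch of just that per-channel formula (class and method names are mine, for illustration only):

```java
public class LinearBrightness {
    /** Clamp a value into the valid 8-bit range, like OpenCV's saturate_cast<uchar>. */
    public static double saturate(double v) {
        return Math.max(0.0, Math.min(255.0, v));
    }

    /** dst = alpha * src + beta per channel; alpha > 1 raises contrast, beta shifts brightness. */
    public static double adjust(double channel, double alpha, double beta) {
        return saturate(alpha * channel + beta);
    }

    public static void main(String[] args) {
        System.out.println(adjust(100, 1.5, 98)); // 1.5*100 + 98 -> prints 248.0
        System.out.println(adjust(200, 1.5, 98)); // 398 clamped -> prints 255.0
    }
}
```

This is exactly why the clamp matters: with bright source pixels the raw result overflows 255 and would wrap or distort without it.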
5. Closing
If you have any questions or want to discuss anything, send me a private message; I will reply when I see it.
Honestly, I do not think this solution is complete. It satisfied the business requirements, but ideally it would also do the following (you can find explanations of both techniques online):
1. Check and rotate the images first, so that the noses in the two face images are tilted at roughly the same angle; aligning them makes the feature replacement look much better;
2. Swap only the facial features rather than the whole face, so that at the seamless cloning stage the surrounding color does not tint the swapped face badly; this also gives a better visual result, e.g. for beautification.
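For suggestion 1, a common approach is to estimate the in-plane rotation from the eye landmarks (in the dlib 68-point scheme, points 36-41 are one eye and 42-47 the other) and rotate one image before swapping. Below is a minimal sketch of just the angle computation (the helper name is mine; the actual rotation would then use Imgproc.getRotationMatrix2D plus Imgproc.warpAffine):

```java
public class AlignHelper {
    /**
     * Roll angle of the face in degrees, computed from the two eye centers.
     * 0 means the eyes are level; positive means the right eye sits lower.
     */
    public static double eyeAngleDegrees(double leftX, double leftY,
                                         double rightX, double rightY) {
        return Math.toDegrees(Math.atan2(rightY - leftY, rightX - leftX));
    }

    public static void main(String[] args) {
        // Level eyes: no rotation needed.
        System.out.println(eyeAngleDegrees(100, 120, 180, 120)); // prints 0.0
        // Right eye 80 px lower across an 80 px span: roughly 45 degrees.
        System.out.println(eyeAngleDegrees(100, 120, 180, 200));
    }
}
```

Rotating one face by the difference of the two angles before step 1-3 should make the triangles line up much better.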
I still lack the background knowledge for these improvements, but I will use my spare time to study them and swap them in. The current version passes acceptance, yet for both my work and my own sake I want to keep researching and deliver a better result.