A Detailed Walkthrough of a Photoshop Face-Swap Technique Found Online (with a Step-by-Step Summary)

This article walks through the concrete steps for swapping faces in Photoshop, including how to correctly duplicate the background layer, select the face with the Lasso tool, contract the selection, and blend the layers, so that the composite does not fail.


Notes

I originally learned this Photoshop face-swap technique on Douyin. When I used it again recently I had forgotten some of the steps, and several attempts failed. After analyzing what went wrong, I summarized the details below so others can learn from them.

Steps

  1. Drag the target photo (the person whose body you are keeping) into Photoshop as the background layer.
  2. Drag the source photo (the person whose face you want) into Photoshop, trace the face with the Lasso tool, press Ctrl+J to copy it onto its own layer, then delete the source photo's layer.
  3. Duplicate the background layer.
  4. Load the cut-out face as a selection and contract the selection by 2-10 pixels (adjust to the specific image).
  5. Select the duplicated background layer and delete the contents of the selection.
  6. Select both the duplicated layer and the face layer, choose Edit > Auto-Blend Layers, pick Panorama, and check Seamless Tones and Colors and Content-Aware Fill Transparent Areas. (An automated analogue of this whole sequence is sketched right after this list.)
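
For anyone who wants to script the same idea instead of clicking through Photoshop, here is a minimal OpenCV sketch of an analogous pipeline. It is not the tutorial's actual method: the file names `source.jpg` and `target.jpg`, the lasso polygon coordinates, and the 6-pixel contraction are all made-up assumptions, the two photos are assumed to be the same size and roughly aligned, and `cv2.seamlessClone` only stands in loosely for Auto-Blend Layers.

```python
import cv2
import numpy as np

# Hypothetical inputs: source.jpg supplies the face, target.jpg is the photo that keeps its body.
source = cv2.imread("source.jpg")
target = cv2.imread("target.jpg")

# Step 2 analogue: the "lasso selection" as a polygon around the face (coordinates are made up).
lasso_polygon = np.array([[320, 180], [420, 170], [470, 260],
                          [440, 380], [350, 400], [300, 300]], np.int32)
face_mask = np.zeros(source.shape[:2], np.uint8)
cv2.fillPoly(face_mask, [lasso_polygon], 255)

# Step 4 analogue: contract the selection by a few pixels (here 6) via erosion.
shrink_px = 6
kernel = np.ones((2 * shrink_px + 1, 2 * shrink_px + 1), np.uint8)
shrunk_mask = cv2.erode(face_mask, kernel)

# Step 3 + 5 analogue: work on a copy of the target and clear the contracted face region,
# leaving the original target untouched (the "background layer" stays intact).
target_copy = target.copy()
target_copy[shrunk_mask > 0] = 0

# Step 6 analogue: drop the source face into the hole, then let seamless cloning even out
# tones and colors across the seam, loosely comparable to "Seamless Tones and Colors".
composite = target_copy.copy()
composite[shrunk_mask > 0] = source[shrunk_mask > 0]
x, y, w, h = cv2.boundingRect(shrunk_mask)
center = (x + w // 2, y + h // 2)
result = cv2.seamlessClone(composite, target, shrunk_mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("swapped.jpg", result)
```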

Details to Watch

  1. Always remember to duplicate the background layer with Ctrl+J (the swap does not work without the copy), and never delete the original background layer, otherwise the blended result comes out blank.
  2. Only clear the selection after contracting it by a few pixels (skipping the contraction also gives a blank result after blending); the clearing step itself is also mandatory, or the composite fails. (A small sketch after this list illustrates the overlap that the contraction creates.)
  3. The background layer stays intact; what you clear is the duplicated Layer 1.
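
To make the "contract before clearing" point concrete, here is a tiny sketch that uses OpenCV erosion as a stand-in for Select > Modify > Contract; the circular selection and the 6-pixel amount are made up. It only visualizes the ring of overlap that contraction leaves between the face layer and the cleared duplicate, which is what the blend appears to rely on.

```python
import cv2
import numpy as np


def contract_selection(mask, px):
    # Shrink a binary selection mask by `px` pixels, like Select > Modify > Contract
    kernel = np.ones((2 * px + 1, 2 * px + 1), np.uint8)
    return cv2.erode(mask, kernel)


# Hypothetical face selection: 255 inside the "lasso", 0 outside
face_mask = np.zeros((600, 800), np.uint8)
cv2.circle(face_mask, (400, 300), 120, 255, -1)

hole = contract_selection(face_mask, 6)       # what gets cleared on the duplicated layer
overlap = cv2.subtract(face_mask, hole)       # ring where the face layer still covers the duplicate

# With contraction there is a ring of shared pixels for the blend to merge;
# without it (px = 0) the ring is empty, matching the blank-result failure mode.
print("overlap pixels with contraction:", cv2.countNonZero(overlap))
print("overlap pixels without contraction:",
      cv2.countNonZero(cv2.subtract(face_mask, contract_selection(face_mask, 0))))
```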

Face Swapping in Computer Vision

Face swapping is also an important application area of computer vision, where it mainly relies on deep learning models to detect and match facial landmark points and to fuse the two images. Below is an example face-swap script based on Python and the Dlib library:

```python
import cv2
import dlib
import numpy as np


def get_landmark_points(img_gray, face, predictor):
    # Return the 68 Dlib landmarks of one detected face as an (68, 2) int32 array
    landmarks = predictor(img_gray, face)
    return np.array([(landmarks.part(n).x, landmarks.part(n).y) for n in range(68)], np.int32)


detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# face1.jpg supplies the face, face2.jpg is the photo that receives it
img = cv2.imread("face1.jpg")
img2 = cv2.imread("face2.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img2_gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Detect one face per image (assumes each photo contains at least one face)
points = get_landmark_points(img_gray, detector(img_gray)[0], predictor)
points2 = get_landmark_points(img2_gray, detector(img2_gray)[0], predictor)
convexhull = cv2.convexHull(points)
convexhull2 = cv2.convexHull(points2)

# Delaunay triangulation of the source face; remember the landmark indices of each
# triangle so the same triangles can be located on the destination face
rect = cv2.boundingRect(convexhull)
subdiv = cv2.Subdiv2D(rect)
subdiv.insert([(int(x), int(y)) for x, y in points])
triangles = np.array(subdiv.getTriangleList(), np.int32)

indexes_triangles = []
for t in triangles:
    idx = []
    for vertex in (t[0:2], t[2:4], t[4:6]):
        match = np.where((points == vertex).all(axis=1))[0]
        if len(match) == 0:
            break
        idx.append(int(match[0]))
    if len(idx) == 3:
        indexes_triangles.append(idx)

# Warp every source triangle onto the corresponding destination triangle
height, width, channels = img2.shape
img2_new_face = np.zeros((height, width, channels), np.uint8)

for tri in indexes_triangles:
    tr1_pts = points[tri]
    tr2_pts = points2[tri]

    (x1, y1, w1, h1) = cv2.boundingRect(tr1_pts)
    (x2, y2, w2, h2) = cv2.boundingRect(tr2_pts)

    cropped_tr1 = img[y1:y1 + h1, x1:x1 + w1]
    pts1 = np.float32(tr1_pts - [x1, y1])   # triangle vertices relative to their own crop
    pts2 = np.float32(tr2_pts - [x2, y2])

    # Affine-warp the source patch into the destination triangle's bounding box
    matrix = cv2.getAffineTransform(pts1, pts2)
    warped = cv2.warpAffine(cropped_tr1, matrix, (w2, h2),
                            flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)

    # Keep only the pixels inside the destination triangle
    tri_mask = np.zeros((h2, w2), np.uint8)
    cv2.fillConvexPoly(tri_mask, np.int32(pts2), 255)
    warped = cv2.bitwise_and(warped, warped, mask=tri_mask)

    # Paste the triangle into the reconstructed face, skipping pixels already filled
    dst_area = img2_new_face[y2:y2 + h2, x2:x2 + w2]
    dst_gray = cv2.cvtColor(dst_area, cv2.COLOR_BGR2GRAY)
    _, empty_mask = cv2.threshold(dst_gray, 1, 255, cv2.THRESH_BINARY_INV)
    warped = cv2.bitwise_and(warped, warped, mask=empty_mask)
    img2_new_face[y2:y2 + h2, x2:x2 + w2] = cv2.add(dst_area, warped)

# Cut the original face out of the destination photo and drop the reconstructed face in
img2_face_mask = np.zeros_like(img2_gray)
cv2.fillConvexPoly(img2_face_mask, convexhull2, 255)
img2_noface = cv2.bitwise_and(img2, img2, mask=cv2.bitwise_not(img2_face_mask))
combined = cv2.add(img2_noface, img2_new_face)

# Seamless cloning evens out tones and colors across the seam, similar in spirit to
# Photoshop's Auto-Blend Layers
(x, y, w, h) = cv2.boundingRect(convexhull2)
center = (x + w // 2, y + h // 2)
result = cv2.seamlessClone(combined, img2, img2_face_mask, center, cv2.NORMAL_CLONE)

cv2.imshow("Result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

This code shows how Delaunay triangulation can be used to transplant the face from one image onto another[^2].