Mapping an image onto the 3D face mesh (projecting 3D mesh vertices to 2D)

I am using the iPhone X and ARKit's face tracking to capture the user's face. The goal is to texture the face mesh with the user's image.

I'm only looking at a single frame (an ARFrame) from the AR session.

From ARFaceGeometry, I have a set of vertices that describe the face.

I make a jpeg representation of the current frame's capturedImage.

I then want to find the texture coordinates that map the created jpeg onto the mesh vertices. I want to:

1. map the vertices from model space to world space;

2. map the vertices from world space to camera space;

3. divide by image dimensions to get pixel coordinates for the texture.

let geometry: ARFaceGeometry = contentUpdater.faceGeometry!
let camera = session.currentFrame!.camera
let faceNode: SCNNode = contentUpdater.faceNode
let anchorTransform = faceNode.simdTransform

for index in 0..<geometry.vertices.count {
    let vertex = geometry.vertices[index]

    // Step 1: Model space to world space, using the anchor's transform
    let vertex4 = simd_float4(vertex.x, vertex.y, vertex.z, 1.0)
    let worldSpace = anchorTransform * vertex4

    // Step 2: World space to image space via the camera projection
    let world3 = simd_float3(worldSpace.x, worldSpace.y, worldSpace.z)
    let projectedPt = camera.projectPoint(world3,
                                          orientation: .landscapeRight,
                                          viewportSize: camera.imageResolution)

    // Step 3: Divide by image width/height to get normalized texture coordinates
    let vtx = projectedPt.x / camera.imageResolution.width
    let vty = projectedPt.y / camera.imageResolution.height
    textureVs += "vt \(vtx) \(vty)\n"
}

This is not working; instead it gets me a very funky-looking face! Where am I going wrong?

Solution

Texturing the face mesh with the user's image is now demonstrated in the Face-Based sample code published by Apple (see the section "Map Camera Video onto 3D Face Geometry").

You can map the camera video onto the 3D face geometry using the following SceneKit geometry shader modifier.

// Transform the vertex into the camera (eye) coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;

// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;

// XY in clip space is [-1,1] x [-1,1], so adjust to UV texture coordinates: [0,1] x [0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;

// Apply ARKit's display transform (device orientation * front-facing camera flip).
// `displayTransform` is a custom uniform supplied from the app code.
float4 transformedVertex = displayTransform * vertexImageSpace;

// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
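For the shader to work, the app has to install it on the face geometry, use the camera image as the diffuse texture, and bind the `displayTransform` uniform. A minimal sketch of that setup, assuming a function name, a `shaderSource` string holding the modifier above, and portrait orientation (all placeholders, not part of Apple's published API; error handling omitted):

```swift
import ARKit
import SceneKit

// Sketch: wire up the geometry shader modifier and its `displayTransform`
// uniform for an ARSCNFaceGeometry rendered in an ARSCNView.
func setUpVideoTexture(faceGeometry: ARSCNFaceGeometry,
                       frame: ARFrame,
                       sceneView: ARSCNView,
                       shaderSource: String) {
    // Use the live camera image (ARSCNView already sets it as the scene
    // background) as the diffuse texture; unlit so it isn't shaded.
    let material = faceGeometry.firstMaterial!
    material.diffuse.contents = sceneView.scene.background.contents
    material.lightingModel = .constant

    // Install the shader modifier at the geometry entry point.
    faceGeometry.shaderModifiers = [.geometry: shaderSource]

    // displayTransform(for:viewportSize:) maps normalized image coordinates
    // to view coordinates; inverted here on the assumption that the shader
    // needs the opposite direction (view space back into the image). If the
    // texture comes out mirrored or rotated, drop the inversion.
    let affine = frame.displayTransform(for: .portrait,
                                        viewportSize: sceneView.bounds.size).inverted()

    // Lift the 2D affine transform into the 4x4 matrix the shader expects.
    var m = SCNMatrix4Identity
    m.m11 = Float(affine.a);  m.m12 = Float(affine.b)
    m.m21 = Float(affine.c);  m.m22 = Float(affine.d)
    m.m41 = Float(affine.tx); m.m42 = Float(affine.ty)
    faceGeometry.setValue(NSValue(scnMatrix4: m), forKey: "displayTransform")
}
```

Since the display transform depends on the current frame and interface orientation, this would typically be refreshed from the renderer's per-frame update callback rather than called once.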
