The SMPL Model

The official website offers two Python releases of SMPL: SMPL_python_v.1.0.0 and SMPL_python_v.1.1.0. The difference: v1.0.0 is incomplete and only provides models with 10 shape PCA coefficients, while v1.1.0 provides models for three genders with 300 shape PCA coefficients.

Taking SMPL_python_v1.1.0 as an example, the package contains three models plus the basic scripts for operating on them.
The three models are male, female, and neutral pkl files. Using the neutral model as an example, let us look at its data structure.

import pickle
with open(model_path, 'rb') as f:
	smpl = pickle.load(f, encoding='latin1')

# prior joint regressor
'J_regressor_prior': [24, 6890], scipy.sparse.csc.csc_matrix
# faces of the mesh (triangle vertex indices)
'f': [13776, 3], numpy.ndarray
# regressor matrix used to compute the 24 3D joints from the vertex positions
'J_regressor': [24, 6890], scipy.sparse.csc.csc_matrix
# indices of the parent of each joint (kinematic tree)
'kintree_table': [2, 24], numpy.ndarray
# joint locations of the template model
'J': [24, 3], numpy.ndarray
# prior skinning weights
'weights_prior': [6890, 24], numpy.ndarray
# linear blend skinning weights: how much each joint's rotation matrix affects each vertex
'weights': [6890, 24], numpy.ndarray
# pose blend shape basis (207 = 23 joints x 9 rotation-matrix entries)
'posedirs': [6890, 3, 207], numpy.ndarray
'bs_style': 'lbs'
# vertices of the template model
'v_template': [6890, 3], numpy.ndarray
# tensor of PCA shape displacements (shape blend shape basis)
'shapedirs': [6890, 3, 300], chumpy.ch.Ch
'bs_type': 'lrotmin'
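kintree_table encodes the kinematic tree: row 0 holds each joint's parent index and row 1 the joint index itself, with the root's parent stored as uint32 -1 (i.e. 4294967295). A minimal sketch of extracting a signed parents array, using a hypothetical 4-joint tree instead of the real [2, 24] table:

```python
import numpy as np

# Hypothetical 4-joint kinematic tree (the real SMPL table is [2, 24]).
# Row 0: parent joint index; row 1: joint index.
# The root's parent is stored as uint32 -1, i.e. 4294967295.
kintree_table = np.array([[4294967295, 0, 0, 1],
                          [0,          1, 2, 3]], dtype=np.uint32)

# Cast to a signed type and mark the root's parent as -1.
parents = kintree_table[0].astype(np.int64)
parents[0] = -1
print(parents.tolist())  # [-1, 0, 0, 1]
```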
def forward_shape(self, betas):
	v_shaped = self.v_template + blend_shapes(betas, self.shapedirs)
	return SMPLOutput(vertices=v_shaped, betas=betas, v_shaped=v_shaped)

def forward(self, betas, body_pose, global_orient, transl,
		return_verts=True, return_full_pose=False, pose2rot=True, **kwargs):
	full_pose = torch.cat([global_orient, body_pose], dim=1)

	vertices, joints = lbs(betas, full_pose, self.v_template, self.shapedirs,
			self.posedirs, self.J_regressor, self.parents, self.lbs_weights,
			pose2rot=pose2rot)

	return SMPLOutput(vertices=vertices, global_orient=global_orient, body_pose=body_pose,
		joints=joints, betas=betas, full_pose=full_pose)
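blend_shapes is not shown in the snippet above; in smplx it is a single einsum that contracts the beta coefficients against the displacement basis. A numpy sketch with made-up toy shapes (2 vertices and 2 betas instead of 6890 and 300):

```python
import numpy as np

def blend_shapes(betas, shape_disps):
    """betas: [B, num_betas]; shape_disps: [V, 3, num_betas].
    Returns per-vertex displacements [B, V, 3]."""
    return np.einsum('bl,mkl->bmk', betas, shape_disps)

# Toy basis: 2 vertices, 2 shape coefficients.
shape_disps = np.arange(12, dtype=np.float64).reshape(2, 3, 2)
betas = np.array([[1.0, 0.0]])  # selects the first basis vector
offsets = blend_shapes(betas, shape_disps)
print(offsets.shape)  # (1, 2, 3)
```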

The core logic lives in lbs.py:

# add shape contribution
v_shaped = v_template + blend_shapes(betas, shapedirs)
# Get the joints
J = vertices2joints(J_regressor, v_shaped)
# add pose blend shapes
ident = torch.eye(3, dtype=dtype, device=device)
pose_feature = pose[:, 1:].view(batch_size, -1, 3, 3) - ident
# [N, P] * [P, V*3] -> [N, V, 3]
pose_offsets = torch.matmul(pose_feature.view(batch_size, -1),
                            posedirs).view(batch_size, -1, 3)
v_posed = pose_offsets + v_shaped
# Get the global joint location
rot_mats = pose.view(batch_size, -1, 3, 3)
J_transformed, A = batch_rigid_transform(rot_mats, J, parents, dtype=dtype)
# Do skinning
W = lbs_weights.unsqueeze(dim=0).expand([batch_size, -1, -1])
T = torch.matmul(W, A.view(batch_size, num_joints, 16)) \
        .view(batch_size, -1, 4, 4)
homogen_coord = torch.ones([batch_size, v_posed.shape[1], 1],
                            dtype=dtype, device=device)
v_posed_homo = torch.cat([v_posed, homogen_coord], dim=2)
v_homo = torch.matmul(T, torch.unsqueeze(v_posed_homo, dim=-1))

verts = v_homo[:, :, :3, 0]

The order of operations is: shape blend shapes + pose blend shapes, then skinning. The function returns the 6890×3 mesh vertices and the 3D coordinates of the joints.
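vertices2joints used above is likewise a single contraction: each joint is a weighted sum of all vertices, with the weights given by one row of J_regressor. A numpy sketch with a toy 3-vertex mesh and 2 joints:

```python
import numpy as np

def vertices2joints(J_regressor, vertices):
    """J_regressor: [J, V]; vertices: [B, V, 3] -> joints [B, J, 3]."""
    return np.einsum('bik,ji->bjk', vertices, J_regressor)

# Toy example: joint 0 is the midpoint of vertices 0 and 1,
# joint 1 coincides with vertex 2.
J_regressor = np.array([[0.5, 0.5, 0.0],
                        [0.0, 0.0, 1.0]])
vertices = np.array([[[0.0, 0.0, 0.0],
                      [2.0, 0.0, 0.0],
                      [1.0, 1.0, 1.0]]])
joints = vertices2joints(J_regressor, vertices)
print(joints[0].tolist())  # [[1.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
```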
Entries of type scipy.sparse.csc.csc_matrix in the structure can be converted to numpy.ndarray during processing with the following code:

import numpy as np

def to_np(array, dtype=np.float32):
	if 'scipy.sparse' in str(type(array)):
		array = array.todense()
	return np.array(array, dtype=dtype)
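A quick standalone check of the helper (repeated here so the snippet runs on its own), converting a sparse identity matrix to a dense float32 array:

```python
import numpy as np
from scipy.sparse import csc_matrix

def to_np(array, dtype=np.float32):
    if 'scipy.sparse' in str(type(array)):
        array = array.todense()
    return np.array(array, dtype=dtype)

sparse = csc_matrix(np.eye(3))
dense = to_np(sparse)
print(type(dense).__name__, dense.dtype)  # ndarray float32
```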

SMPL and SMPL-H share the same mesh topology.

Python 3 cannot handle the chumpy.ch.Ch format; for Python 3 compatibility, data in that format needs to be converted to numpy.ndarray:

output_dict = {}
for key, data in body_data.items():
	if 'chumpy' in str(type(data)):
		output_dict[key] = np.array(data)
	else:
		output_dict[key] = data

with open(out_path, 'wb') as f:
	pickle.dump(output_dict, f)

Some projects use a J_regressor_extra.npy to regress additional joints:

J_regressor_extra: [9, 6890], numpy.ndarray
extra_joints = vertices2joints(J_regressor_extra, smpl_output.vertices)
joints = torch.cat([smpl_output.joints, extra_joints], dim=1)
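The concatenation is just shape bookkeeping: the 24 regular SMPL joints and the 9 extra joints are stacked along the joint dimension. A numpy sketch with zero-filled placeholder tensors:

```python
import numpy as np

# Hypothetical placeholder tensors: batch of 1, 24 SMPL joints
# plus 9 extra regressed joints, each a 3D coordinate.
smpl_joints = np.zeros((1, 24, 3))
extra_joints = np.zeros((1, 9, 3))

# Stack along the joint dimension (dim=1 in the torch.cat above).
joints = np.concatenate([smpl_joints, extra_joints], axis=1)
print(joints.shape)  # (1, 33, 3)
```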