Everybody Dance Now — OpenPose + GAN + pose2vid (PyTorch implementation): human pose transfer, motion transfer, and person generation

Results

A person's freestyle dance moves.

The teacher's motion is transferred to the student:

the student is generated following the teacher's motion.


More example results


The model was trained for only one hour; with more time to train the GAN, the generated person would look better.

I will clean up the code and share it when I have time.

Dance example

We train and evaluate on Ubuntu 16.04; if you don't have a Linux environment, set nThreads = 0 in EverybodyDanceNow_reproduce_pytorch/src/config/train_opt.py.


1. Run make_source.py

Put the source video in ./data/source/ and run make_source.py; the frames, pose-label images, and head positions are saved under ./data/source/:


images — the frames stored as images

test_head_ori — head positions (used by the face GAN)

'''make label images for pix2pix'''

test_img

test_label_ori — pose-skeleton (label) positions


2. Run make_target.py

Put the target video mv.mp4 in ./data/target/ and run make_target.py; pose.npy will be saved in ./data/target/, containing the coordinates of the faces (used in step 6).


images — the frames stored as images


'''make label images for pix2pix'''

head_img — face crops (facial features)


train_img — the keypoints and the full frames


train_label — pose-skeleton label images


pose.npy — contains the face locations
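The saved face coordinates can be inspected with a few lines of NumPy. The per-frame layout (bounding boxes vs. keypoints) is an assumption of this sketch, so check one entry before relying on it:

```python
import numpy as np

def load_face_coords(path='./data/target/pose.npy'):
    """Load the per-frame face coordinates saved by make_target.py.

    The per-frame layout (boxes vs. keypoints) is an assumption here;
    print one entry to confirm the actual format.
    """
    return np.load(path, allow_pickle=True)
```

Printing `load_face_coords()[0]` shows the format used for the first frame.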


3. Run train_pose2vid.py

Run train_pose2vid.py and check the loss and the full training process in ./checkpoints/.

If you interrupt training and want to resume from the last run, set load_pretrain = './checkpoints/target' in ./src/config/train_opt.py.
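The two options mentioned in this walkthrough live in src/config/train_opt.py. A hypothetical excerpt — only nThreads and load_pretrain come from the text above; the other option names follow the pix2pixHD convention and may differ in the actual file:

```python
# Hypothetical excerpt of src/config/train_opt.py.
nThreads = 0                            # 0 if you are not on Linux
load_pretrain = './checkpoints/target'  # resume from the last checkpoint
checkpoints_dir = './checkpoints'       # where loss logs and weights are written
name = 'target'                         # experiment name (subdir of checkpoints_dir)
```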


4. Run normalization.py

Run normalization.py to rescale the label images; you can use two sample images, one from ./data/target/train/train_label/ and one from ./data/source/test_label_ori/, to normalize between the two skeleton sizes.
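The idea behind this normalization can be sketched as a linear remapping of the source skeleton's vertical extent onto the target's. This is a simplified stand-in for what normalization.py computes from the two sample label images, not its actual code:

```python
import numpy as np

def normalize_skeleton(src_joints, src_top, src_bottom, tgt_top, tgt_bottom):
    """Linearly remap the y-coordinates of a source skeleton (N x 2
    array of (x, y)) so its vertical extent (src_top..src_bottom,
    e.g. head to ankles) matches the target person's extent."""
    scale = (tgt_bottom - tgt_top) / (src_bottom - src_top)
    out = np.asarray(src_joints, dtype=float).copy()
    out[:, 1] = tgt_top + (out[:, 1] - src_top) * scale
    return out
```

Without this step, a tall source person's skeleton drives a short target person (or vice versa) and the generated body looks stretched.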


5. Run transfer.py

Run transfer.py and get results in ./result

import math
import numpy as np
import cv2

# joint_to_limb_heatmap_relationship maps each limb index to the pair of
# joint indices it connects; it is defined in the pose-estimation repo.
def create_label(shape, joint_list, person_to_joint_assoc):
    label = np.zeros(shape, dtype=np.uint8)
    cord_list = []
    for limb_type in range(17):
        for person_joint_info in person_to_joint_assoc:
            joint_indices = person_joint_info[joint_to_limb_heatmap_relationship[limb_type]].astype(int)
            if -1 in joint_indices:
                continue  # a joint of this limb was not detected
            joint_coords = joint_list[joint_indices, :2]
            coords_center = tuple(np.round(np.mean(joint_coords, 0)).astype(int))
            cord_list.append(joint_coords[0])
            limb_dir = joint_coords[0, :] - joint_coords[1, :]
            limb_length = np.linalg.norm(limb_dir)
            angle = math.degrees(math.atan2(limb_dir[1], limb_dir[0]))
            # draw the limb as a filled ellipse, class id = limb index + 1
            polygon = cv2.ellipse2Poly(coords_center, (int(limb_length / 2), 4), int(angle), 0, 360, 1)
            cv2.fillConvexPoly(label, polygon, limb_type + 1)
    return label, cord_list


PyTorch pose estimation

https://github.com/tensorboy/pytorch_Realtime_Multi-Person_Pose_Estimation/tree/681d16fa6eac64d8828affa477af78dd358381d2

pix2pix

https://github.com/NVIDIA/pix2pixHD/tree/20687df85d30e6fff5aafb29b7981923da9fd02f


6. Run ./face_enhancer/prepare.py

Run ./face_enhancer/prepare.py and check the results in ./data/face/test_sync and ./data/face/test_real.
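Conceptually, this step pairs the same head region cropped from a real frame (test_real) and a synthesized frame (test_sync) so the enhancer trains on aligned crops. A minimal sketch, assuming a (x, y, size) box format for the stored head position — the repo's actual layout may differ:

```python
import numpy as np

def crop_face(frame, box):
    # box = (x, y, size); this format is an assumption of the sketch
    x, y, s = box
    return frame[y:y + s, x:x + s]

def make_training_pair(real_frame, fake_frame, box):
    """Return aligned (real, synthesized) face crops for the enhancer."""
    return crop_face(real_frame, box), crop_face(fake_frame, box)
```

Cropping both frames with the same box keeps the pair pixel-aligned, which is what lets the enhancer learn a residual rather than a full face.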


test_real


test_sync

7. Run ./face_enhancer/main.py

Enhance the face region.

Run ./face_enhancer/main.py to train the face enhancer, then run ./face_enhancer/enhance.py to get the results.


Training


Original image → before face enhancement → after face enhancement

Logs

This is a comparison of the original (left), the generated image before face enhancement (middle), and after enhancement (right). The face GAN learns the residual error between the real and the generated face regions.
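The residual formulation can be sketched in a few lines; `predict_residual` here is a stand-in for the trained face-enhancer network, not the repo's actual API:

```python
import numpy as np

def enhance_face(fake_face, predict_residual):
    """Add the network-predicted residual to the generated face crop
    and clip back to the valid intensity range [0, 1]."""
    residual = predict_residual(fake_face)
    return np.clip(fake_face + residual, 0.0, 1.0)
```

Predicting only the residual is an easier learning problem than regenerating the whole face, since most of the face is already correct.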


8. Run ./face_enhancer/enhance.py

The enhanced results are saved to disk.


9. Run make_gif.py

Run make_gif.py to turn the result images into a GIF.

pip install -U scikit-image
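A minimal sketch of gluing the result frames into a GIF, here written with Pillow (the actual make_gif.py uses scikit-image, hence the pip install above; the directory and file naming are assumptions):

```python
import os
from PIL import Image

def make_gif(frame_dir, out_path, duration_ms=40):
    """Combine all .png frames in frame_dir, in name order, into one GIF."""
    names = sorted(f for f in os.listdir(frame_dir) if f.endswith('.png'))
    frames = [Image.open(os.path.join(frame_dir, n)) for n in names]
    frames[0].save(out_path, save_all=True,
                   append_images=frames[1:], duration=duration_ms, loop=0)
```

Sorting by name relies on zero-padded frame numbers (e.g. 000.png, 001.png); otherwise 10.png would sort before 2.png.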


If the package still cannot be imported after installing, copy the files into your Python site-packages directory.

