I mainly need its adaptation part.
1. For adaptation, go to the adaptation directory. Put the Webcaricature dataset in "CariFaceParsing/adaptation/datasets/face_webcaricature", link "trainA" and "val" to "photo", and link "trainB", "trainC", "trainD", "trainE", "trainF", "trainG", "trainH", "trainI" to "caricature". Then download the provided "landmark_webcaricature" and put it in "CariFaceParsing/adaptation/datasets/".
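The linking described above can be sketched as shell commands (a minimal sketch; it assumes you run it inside CariFaceParsing/adaptation/datasets and that the "photo" and "caricature" folders already hold the split dataset):

```shell
# Create the dataset folder and the symlinks from step 1.
# "photo" and "caricature" are the two halves of the Webcaricature split.
mkdir -p face_webcaricature/photo face_webcaricature/caricature
cd face_webcaricature
ln -sfn photo trainA
ln -sfn photo val
for d in trainB trainC trainD trainE trainF trainG trainH trainI; do
    ln -sfn caricature "$d"
done
```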
Because the downloaded landmark_webcaricature is named differently from the original dataset, I guess the idea is to copy the images over according to the files in each landmark_webcaricature folder, building a classified dataset... I don't know whether there is a more convenient way; in any case, I wrote a copy script.
My trainA and trainB are left over from running MUNIT earlier, when I split Webcaricature into separate photo and caricature folders, so I just reused them here.
Both scripts are included below.
# split webcaricature dataset into A~I according to ldmk dataset
import os
import shutil

ldmkPath = "D:/hx/dataset/CariFaceParsing_data/CariFaceParsing_data/adaptation/datasets/landmark_webcaricature/trainI"
imgPath_C = "D:/hx/dataset/dataset/p2c/trainA"  # caricatures (from my earlier MUNIT split)
imgPath_P = "D:/hx/dataset/dataset/p2c/trainB"  # photos (not needed for trainI)
dirPath = "D:/hx/codes/CariFaceParsing-master/CariFaceParsing-master/Adaptation/datasets/face_webcaricature/trainI"

def copyFiles():
    if not os.path.exists(dirPath):
        os.makedirs(dirPath)
    for root, dirs, files in os.walk(ldmkPath):
        for eachfile in files:
            # Map the landmark file name to the corresponding image name
            # (the person part differs between the two datasets: spaces vs underscores).
            a = eachfile.split("_")
            b = a[0].replace(' ', '_')
            c = a[-1].replace('npy', 'jpg')
            filename = b + "_" + c
            file1 = a[0] + "_" + c
            for imgroot, imgdirs, imgfiles in os.walk(imgPath_C):
                for img in imgfiles:
                    if img == filename:
                        path0 = os.path.join(imgPath_C, filename)
                        path1 = os.path.join(dirPath, file1)
                        shutil.copy(path0, path1)
                        print(eachfile + " copy succeeded")
                        break

if __name__ == '__main__':
    copyFiles()
import os
import shutil

def copyFiles(srcPath, pathA, pathB):
    print(srcPath)
    if not os.path.exists(srcPath):
        print("src path not exist!")
        return
    if not os.path.exists(pathA):
        os.makedirs(pathA)
    if not os.path.exists(pathB):
        os.makedirs(pathB)
    for root, dirs, files in os.walk(srcPath):
        for eachfile in files:
            # Webcaricature names caricatures "C....jpg" and photos "P....jpg";
            # prefix each file with its person folder so names stay unique.
            a = root.split(os.sep)
            img = a[-1] + "_" + eachfile
            if eachfile[0] == 'C':
                path1 = os.path.join(pathA, img)
            else:
                path1 = os.path.join(pathB, img)
            shutil.copy(os.path.join(root, eachfile), path1)
            print(eachfile + " copy succeeded")

if __name__ == '__main__':
    copyFiles('D:/hx/dataset/webcaricature_aligned_256',
              'D:/hx/dataset/dataset/p2c/trainA/',
              'D:/hx/dataset/dataset/p2c/trainB/')
2. Then put the Helen dataset in "CariFaceParsing/adaptation/datasets/helen"; it should have three subfolders: "images", "labels", "landmark".
For this step I couldn't find the labels via the author's link... maybe the dataset link was updated... I searched for a long time TAT. I'll upload it here later when I have time, because downloading from that link is far too slow...
Actually, if like me you only need the adaptation part and don't plan to implement caricature segmentation, you can do without these labels: they are just the segmentation maps, and I never ended up using them. I simply filled "labels" with the same images as "images", which then produced an error about 11 channels. The cause is that in the real labels every image has its own folder containing 11 segmentation maps, and these need to be combined into a single 11-channel image; this is also mentioned in the issues. I didn't merge them into one image, I just made small changes to the code.
As far as I remember, I changed three places...
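If you would rather merge the 11 per-class maps into one 11-channel label instead of patching the code, a minimal numpy sketch of the idea looks like this (the function name, the channel-first shape, and the 0–1 scaling are my assumptions, not the repo's actual loader):

```python
import numpy as np

def merge_label_maps(per_class_maps):
    """Stack 11 single-channel (H, W) segmentation maps into one
    (11, H, W) float label tensor. Assumed layout: one grayscale
    map per class, as found in each per-image labels folder."""
    stacked = np.stack(per_class_maps, axis=0)
    return stacked.astype(np.float32) / 255.0

# Synthetic 4x4 maps standing in for the 11 Helen label images:
maps = [np.full((4, 4), i * 20, dtype=np.uint8) for i in range(11)]
label = merge_label_maps(maps)
print(label.shape)  # (11, 4, 4)
```

In practice you would load the 11 images of each folder (e.g. with PIL) in a fixed class order before stacking, so the channel index always means the same facial region.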
3. Put the adapted results in "CariFaceParsing/adaptation/datasets/helen_shape_adaptation"; it should have "images" and "labels". Put the provided "train_style" and "test_style" here and link "train_content" and "test_content" to "images".
For this step... at first I created nothing and ran the test code directly, then created folders according to the error messages. I later found that test_style holds the caricature images used as style references (the default is 8 of them)... and the results are generated in the test_content folder.
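The step-3 layout I ended up with can be sketched like this (a sketch following the README wording above; the relative paths, and whether the content folders should really be symlinks rather than plain folders, are my guesses from the errors I hit):

```shell
# Create the helen_shape_adaptation layout from step 3.
mkdir -p helen_shape_adaptation/images helen_shape_adaptation/labels
mkdir -p helen_shape_adaptation/train_style helen_shape_adaptation/test_style
cd helen_shape_adaptation
ln -sfn images train_content
ln -sfn images test_content
```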
I haven't trained yet; I'll update this once training is done.