
deep-photo-styletransfer

Code and data for paper "Deep Photo Style Transfer"

Disclaimer

This software is published for academic and non-commercial use only.

Setup

This code is based on Torch. It has been tested on Ubuntu 14.04 LTS.

Dependencies:

CUDA backend:

Download VGG-19:

sh models/download_models.sh

Compile cuda_utils.cu (Adjust PREFIX and NVCC_PREFIX in makefile for your machine):

make clean && make

Usage

Quick start

To generate all results (in examples/) using the provided scripts, simply run

run('gen_laplacian/gen_laplacian.m')

in Matlab or Octave and then

python gen_all.py

in Python. The final output will be in examples/final_results/.

Basic usage

Given input and style images with semantic segmentation masks, place them in examples/ using the following filename conventions: the content image as examples/input/in.png, the style image as examples/style/tar.png, and their masks as examples/segmentation/in.png and examples/segmentation/tar.png;
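The expected layout can be sanity-checked with a small Python helper before running the pipeline (the example_paths and missing_files names are ours, not part of the repo):

```python
import os

# Hypothetical helper (not part of the repo): build the expected file
# paths for one input/style pair under the naming scheme described above.
def example_paths(root="examples"):
    return {
        "input":     os.path.join(root, "input", "in.png"),
        "style":     os.path.join(root, "style", "tar.png"),
        "input_seg": os.path.join(root, "segmentation", "in.png"),
        "style_seg": os.path.join(root, "segmentation", "tar.png"),
    }

# Report which of the four expected files are missing.
def missing_files(root="examples"):
    return [name for name, path in example_paths(root).items()
            if not os.path.isfile(path)]
```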

Compute the matting Laplacian matrix using gen_laplacian/gen_laplacian.m in Matlab. The output matrix will have the following filename form: gen_laplacian/Input_Laplacian_3x3_1e-7_CSR.mat;
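For intuition, the matting Laplacian of Levin et al. can be sketched densely in Python/NumPy. This is our own toy version for illustration only: the repo's Matlab code builds the same matrix in sparse CSR form and is what you should actually use; a dense build is only practical for tiny images.

```python
import numpy as np

def matting_laplacian(img, eps=1e-7, win_rad=1):
    """Dense matting Laplacian for a float image of shape (H, W, 3).

    For each (2*win_rad+1)^2 window w_k with mean mu_k and covariance cov_k,
    accumulate  delta_ij - (1/|w_k|) * (1 + d_i^T (cov_k + eps/|w_k| I)^-1 d_j)
    where d_i = I_i - mu_k, following Levin et al.'s closed-form matting.
    """
    h, w, _ = img.shape
    n = h * w
    win_size = (2 * win_rad + 1) ** 2
    L = np.zeros((n, n))
    for y in range(win_rad, h - win_rad):
        for x in range(win_rad, w - win_rad):
            ys, xs = np.mgrid[y - win_rad:y + win_rad + 1,
                              x - win_rad:x + win_rad + 1]
            idx = (ys * w + xs).ravel()          # flat indices of window pixels
            win = img[y - win_rad:y + win_rad + 1,
                      x - win_rad:x + win_rad + 1].reshape(-1, 3)
            mu = win.mean(axis=0)
            d = win - mu
            cov = d.T @ d / win_size
            inv = np.linalg.inv(cov + (eps / win_size) * np.eye(3))
            vals = (1.0 + d @ inv @ d.T) / win_size
            L[np.ix_(idx, idx)] += np.eye(win_size) - vals
    return L
```

As expected for a graph Laplacian, the result is symmetric and its rows sum to zero.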

Note: Please make sure that the content image resolution is the same for the matting Laplacian computation in Matlab and the style transfer in Torch; otherwise the result will be incorrect.
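One way to verify this, assuming the images are PNGs, is to read the dimensions straight from each file's IHDR chunk (pure stdlib; png_size and same_resolution are our own helpers, not part of the repo):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"  # fixed 8-byte PNG file signature

def png_size(data):
    """Return (width, height) parsed from a PNG byte stream's IHDR chunk."""
    if data[:8] != PNG_SIG or data[12:16] != b"IHDR":
        raise ValueError("not a valid PNG stream")
    # width and height are big-endian uint32 at offsets 16 and 20
    return struct.unpack(">II", data[16:24])

def same_resolution(path_a, path_b):
    """True if two PNG files have identical width and height."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        return png_size(fa.read(24)) == png_size(fb.read(24))
```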

Run the following script to generate segmented intermediate result:

th neuralstyle_seg.lua -content_image <input> -style_image <style>

Run the following script to generate final result:

th deepmatting_seg.lua -content_image <input> -style_image <style>

You can pass -backend cudnn and -cudnn_autotune to both Lua scripts (steps 3 and 4) to potentially improve speed and memory usage. libcudnn.so must be in your LD_LIBRARY_PATH. This requires cudnn.torch.

Image segmentation

Note: In the main paper we generated all comparison results using an automatic scene segmentation algorithm modified from DilatedNet. Manual segmentation enables more diverse tasks, so we also provide the masks in examples/segmentation/.

The mask colors we used (you could add more colors in the ExtractMask function in the two *.lua files):

Color variable   RGB Value     Hex Value
blue             0 0 255       0000ff
green            0 255 0       00ff00
black            0 0 0         000000
white            255 255 255   ffffff
red              255 0 0       ff0000
yellow           255 255 0     ffff00
grey             128 128 128   808080
lightblue        0 255 255     00ffff
purple           255 0 255     ff00ff
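The color-keyed mask extraction amounts to an exact RGB match per pixel. A minimal Python analogue of the ExtractMask step (our own sketch using plain lists, no Torch) might look like:

```python
# Minimal analogue of ExtractMask: turn a segmentation image into a
# binary mask selecting the pixels of one exact RGB color.
def extract_mask(pixels, color):
    """pixels: rows of (R, G, B) tuples; returns rows of 0/1 flags."""
    return [[1 if px == color else 0 for px in row] for row in pixels]

# Colors from the table above.
BLUE, RED = (0, 0, 255), (255, 0, 0)
```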

Here are some automatic and manual tools for creating a segmentation mask for a photo image:

Automatic:

Manual:

Examples

Here are some results from our algorithm (from left to right: input, style, and our output):

in3.png / tar3.png / refine_3.png
in4.png / tar4.png / refine_4.png
in13.png / tar13.png / refine_13.png
in9.png / tar9.png / refine_9.png
in20.png / tar20.png / refine_20.png
in1.png / tar1.png / refine_1.png
in39.png / tar39.png / refine_39.png
in57.png / tar57.png / refine_57.png
in47.png / tar47.png / refine_47.png
in58.png / tar58.png / refine_58.png
in51.png / tar51.png / refine_51.png
in7.png / tar7.png / refine_7.png
in23.png / tar23.png / best23_t_1000.png
in16.png / tar16.png / refine_16.png
in30.png / tar30.png / refine_30.png
in2.png / tar2.png / best2_t_1000.png
in11.png / tar11.png / refine_11.png

Acknowledgement

Our Torch implementation is based on Justin Johnson's code.

We use Anat Levin's Matlab code to compute the matting Laplacian matrix.

Citation

If you find this work useful for your research, please cite:

@article{luan2017deep,
  title={Deep Photo Style Transfer},
  author={Luan, Fujun and Paris, Sylvain and Shechtman, Eli and Bala, Kavita},
  journal={arXiv preprint arXiv:1703.07511},
  year={2017}
}

Contact

Feel free to contact me if you have any questions (Fujun Luan, fl356@cornell.edu).
