First, two links:
GitHub project download
DAIN-APP (Windows) official download
For an introduction to DAIN, see this article:
"Make movies and anime silky smooth, 480 fps without a stutter — an SJTU PhD student open-sources the frame-interpolation tool DAIN"
My own research is not in video frame interpolation; I just wanted to try out the latest deep-learning interpolation algorithm. Configuring DAIN on Linux is quite involved and hard to reproduce, so here is the GitHub tutorial:
Part 1
Running DAIN on Linux
Installation
Download repository:
$ git clone https://github.com/baowenbo/DAIN.git
Before building the PyTorch extensions, make sure you have PyTorch >= 1.0.0:
$ python -c "import torch; print(torch.__version__)"
Generate our PyTorch extensions:
$ cd DAIN
$ cd my_package
$ ./build.sh
Generate the Correlation package required by PWCNet:
$ cd ../PWCNet/correlation_package_pytorch1_0
$ ./build.sh
Testing Pre-trained Models
Make model weights dir and Middlebury dataset dir:
$ cd ../..  # back to the DAIN repo root
$ mkdir model_weights
$ mkdir MiddleBurySet
Download pretrained models,
$ cd model_weights
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/best.pth
and Middlebury dataset:
$ cd ../MiddleBurySet
$ wget http://vision.middlebury.edu/flow/data/comp/zip/other-color-allframes.zip
$ unzip other-color-allframes.zip
$ wget http://vision.middlebury.edu/flow/data/comp/zip/other-gt-interp.zip
$ unzip other-gt-interp.zip
$ cd ..
Preinstallations:
$ cd PWCNet/correlation_package_pytorch1_0
$ sh build.sh
$ cd ../my_package
$ sh build.sh
$ cd ..
Now we are good to go:
$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py
The interpolated results are under MiddleBurySet/other-result-author/[random number]/, where the random number is used to distinguish different runs.
Downloading Results
Our DAIN model achieves state-of-the-art performance on UCF101, Vimeo90K, and Middlebury (eval and other). Download our interpolated results with:
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/UCF101_DAIN.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/Vimeo90K_interp_DAIN.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/Middlebury_eval_DAIN.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/Middlebury_other_DAIN.zip
Slow-motion Generation
Our model can also generate a slow-motion effect with a minor modification to the network architecture. Run the following command with time_step = 0.25 to generate a 4x slow-motion effect:
$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.25
or set time_step to 0.125 or 0.1 as follows:
$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.125
$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.1
to generate 8x and 10x slow-motion respectively. Or, if you would like 100x slow-motion just for a little fun:
$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.01
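A minimal sketch of how time_step relates to the slow-motion factor (the function names below are mine for illustration, not part of DAIN): for a given time_step, intermediate frames are synthesized at t = time_step, 2·time_step, … between each pair of input frames, so each input interval yields round(1 / time_step) output frames.

```python
# Sketch: how time_step maps to DAIN's slow-motion factor.

def slowmo_factor(time_step: float) -> int:
    """Number of output frames per input-frame interval."""
    return round(1 / time_step)

def intermediate_times(time_step: float) -> list:
    """Timestamps in (0, 1) of the synthesized frames between two inputs."""
    n = slowmo_factor(time_step)
    return [round(i * time_step, 6) for i in range(1, n)]

print(slowmo_factor(0.25))        # 4  -> 4x slow motion
print(intermediate_times(0.25))   # [0.25, 0.5, 0.75]
print(slowmo_factor(0.1))         # 10 -> 10x slow motion
```

This is why time_step 0.01 above gives 100x: one hundred output frames are produced for every original frame interval.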
You may also want to create GIF animations:
$ cd MiddleBurySet/other-result-author/[random number]/Beanbags
$ convert -delay 1 *.png -loop 0 Beanbags.gif  # 1x10ms delay per frame
Have fun and enjoy yourself!
Using DAIN-APP on Windows
Open the official page and pick a version to download. My GPU is an RTX 3070, so when I saw the updated build adapted for 30x0 cards, I chose it without hesitation:
It is a portable application: after extracting the archive, you can launch the executable directly:
The interface looks like this on launch (here I am interpolating a 25 FPS video up to 200 FPS):
One thing to note: you may need to tune the batch size to your particular video, since too large a value can exhaust GPU memory. My video was 640x512 at 25 fps and a bit over a minute long; batch sizes of 64 and 16 both ran out of VRAM, while 4 ran fine.
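That trial-and-error can be sketched as a simple halving search. Everything here is illustrative: `find_batch_size` and the `fits` predicate are my own names, and in practice `fits` would mean actually launching a DAIN-APP run at that batch size and seeing whether it survives.

```python
# Sketch of the batch-size search described above:
# start high and halve until a run no longer exhausts GPU memory.

def find_batch_size(start: int, fits) -> int:
    """Halve the batch size until fits(b) succeeds (bottoming out at 1)."""
    b = start
    while b > 1 and not fits(b):
        b //= 2
    return b

# Toy model of my run: only batch sizes <= 4 fit in VRAM for a 640x512 clip.
print(find_batch_size(64, lambda b: b <= 4))  # 4
```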
Using the software itself is straightforward, so I won't go into detail. I'll add more after seeing the interpolation results…
Part 2
The 25 fps → 200 fps interpolation is done; the 1.5-minute video took an entire afternoon.
Now I have fed the 200 fps video back in, aiming to produce a 1600 fps version.
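Some back-of-the-envelope arithmetic shows why each run takes so long (the 90-second duration is my rough figure for the clip length, and both runs happen to be 8x interpolation):

```python
# Rough frame counts for the two interpolation runs above.

def output_frames(duration_s: float, in_fps: int, out_fps: int) -> int:
    factor = out_fps // in_fps              # 200/25 = 1600/200 = 8x
    return int(duration_s * in_fps) * factor

print(output_frames(90, 25, 200))    # 18000 frames to synthesize
print(output_frames(90, 200, 1600))  # 144000 - eight times the work again
```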
I tried batch size = 8: VRAM usage was close to full, but no error was raised.
No luck in the end: after a long wait it failed with an out-of-memory error. The only option is to split the video into several segments and process them separately.
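One way to do the splitting, assuming ffmpeg is available (it is a separate tool, not part of DAIN-APP). This sketch only builds the ffmpeg command lines rather than running them, and the segment length and file names are arbitrary choices of mine:

```python
# Sketch: cut a long clip into fixed-length segments (stream copy, no
# re-encode) so each DAIN-APP run stays within memory.

def split_commands(src: str, total_s: int, seg_s: int) -> list:
    """Build one ffmpeg command per seg_s-second segment of src."""
    cmds = []
    for i, start in enumerate(range(0, total_s, seg_s)):
        cmds.append(
            f"ffmpeg -ss {start} -i {src} -t {seg_s} -c copy part{i:02d}.mp4"
        )
    return cmds

# 90-second clip in 30-second pieces -> three commands to run.
for cmd in split_commands("input.mp4", 90, 30):
    print(cmd)
```

After interpolating each part, the pieces can be concatenated back together (ffmpeg's concat demuxer is one option).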