Abstracts of Face Sketch Synthesis Related Papers (for self-study and memorization)

Face Photo-Sketch Synthesis via Knowledge Transfer

IJCAI 2019
Abstract:
Although deep neural networks have demonstrated strong power in the face photo-sketch synthesis task, their performance is still limited by the lack of training data (photo-sketch pairs). Knowledge Transfer (KT), which aims at training a smaller and faster student network with the information learned from a larger and more accurate teacher network, has attracted much attention recently due to its superior performance in the acceleration and compression of deep neural networks. This inspires us to train a relatively small student network on limited training data by transferring knowledge from a larger teacher model trained on sufficient data for other tasks. Therefore, we propose a novel knowledge transfer framework to synthesize face photos from face sketches or face sketches from face photos. In particular, we utilize two teacher networks, trained on large amounts of data in related tasks, to learn knowledge of face photos and knowledge of face sketches separately and transfer it to two student networks simultaneously. The two student networks, one for the photo -> sketch task and the other for the sketch -> photo task, can mimic and transform the two kinds of knowledge and transfer their knowledge mutually. With the proposed method, we can train a model with superior performance using a small set of photo-sketch pairs. We validate the effectiveness of our method across several datasets. Quantitative and qualitative evaluations illustrate that our model outperforms other state-of-the-art methods in generating face sketches (or photos) with high visual quality and recognition ability.
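A minimal sketch (in PyTorch) of the teacher-student idea described above: a student generator is trained on the small paired set while also matching features from a pretrained teacher. The interfaces here (`student` returning both its output and an intermediate feature map, `teacher` returning a feature map) and the loss weighting `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def transfer_step(student, teacher, photo, sketch, optimizer, alpha=0.5):
    """One training step for the photo -> sketch student (assumed interfaces)."""
    teacher.eval()
    with torch.no_grad():
        # Teacher trained on a related large-scale task provides target features.
        t_feat = teacher(sketch)
    pred, s_feat = student(photo)          # student returns output and features
    task_loss = F.l1_loss(pred, sketch)    # supervised loss on the small paired set
    kt_loss = F.mse_loss(s_feat, t_feat)   # mimic the teacher's representation
    loss = task_loss + alpha * kt_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```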

Face Sketch Synthesis From a Single Photo-Sketch Pair

TCSVT 2017
Abstract
Face sketch synthesis is crucial in many practical applications, such as digital entertainment and law enforcement. Previous methods relying on many photo-sketch pairs have made great progress. State-of-the-art face sketch synthesis algorithms adopt Bayesian inference (BI) (e.g., Markov random fields) to select local sketch patches around the corresponding position from a set of training data. However, these methods have two limitations: 1) they depend on many training photo-sketch pairs and 2) they cannot handle nonfacial factors (e.g., hairpins, glasses, backgrounds, and image size) if these factors are absent from the training data. In this paper, we propose a novel coarse-to-fine face sketch synthesis method that is capable of handling nonfacial factors using only a single photo-sketch pair. Our method introduces a cascaded image synthesis (CIS) strategy and integrates sparse representation-based greedy search (SRGS) with BI for face sketch synthesis. We first apply SRGS to select candidate sketch patches from the training patch pairs sampled from the single photo-sketch pair. We then employ BI to estimate an initial sketch. Afterward, the input photo and the estimated initial sketch are taken as an additional photo-sketch pair for training. Finally, we adopt CIS with the resulting two photo-sketch pairs to further improve the quality of the initial sketch. The experimental results on several databases demonstrate that our algorithm outperforms state-of-the-art methods.
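The coarse-to-fine cascade can be summarized as follows; `synthesize` stands in for the paper's SRGS + BI patch-based synthesis step and is passed in as a callable, so every name here is a structural placeholder rather than the authors' implementation.

```python
# Structural sketch of CIS, assuming `synthesize(photo, pairs)` implements the
# SRGS + Bayesian-inference patch synthesis over the given photo-sketch pairs.
def cascaded_synthesis(photo, train_photo, train_sketch, synthesize, n_rounds=2):
    pairs = [(train_photo, train_sketch)]     # the single given pair
    sketch = synthesize(photo, pairs)         # coarse initial estimate
    for _ in range(n_rounds):
        # Treat the input photo and its current estimate as an extra training
        # pair, so later rounds can also draw candidate patches from it.
        sketch = synthesize(photo, pairs + [(photo, sketch)])
    return sketch
```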

Robust Face Sketch Style Synthesis

TIP 2016
Abstract
Heterogeneous image conversion is a critical issue in many computer vision tasks, among which example-based face sketch style synthesis provides a convenient way to apply artistic effects to photos. However, existing face sketch style synthesis methods generate stylistic sketches depending on many photo-sketch pairs, a requirement that limits their ability to generalize to arbitrary sketch styles. To address this drawback, we propose a robust face sketch style synthesis method that can convert photos to sketches of an arbitrary style based on only one corresponding template sketch. In the proposed method, a sparse representation-based greedy search strategy is first applied to estimate an initial sketch. Then, multi-scale features and Euclidean distance are employed to select candidate image patches from the initial estimated sketch and the template sketch. To further refine the obtained candidate image patches, a multi-feature-based optimization model is introduced. Finally, by assembling the refined candidate image patches, the final face sketch is obtained. To further enhance the quality of synthesized sketches, a cascaded regression strategy is adopted. Experimental results on several commonly used face sketch databases and celebrity photos demonstrate the effectiveness of the proposed method compared with state-of-the-art face sketch synthesis methods.

Scoot: A Perceptual Metric for Facial Sketches

ICCV 2019
Abstract
The human visual system has a strong ability to quickly assess the perceptual similarity between two facial sketches. However, existing popular facial sketch metrics, e.g., FSIM and SSIM, which were initially designed for evaluating local image distortion, often fail to capture the perceptual similarity between faces. In this paper, we design a perceptual metric, called Structure Co-Occurrence Texture (Scoot), which simultaneously considers block-level spatial structure and co-occurrence texture statistics. To test the quality of metrics, we propose three novel meta-measures based on various reliable properties. Extensive experiments demonstrate that our Scoot metric exceeds the performance of prior work. Besides, we built the first large-scale (152k judgments) human-perception-based sketch database for evaluating how well a metric is consistent with human perception. Our results suggest that “spatial structure” and “co-occurrence texture” are two generally applicable perceptual features in face sketch synthesis.
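For intuition, here is a toy approximation (not the official Scoot implementation) of the two ingredients the metric combines: gray-level co-occurrence texture statistics compared block by block across the two sketches. The quantization level, block size, and final aggregation are assumptions; it relies on `skimage` (>= 0.19 for `graycomatrix`).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.util import view_as_blocks

def cooc_features(block, levels=8):
    # Quantize a uint8 grayscale block to a few gray levels, then summarize
    # its co-occurrence matrix with three standard texture statistics.
    q = (block.astype(float) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "energy", "homogeneity")])

def toy_scoot(sketch_a, sketch_b, block=32):
    # Compare texture statistics per block (block-level spatial structure).
    # Assumes uint8 grayscale images with sides divisible by `block`.
    fa = [cooc_features(b) for row in view_as_blocks(sketch_a, (block, block)) for b in row]
    fb = [cooc_features(b) for row in view_as_blocks(sketch_b, (block, block)) for b in row]
    dists = [np.linalg.norm(a - b) for a, b in zip(fa, fb)]
    return 1.0 / (1.0 + np.mean(dists))   # higher = more similar
```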

A multi-scale conditional generative adversarial network for face sketch synthesis

Abstract
We investigate the conditional generative adversarial network (cGAN) as a solution to the face-to-sketch translation problem. These networks not only learn the mapping between a face and the corresponding sketch, but also learn a loss function to train this mapping automatically, which makes it possible to treat the translation problem as minimizing a learned loss. Previous works employ cGANs at a single scale and therefore lack multi-scale information. In this work, considering that multi-scale image representations capture texture, structure, and other important features more effectively, we construct a three-layer pyramid model to obtain multi-scale information and employ the proposed multi-scale cGAN to learn the mapping. With respect to four metrics, our method outperforms previous models.
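A minimal sketch of the three-layer pyramid used to expose multi-scale inputs to the network; the factor-of-two average-pooling downsampling is an assumption, not the paper's stated choice.

```python
import torch.nn.functional as F

def build_pyramid(img, levels=3):
    """Return [full, 1/2, 1/4] resolution versions of an (N, C, H, W) tensor."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = F.avg_pool2d(img, kernel_size=2)  # halve spatial resolution
        pyramid.append(img)
    return pyramid
```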

Face sketch synthesis via sparse representation-based greedy search

TIP 2015
Abstract
Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle some nonfacial factors, such as hair style, hairpins, and glasses, if these factors are excluded from the training set. In addition, previous methods only work under well-controlled conditions and fail on images with backgrounds and sizes different from the training set. To this end, this paper presents a novel method that combines both the similarity between different image patches and prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the search process. For a test photo patch, we first obtain its sparse coefficients via the learned dictionary and then search for its nearest neighbors (candidate patches) among all training photo patches using the sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we relax the nearest-neighbor search area from a local region to the whole image without consuming too much time and 2) our method can produce nonfacial factors that are not contained in the training set and is insensitive to the alignment and size of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics.
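A hedged sketch of the search-by-sparse-coefficient idea: encode patches over a learned dictionary, then find nearest neighbors in coefficient space rather than raw pixel space, which makes searching the whole training set cheap. The dictionary size, sparsity level, and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def fit_codes(train_patches, n_atoms=256, n_nonzero=5):
    """Learn a patch dictionary and sparse-code all training patches.

    train_patches: (n_patches, patch_dim) array of flattened photo patches.
    """
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero)
    codes = dico.fit(train_patches).transform(train_patches)
    return dico, codes

def candidate_patches(test_patch, dico, train_codes, k=10):
    code = dico.transform(test_patch.reshape(1, -1))
    # Search the WHOLE training set in the compact sparse-coefficient space.
    dists = np.linalg.norm(train_codes - code, axis=1)
    return np.argsort(dists)[:k]   # indices of candidate photo patches
```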

APDrawingGAN: Generating Artistic Portrait Drawings from Face Photos with Hierarchical GANs

Abstract
Significant progress has been made with image stylization using deep learning, especially with generative adversarial networks (GANs). However, existing methods fail to produce high-quality artistic portrait drawings. Such drawings have a highly abstract style, containing a sparse set of continuous graphical elements such as lines, so small artifacts are much more visible than in painting styles. Moreover, artists tend to use different strategies to draw different facial features, and the lines drawn are only loosely related to obvious image features. To address these challenges, we propose APDrawingGAN, a novel GAN-based architecture that builds upon hierarchical generators and discriminators, combining both a global network (for the image as a whole) and local networks (for individual facial regions). This allows dedicated drawing strategies to be learned for different facial features. Since artists’ drawings may not have lines perfectly aligned with image features, we develop a novel loss based on distance transforms to measure the similarity between generated and artists’ drawings, leading to improved strokes in portrait drawings. To train APDrawingGAN, we construct an artistic drawing dataset containing high-resolution portrait photos and corresponding professional artistic drawings. Extensive experiments, and a user study, show that APDrawingGAN produces significantly better artistic drawings than state-of-the-art methods.
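A simplified, chamfer-style version of a distance-transform line loss, for intuition: instead of penalizing exact pixel overlap, it penalizes how far each drawn line pixel lies from the nearest line in the reference drawing. The binarization threshold and symmetric form are assumptions; the paper's actual loss differs in detail and is used inside GAN training.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dt_line_loss(pred, target, thresh=0.5):
    """pred, target: grayscale drawings in [0, 1], lines darker than thresh.

    Assumes both images contain at least one line pixel.
    """
    pred_lines = pred < thresh
    target_lines = target < thresh
    # Distance from every pixel to the nearest target (resp. predicted) line;
    # distance_transform_edt measures distance to the nearest zero (line) pixel.
    dt_target = distance_transform_edt(~target_lines)
    dt_pred = distance_transform_edt(~pred_lines)
    # Symmetric chamfer-style penalty on line positions.
    return dt_target[pred_lines].mean() + dt_pred[target_lines].mean()
```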
