3D Morphable Model Method

This note is a brief summary of the 3DMM paper A Morphable Model For The Synthesis Of 3D Faces.



The method requires:

  • 3D head laser scans
  • full correspondence across faces (the method for acquiring this is described in the last section of the note)

Model Construction

The model construction process consists of two steps: computing correspondence and constructing the model. Notice that these are TRAINING steps; once the model is constructed, we can apply it to new faces and scans through a matching algorithm.

concept of morphable face model

A face has two major properties: geometry, represented as a shape vector $S = (X_1, Y_1, Z_1, X_2, \ldots, Y_n, Z_n)^T \in \mathbb{R}^{3n}$ that contains the $X, Y, Z$-coordinates of its $n$ vertices; and texture, represented as a texture vector $T = (R_1, G_1, B_1, R_2, \ldots, G_n, B_n)^T \in \mathbb{R}^{3n}$ that contains the $R, G, B$ color values of the $n$ corresponding vertices. An arbitrary new shape $S_{model}$ and new texture $T_{model}$ can be expressed as linear combinations of the $m$ exemplar faces:

$$S_{model} = \sum_{i=1}^{m} a_i S_i, \qquad T_{model} = \sum_{i=1}^{m} b_i T_i, \qquad \sum_{i=1}^{m} a_i = \sum_{i=1}^{m} b_i = 1.$$
Notice that this representation is based on exemplar faces; we actually use the PCA form in the next section to perform reconstruction.
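As a toy numerical illustration of this exemplar-based representation (sizes and data here are made up, not from the paper's scan database), a new shape can be morphed as a weighted sum of exemplar shape vectors whose coefficients sum to one:

```python
import numpy as np

# Toy illustration (sizes and data are made up): a new face shape as a
# linear combination of m exemplar shape vectors.
rng = np.random.default_rng(0)
m, n_vertices = 5, 4
S_exemplars = rng.normal(size=(m, 3 * n_vertices))  # one row per exemplar

def morph_shape(S_exemplars, a):
    """Linear combination of exemplars; the coefficients are constrained
    to sum to one so the result stays inside the span of example faces."""
    a = np.asarray(a, dtype=float)
    assert np.isclose(a.sum(), 1.0), "coefficients must sum to 1"
    return a @ S_exemplars           # (m,) @ (m, 3n) -> (3n,)

# Equal weights reproduce the average of the exemplar faces.
S_avg = morph_shape(S_exemplars, np.full(m, 1.0 / m))
```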

model representation

The construction process can be described as a PCA procedure, i.e., use principal components (eigenvectors of the covariance matrices of shape and texture) to represent the model:

$$S_{model} = \bar{S} + \sum_{i=1}^{m-1} \alpha_i s_i, \qquad T_{model} = \bar{T} + \sum_{i=1}^{m-1} \beta_i t_i, \tag{1}$$

In equation (1), $\bar{S}, \bar{T}$ denote the average shape and texture of the training set, $s_i, t_i$ are the eigenvectors of the covariance matrices (in descending order of their eigenvalues), and $\alpha_i, \beta_i$ are model coefficients.
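A minimal numerical sketch of this PCA construction, assuming toy random data in place of the registered scans (the SVD route below is one common way to get the eigenvectors of the covariance matrix, not necessarily the paper's implementation):

```python
import numpy as np

# Minimal PCA construction of the model in Eq. (1); random toy data stands
# in for the registered scans.
rng = np.random.default_rng(1)
m, dim = 20, 30                    # m faces, dim = 3n
S = rng.normal(size=(m, dim))

S_bar = S.mean(axis=0)             # average shape
X = S - S_bar                      # centered data
U, sing, Vt = np.linalg.svd(X, full_matrices=False)
s_vecs = Vt                        # rows: eigenvectors s_i, descending order
eigvals = sing**2 / m              # eigenvalues sigma_i^2

def reconstruct(alpha, k):
    """S_model = S_bar + sum of the first k terms alpha_i * s_i."""
    return S_bar + alpha[:k] @ s_vecs[:k]
```

Projecting a training face onto all components recovers it exactly; truncating to the leading components gives the usual low-dimensional approximation.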

To quantify the results in terms of their plausibility of being faces, the authors fit a multivariate normal distribution to the data set of 200 faces; the probability of coefficients $\vec{\alpha}$ is then given by

$$p(\vec{\alpha}) \sim \exp\left[-\frac{1}{2}\sum_{i=1}^{m-1} (\alpha_i/\sigma_i)^2\right], \tag{2}$$
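Eq. (2) in code: a hypothetical helper (not an API from the paper) evaluating the log-prior, which penalizes coefficients far from the average face:

```python
import numpy as np

def log_prior(alpha, sigma):
    """Log of Eq. (2) up to an additive normalization constant:
    log p(alpha) = -1/2 * sum_i (alpha_i / sigma_i)^2.
    (A hypothetical helper, not an API from the paper.)"""
    alpha = np.asarray(alpha, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return -0.5 * np.sum((alpha / sigma) ** 2)
```

The average face ($\vec{\alpha} = 0$) is the most probable face under this prior, and coefficients large relative to their $\sigma_i$ are penalized quadratically.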

In addition, we can divide the faces into independent subregions that are morphed independently. In this paper, the authors define four subregions; a complete 3D face is then generated by computing linear combinations for each segment separately and blending them at the borders according to the algorithm of [1].

face attributes

To map facial attributes (gender, fullness of faces, darkness of eyebrows, double chins, hooked and concave noses) defined by a hand-labeled set of example faces to the parameter space of the morphable model, first define shape and texture vectors that manipulate a specific attribute:

$$\Delta S = \sum_{i=1}^{m} \mu_i (S_i - \bar{S}), \qquad \Delta T = \sum_{i=1}^{m} \mu_i (T_i - \bar{T}),$$

where $\mu_i$ are manually assigned labels describing the markedness of the attribute. According to the authors, this is motivated by a performance-based technique for transferring facial expressions. Multiples of $(\Delta S, \Delta T)$ can now be added to or subtracted from any face. But I'm confused about how to adjust the parameters for a specific attribute, e.g., a smile.
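The attribute construction can be sketched with toy data (labels and faces below are random placeholders, and `apply_attribute` is an illustrative helper, not the paper's API):

```python
import numpy as np

# Toy sketch of the attribute vector: hand-assigned labels mu_i weight each
# example's deviation from the mean. Data and helper names are illustrative.
rng = np.random.default_rng(2)
m, dim = 6, 9
S = rng.normal(size=(m, dim))      # example shape vectors
mu = rng.normal(size=m)            # manually assigned attribute labels

S_bar = S.mean(axis=0)
delta_S = mu @ (S - S_bar)         # Delta S = sum_i mu_i (S_i - S_bar)

def apply_attribute(face, delta, strength):
    """Add a multiple of the attribute vector to any face."""
    return face + strength * delta
```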

Application – Matching 3DMMs to images and 3D scans

Matching a morphable model to an image means optimizing the coefficients of the 3D model $(\vec{\alpha}, \vec{\beta})$ along with a set of rendering parameters $\vec{\rho}$ such that they produce an image as close as possible to the input image.

From the parameters $(\vec{\alpha}, \vec{\beta}, \vec{\rho})$, colored images

$$I_{model}(x, y) = (I_r(x, y), I_g(x, y), I_b(x, y))^T$$

are rendered using perspective projection and the Phong illumination model. To estimate the maximum posterior probability $P(\vec{\alpha}, \vec{\beta}, \vec{\rho} \mid I_{input})$, we need to minimize the Euclidean distance between the reconstructed image and the input image:

$$E_I = \sum_{x, y} \left\| I_{input}(x, y) - I_{model}(x, y) \right\|^2.$$
The optimization of this cost function is the process of obtaining the optimal parameters $\vec{\alpha}, \vec{\beta}, \vec{\rho}$. According to the authors, to avoid non-face-like surfaces that lead to the same image, we need to impose constraints on the solutions. This is achieved through the spanned space of shape and texture vectors, as well as a tradeoff between matching quality and prior probabilities, i.e., $P(\vec{\alpha}), P(\vec{\beta}), P(\vec{\rho})$. Thus the problem is transformed into a maximum posterior estimation problem:

$$P(\vec{\alpha}, \vec{\beta}, \vec{\rho} \mid I_{input}) \sim P(I_{input} \mid \vec{\alpha}, \vec{\beta}, \vec{\rho}) \cdot P(\vec{\alpha}, \vec{\beta}, \vec{\rho}).$$

If we neglect correlations between some of the variables, the right-hand side is

$$P(I_{input} \mid \vec{\alpha}, \vec{\beta}, \vec{\rho}) \cdot P(\vec{\alpha}) \cdot P(\vec{\beta}) \cdot P(\vec{\rho}).$$
Here $P(\vec{\alpha}), P(\vec{\beta})$ can be estimated by Eq. (2), and $P(\vec{\rho})$ is a normal distribution using the starting values for $\bar{\rho}_j$ and ad hoc values for $\sigma_{R,j}$. Moreover, $p(I_{input} \mid \vec{\alpha}, \vec{\beta}, \vec{\rho}) \sim \exp\left(-\frac{1}{2\sigma_I^2} E_I\right)$. In [2], the reason for this distribution is: "For Gaussian pixel noise with a standard deviation $\sigma_I$, the likelihood of observing $I_{input}$, given $\vec{\alpha}, \vec{\beta}, \vec{\rho}$, is a product of one-dimensional normal distributions, with one distribution for each pixel and each color channel." In other words, each pixel and channel is assumed to carry independent Gaussian noise, so the joint likelihood is a product of exponentials whose squared residuals sum to exactly $E_I$, giving the single factor $\exp(-E_I / 2\sigma_I^2)$. The posterior probability is then maximized by minimizing

$$E = \frac{1}{\sigma_I^2} E_I + \sum_{i} \frac{\alpha_i^2}{\sigma_{S,i}^2} + \sum_{i} \frac{\beta_i^2}{\sigma_{T,i}^2} + \sum_{j} \frac{(\rho_j - \bar{\rho}_j)^2}{\sigma_{R,j}^2}.$$
In the process of optimization, we render $I_{model}$ from the current parameters, compute the cost function, and then update the parameters to make the cost smaller, iterating these two steps.
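This render-compare-update loop can be sketched as follows. This is only a schematic: the true renderer (perspective projection plus Phong shading) is replaced by a fixed linear map, and all sigma values are ad hoc, so the example shows only the shape of the two-term cost and the iteration, not the paper's actual fitting procedure:

```python
import numpy as np

# Schematic render-compare-update loop. The real renderer is replaced by a
# fixed linear map, and the sigma values are ad hoc.
rng = np.random.default_rng(3)
n_params, n_pixels = 4, 25
A = rng.normal(size=(n_pixels, n_params))

def render(p):
    """Stand-in for rendering I_model from the model parameters."""
    return A @ p

I_input = render(np.array([1.0, -0.5, 0.3, 0.0]))  # synthetic target image
sigma_I, sigma_p = 1.0, 10.0

def cost(p):
    """E = E_I / sigma_I^2 + prior term, mirroring the matching cost."""
    E_I = np.sum((I_input - render(p)) ** 2)
    return E_I / sigma_I**2 + np.sum((p / sigma_p) ** 2)

# Plain gradient descent: render, compare, update, iterate.
p = np.zeros(n_params)
lr = 0.001
for _ in range(2000):
    residual = I_input - render(p)
    grad = -2.0 * A.T @ residual / sigma_I**2 + 2.0 * p / sigma_p**2
    p -= lr * grad
```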

The above is the procedure to match a 3D morphable model to images. To apply it to scans, we just replace $I(x, y)$ with

$$I(h, \phi) = (r(h, \phi), R(h, \phi), G(h, \phi), B(h, \phi))^T,$$

in which $h, \phi$ are the vertical steps and angles in the cylindrical laser-scan representation, and $r$ is the radius.

Building morphable model without correspondence

All processes stated above are based on the assumption that all exemplar faces are in full correspondence. This section describes two algorithms for computing correspondence.

3D correspondence using optic flow

Optic flow was first proposed to estimate corresponding points in images $I(x, y)$; a gradient-based optic flow algorithm is modified to apply to 3D scans $I(h, \phi)$, taking into account color and radius values simultaneously [3].

Bootstrapping the model

Since optic flow does not incorporate any constraints on the set of solutions, it fails on some of the more unusual faces in the database. The modified bootstrapping method improves correspondence iteratively.

The process is as follows:
1. Use optic flow to compute preliminary correspondences between the faces and a reference face.
2. Compute a morphable model based on the correspondences, and use the average face as the new reference face.
3. Match the model to the 3D scans; now we have the original scans and approximated scans.
4. Compute the correspondences between the two sets of scans using optic flow.
5. Iterate the above steps.
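The loop above can be mimicked in a deliberately tiny 1-D analogue. Everything here is illustrative, not the paper's algorithms: "scans" are 1-D signals, "correspondence" is a circular shift, and the optic-flow step is a brute-force cross-correlation search:

```python
import numpy as np

# Tiny 1-D analogue of the bootstrapping loop: align each "scan" to a
# reference, average the aligned scans to get a better reference, repeat.
rng = np.random.default_rng(4)
length = 32
template = np.zeros(length)
template[8:16] = 1.0                        # a simple "face" feature
shifts_true = [0, 3, 5, 7]
scans = [np.roll(template, s) + 0.02 * rng.normal(size=length)
         for s in shifts_true]

def optic_flow_shift(scan, reference):
    """Estimate the circular shift that best aligns `scan` to `reference`."""
    scores = [np.dot(np.roll(scan, -k), reference) for k in range(length)]
    return int(np.argmax(scores))

def bootstrap(scans, reference, n_iter=3):
    """Align to reference, average the aligned scans, repeat."""
    for _ in range(n_iter):
        shifts = [optic_flow_shift(s, reference) for s in scans]
        aligned = [np.roll(s, -k) for s, k in zip(scans, shifts)]
        reference = np.mean(aligned, axis=0)  # new reference = average face
    return shifts, reference

shifts_est, ref = bootstrap(scans, scans[0])
```

The key design point mirrored here is that the reference is not fixed: each round's average of the aligned scans becomes the next round's reference, which is what lets the loop gradually pull outliers into correspondence.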


[1] P. J. Burt and E. H. Adelson. Merging images through pattern decomposition. In Applications of Digital Image Processing VIII, number 575, pages 173–181. SPIE The International Society for Optical Engineering, 1985.

[2] V. Blanz and T. Vetter. Face recognition based on fitting a 3D morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1063–1074, 2003.

[3] T. Vetter and V. Blanz. Estimating coloured 3D face models from single images: An example based approach. In Burkhardt and Neumann, editors, Computer Vision – ECCV'98, Vol. II, Freiburg, Germany, 1998. Springer, Lecture Notes in Computer Science 1407.
