[Three.js] Three.js Documentation - Matrix Transformations

Three.js uses matrices to encode 3D transformations: translation (position), rotation, and scale. Every instance of Object3D has a matrix that stores the object's position, rotation, and scale. This page describes how to update an object's transformation.

Convenience properties and matrixAutoUpdate

There are two ways to update an object's transformation:

  1. Modify the object's position, quaternion, and scale properties, and let Three.js recompute the object's matrix from these properties:

    object.position.copy(start_position);
    object.quaternion.copy(quaternion);
    By default, the matrixAutoUpdate property is set to true, and the matrix is recalculated automatically. If the object is static, or you want to control manually when the recalculation happens, better performance can be obtained by setting matrixAutoUpdate to false:
    object.matrixAutoUpdate = false; 
    Then, after changing any properties, update the matrix manually:
    object.updateMatrix();
  2. Modify the object's matrix directly. The Matrix4 class has various methods for modifying the matrix:

    object.matrix.setRotationFromQuaternion(quaternion);
    object.matrix.setPosition(start_position);
    object.matrixAutoUpdate = false;
    Note that in this case, matrixAutoUpdate must be set to false, and you should make sure not to call updateMatrix. Calling updateMatrix will clobber the manual changes made to the matrix, recalculating it from position, scale, and so on. A combined sketch of both approaches is shown after this list.
    
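The following is a minimal combined sketch of both approaches. The names cube and targetQuaternion are illustrative only and assume a scene set up with the standard Three.js API; setRotationFromQuaternion is the method name used by the version of the library this page documents (newer releases rename it to makeRotationFromQuaternion):

    // Approach 1: change the convenience properties and let Three.js rebuild the matrix.
    cube.position.set(0, 1, 0);
    cube.quaternion.setFromAxisAngle(new THREE.Vector3(0, 1, 0), Math.PI / 4);
    cube.matrixAutoUpdate = false; // static object: skip automatic per-frame recomputation
    cube.updateMatrix();           // recompute the matrix once, after the changes above

    // Approach 2: write the matrix directly and keep automatic updates disabled.
    cube.matrixAutoUpdate = false;
    cube.matrix.setRotationFromQuaternion(targetQuaternion);
    cube.matrix.setPosition(new THREE.Vector3(2, 0, 0));
    // Do not call cube.updateMatrix() here, or the manual changes will be overwritten.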

Object and world matrices

An object's matrix stores the object's transformation relative to its parent; to get the object's transformation in world coordinates, you must access the object's Object3D.matrixWorld.
When either the parent's or the child's transformation changes, you can request that the child's matrixWorld be updated by calling updateMatrixWorld().
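As a small sketch of the parent/child relationship described above (the variables parent and child are illustrative; add, updateMatrixWorld, and Vector3.setFromMatrixPosition are standard Object3D/Vector3 methods in current releases, though very old versions used different names):

    var parent = new THREE.Object3D();
    var child = new THREE.Object3D();
    parent.add(child);

    parent.position.x = 5; // move the parent
    child.position.y = 2;  // the child's position is relative to its parent

    parent.updateMatrixWorld(); // updates parent.matrixWorld and recurses into its children
    // child.matrix now holds the local transform (y = 2);
    // child.matrixWorld holds the world transform (x = 5, y = 2).
    var childWorldPosition = new THREE.Vector3().setFromMatrixPosition(child.matrixWorld);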

Rotation and quaternions

Three.js provides two ways of representing 3D rotations: Euler angles and quaternions, along with methods for converting between the two. Euler angles are subject to a problem called "gimbal lock," in which certain configurations lose a degree of freedom (preventing the object from being rotated about one axis). For this reason, object rotations are always stored in the object's quaternion.
Older versions of the library included a useQuaternion property which, when set to false, caused the object's matrix to be calculated from an Euler angle. This practice is deprecated; instead, you should use the setRotationFromEuler method, which will update the quaternion.
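A short sketch of the recommended pattern, assuming object is any Object3D instance and that the THREE.Euler class of the current API is available (older releases passed a Vector3 of angles instead):

    var euler = new THREE.Euler(Math.PI / 2, 0, 0, 'XYZ'); // rotate 90 degrees about the X axis
    object.setRotationFromEuler(euler); // converts the Euler angles and updates object.quaternion
    // Equivalent lower-level call: object.quaternion.setFromEuler(euler);
    // The rotation is stored internally as a quaternion, which avoids gimbal lock.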

