04 Gradient-Enhanced Kriging (GEK) Surrogate Model: Theoretical Derivation

Gradient information can be used to improve the accuracy of a Kriging model; moreover, when the gradients are obtained with fast techniques such as the adjoint method, it can also improve the efficiency of building the model. Using gradient information in this way has become a surrogate-modeling approach in its own right, known as the gradient-enhanced Kriging (GEK) model.

4.1 Problem Definition

Besides first-order derivatives, second-order derivative information can also be used to improve surrogate accuracy. However, since second derivatives are expensive to compute and contribute comparatively little additional accuracy, this post focuses on incorporating first-order partial derivative information into the Kriging model.

For an optimization problem with $m$ design variables, suppose the objective function (or a state variable) $y$ is sampled at $n$ locations, yielding $n$ function values and $n \times m$ partial-derivative values. The sampling locations and the corresponding responses are:

$$\begin{aligned}
&\mathbf S=\begin{bmatrix}\mathbf x^{(1)} & \ldots & \mathbf x^{(n)} & \mathbf x^{(1)} & \ldots & \mathbf x^{(1)} & \ldots & \mathbf x^{(n)} & \ldots & \mathbf x^{(n)}\end{bmatrix}^T\in\mathbb R^{(n+nm)\times m}\\
&\mathbf y_s=\begin{bmatrix}y^{(1)} & \ldots & y^{(n)} & \frac{\partial y^{(1)}}{\partial x_1} & \ldots & \frac{\partial y^{(1)}}{\partial x_m} & \ldots & \frac{\partial y^{(n)}}{\partial x_1} & \ldots & \frac{\partial y^{(n)}}{\partial x_m}\end{bmatrix}^T\in\mathbb R^{n+nm}
\end{aligned} \tag{1}$$
In Eq. (1), each partial derivative is treated as an independent piece of sample information. If some of this information is unavailable at certain points, it is simply removed from the corresponding positions of the data set; if no derivative information is available at all, the GEK model reduces to the ordinary Kriging model.
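As a concrete illustration, the following minimal sketch assembles the augmented data set of Eq. (1) with NumPy. The function name `build_gek_data`, the array layout, and the toy test function are illustrative assumptions, not a fixed API:

```python
import numpy as np

def build_gek_data(X, y, dy):
    """X: (n, m) sample points; y: (n,) function values; dy: (n, m) gradients.
    Returns (S, y_s) as in Eq. (1): all function values first, then the
    partial derivatives sample by sample."""
    n, m = X.shape
    S = np.vstack([X, np.repeat(X, m, axis=0)])   # each x^(i) repeated m times
    y_s = np.concatenate([y, dy.reshape(n * m)])  # row-major: dy[0,:], dy[1,:], ...
    return S, y_s

# Example: n = 3 samples of y = x1^2 + x2 in m = 2 dimensions.
X = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
y = X[:, 0]**2 + X[:, 1]
dy = np.column_stack([2.0 * X[:, 0], np.ones(len(X))])
S, y_s = build_gek_data(X, y, dy)   # y_s has shape (3 + 3*2,) = (9,)
```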

4.2 Building the GEK Model

The GEK prediction of the unknown function is defined as a linear weighted sum of all sampled function values and partial derivatives:
$$\hat y(\mathbf x)=\sum_{i=1}^n\omega^{(i)}y^{(i)}+\sum_{j=1}^{m}\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial y^{(i)}}{\partial x_j} \tag{2}$$
Here $\omega^{(i)}$ is the weight of the function value at the $i$-th sampling location, and $\lambda_j^{(i)}$ is the weight of $\partial y^{(i)}/\partial x_j$ (the partial derivative of the function with respect to the $j$-th design variable at the $i$-th sampling location). If the unknown function is a physical quantity, $\lambda_j^{(i)}$ carries the dimension of $\omega\,\Delta x_j$. As in the ordinary Kriging model, a stationary random process assumption is introduced:
$$Y(\mathbf x)=\beta_0+Z(\mathbf x) \tag{3}$$
where $\beta_0$ is an unknown constant, also called the global trend model, representing the mathematical expectation of $Y(\mathbf x)$, and $Z(\cdot)$ is a stationary random process with zero mean and variance $\sigma^2$. The covariances between the random variables at different locations of the design space satisfy:
$$\begin{aligned}
&\mathrm{Cov}[Z(\mathbf x^{(i)}),Z(\mathbf x^{(j)})]=\sigma^2R(\mathbf x^{(i)},\mathbf x^{(j)})\\
&\mathrm{Cov}\!\left[\frac{\partial Z(\mathbf x^{(i)})}{\partial x_k^{(i)}},Z(\mathbf x^{(j)})\right]=\sigma^2\frac{\partial R(\mathbf x^{(i)},\mathbf x^{(j)})}{\partial x_k^{(i)}}\\
&\mathrm{Cov}\!\left[Z(\mathbf x^{(i)}),\frac{\partial Z(\mathbf x^{(j)})}{\partial x_l^{(j)}}\right]=\sigma^2\frac{\partial R(\mathbf x^{(i)},\mathbf x^{(j)})}{\partial x_l^{(j)}}\\
&\mathrm{Cov}\!\left[\frac{\partial Z(\mathbf x^{(i)})}{\partial x_l^{(i)}},\frac{\partial Z(\mathbf x^{(j)})}{\partial x_k^{(j)}}\right]=\sigma^2\frac{\partial^2R(\mathbf x^{(i)},\mathbf x^{(j)})}{\partial x_l^{(i)}\,\partial x_k^{(j)}}
\end{aligned} \tag{4}$$
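For concreteness, the sketch below implements a Gaussian correlation function and the analytic derivatives required by Eq. (4). The Gaussian kernel and the hyperparameter vector `theta` are assumptions chosen for illustration; any twice-differentiable correlation model can be used here.

```python
import numpy as np

# Gaussian correlation R(x, x') = exp(-sum_k theta_k (x_k - x'_k)^2),
# a common (assumed) choice; theta holds the positive hyperparameters.

def corr(x1, x2, theta):
    d = x1 - x2
    return np.exp(-np.sum(theta * d**2))

def dcorr_dx2(x1, x2, theta):
    """First derivative w.r.t. the second argument: dR/dx'_k, shape (m,)."""
    d = x1 - x2
    return 2.0 * theta * d * corr(x1, x2, theta)

def dcorr_dx1(x1, x2, theta):
    """First derivative w.r.t. the first argument: dR/dx_k = -dR/dx'_k."""
    return -dcorr_dx2(x1, x2, theta)

def d2corr(x1, x2, theta):
    """Mixed second derivatives d^2 R / (dx_k dx'_l), shape (m, m)."""
    d = x1 - x2
    return (2.0 * np.diag(theta)
            - 4.0 * np.outer(theta * d, theta * d)) * corr(x1, x2, theta)
```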
The GEK model seeks the weights $\boldsymbol\omega$ and $\boldsymbol\lambda$ that minimize the following mean squared error:
$$\mathrm{MSE}[\hat y(\mathbf x)]=E\left[\left(\sum_{i=1}^n\omega^{(i)}Y(\mathbf x^{(i)})+\sum_{j=1}^m\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial Y(\mathbf x^{(i)})}{\partial x_j}-Y(\mathbf x)\right)^2\right] \tag{5}$$
subject to the unbiasedness condition:
$$E\left[\sum_{i=1}^n\omega^{(i)}Y(\mathbf x^{(i)})+\sum_{j=1}^m\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial Y(\mathbf x^{(i)})}{\partial x_j}-Y(\mathbf x)\right]=0 \tag{6}$$
Since $E[Y(\cdot)]=\beta_0$ and the derivatives of $Z$ have zero mean, the unbiasedness condition simplifies to:
$$\sum_{i=1}^n\omega^{(i)}=1 \tag{7}$$
Construct the Lagrangian:
$$L=E\left[\left(\sum_{i=1}^n\omega^{(i)}Y(\mathbf x^{(i)})+\sum_{j=1}^m\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial Y(\mathbf x^{(i)})}{\partial x_j}-Y(\mathbf x)\right)^2\right]+\mu\left[\sum_{i=1}^n\omega^{(i)}-1\right] \tag{8}$$
Setting the partial derivatives of $L$ to zero:
$$\begin{cases}
\dfrac{\partial L}{\partial\omega^{(k)}}=E\left[2\left(\displaystyle\sum_{i=1}^n\omega^{(i)}Y(\mathbf x^{(i)})+\sum_{j=1}^m\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial Y(\mathbf x^{(i)})}{\partial x_j}-Y(\mathbf x)\right)Y(\mathbf x^{(k)})\right]+\mu=0\\[3mm]
\dfrac{\partial L}{\partial\lambda_l^{(k)}}=E\left[2\left(\displaystyle\sum_{i=1}^n\omega^{(i)}Y(\mathbf x^{(i)})+\sum_{j=1}^m\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial Y(\mathbf x^{(i)})}{\partial x_j}-Y(\mathbf x)\right)\frac{\partial Y(\mathbf x^{(k)})}{\partial x_l^{(k)}}\right]=0\\[3mm]
\dfrac{\partial L}{\partial\mu}=\displaystyle\sum_{i=1}^n\omega^{(i)}-1=0
\end{cases} \tag{9}$$
In addition, using Eqs. (3) and (4), the required moments are:
$$\begin{aligned}
&E[Y(\mathbf x^{(i)})\,Y(\mathbf x^{(k)})]=\beta_0^2+\sigma^2R(\mathbf x^{(k)},\mathbf x^{(i)})\\
&E\!\left[\frac{\partial Y(\mathbf x^{(i)})}{\partial x_j^{(i)}}\,Y(\mathbf x^{(k)})\right]=\sigma^2\frac{\partial R(\mathbf x^{(k)},\mathbf x^{(i)})}{\partial x_j^{(i)}}\\
&E[Y(\mathbf x)\,Y(\mathbf x^{(k)})]=\beta_0^2+\sigma^2R(\mathbf x^{(k)},\mathbf x)\\
&E\!\left[Y(\mathbf x^{(i)})\,\frac{\partial Y(\mathbf x^{(k)})}{\partial x_l^{(k)}}\right]=\sigma^2\frac{\partial R(\mathbf x^{(i)},\mathbf x^{(k)})}{\partial x_l^{(k)}}\\
&E\!\left[\frac{\partial Y(\mathbf x^{(i)})}{\partial x_j^{(i)}}\,\frac{\partial Y(\mathbf x^{(k)})}{\partial x_l^{(k)}}\right]=\sigma^2\frac{\partial^2R(\mathbf x^{(i)},\mathbf x^{(k)})}{\partial x_j^{(i)}\,\partial x_l^{(k)}}\\
&E\!\left[Y(\mathbf x)\,\frac{\partial Y(\mathbf x^{(k)})}{\partial x_l^{(k)}}\right]=\sigma^2\frac{\partial R(\mathbf x^{(k)},\mathbf x)}{\partial x_l^{(k)}}
\end{aligned} \tag{10}$$
Substituting Eq. (10) into Eq. (9) yields:
$$\begin{cases}
\displaystyle\sum_{i=1}^n\omega^{(i)}R(\mathbf x^{(k)},\mathbf x^{(i)})+\sum_{j=1}^m\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial R(\mathbf x^{(k)},\mathbf x^{(i)})}{\partial x_j^{(i)}}+\frac{\mu}{2\sigma^2}=R(\mathbf x^{(k)},\mathbf x)\\[3mm]
\displaystyle\sum_{i=1}^n\omega^{(i)}\frac{\partial R(\mathbf x^{(k)},\mathbf x^{(i)})}{\partial x_l^{(k)}}+\sum_{j=1}^m\sum_{i=1}^n\lambda_j^{(i)}\frac{\partial^2R(\mathbf x^{(k)},\mathbf x^{(i)})}{\partial x_l^{(k)}\,\partial x_j^{(i)}}=\frac{\partial R(\mathbf x^{(k)},\mathbf x)}{\partial x_l^{(k)}}\\[3mm]
\displaystyle\sum_{i=1}^n\omega^{(i)}=1
\end{cases} \tag{11}$$
Rewriting the above in matrix form (with $\widetilde\mu=\mu/(2\sigma^2)$):
$$\begin{bmatrix}\overline{\mathbf R} & \overline{\mathbf F}\\ \overline{\mathbf F}^T & 0\end{bmatrix}\begin{bmatrix}\boldsymbol\lambda\\ \widetilde\mu\end{bmatrix}=\begin{bmatrix}\overline{\mathbf r}\\ 1\end{bmatrix} \tag{12}$$
where:
$$\overline{\mathbf R}=\begin{bmatrix}\mathbf R & \partial\mathbf R\\ \partial\mathbf R^T & \partial^2\mathbf R\end{bmatrix} \tag{13}$$
Here $\mathbf R$ is identical to the correlation matrix of the ordinary Kriging model:
$$\begin{aligned}
&\mathbf R=\begin{bmatrix}R(\mathbf x^{(1)},\mathbf x^{(1)}) & \ldots & R(\mathbf x^{(1)},\mathbf x^{(n)})\\ \vdots & \ddots & \vdots\\ R(\mathbf x^{(n)},\mathbf x^{(1)}) & \ldots & R(\mathbf x^{(n)},\mathbf x^{(n)})\end{bmatrix}\in\mathbb R^{n\times n}\\
&\partial\mathbf R=\begin{bmatrix}\frac{\partial R(\mathbf x^{(1)},\mathbf x^{(1)})}{\partial x_1^{(1)}} & \ldots & \frac{\partial R(\mathbf x^{(1)},\mathbf x^{(1)})}{\partial x_m^{(1)}} & \ldots & \frac{\partial R(\mathbf x^{(1)},\mathbf x^{(n)})}{\partial x_1^{(n)}} & \ldots & \frac{\partial R(\mathbf x^{(1)},\mathbf x^{(n)})}{\partial x_m^{(n)}}\\ \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots\\ \frac{\partial R(\mathbf x^{(n)},\mathbf x^{(1)})}{\partial x_1^{(1)}} & \ldots & \frac{\partial R(\mathbf x^{(n)},\mathbf x^{(1)})}{\partial x_m^{(1)}} & \ldots & \frac{\partial R(\mathbf x^{(n)},\mathbf x^{(n)})}{\partial x_1^{(n)}} & \ldots & \frac{\partial R(\mathbf x^{(n)},\mathbf x^{(n)})}{\partial x_m^{(n)}}\end{bmatrix}\in\mathbb R^{n\times nm}\\
&\partial^2\mathbf R=\begin{bmatrix}\frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(1)})}{\partial(x_1^{(1)})^2} & \ldots & \frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(1)})}{\partial x_1^{(1)}\partial x_m^{(1)}} & \ldots & \frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(n)})}{\partial x_1^{(1)}\partial x_1^{(n)}} & \ldots & \frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(n)})}{\partial x_1^{(1)}\partial x_m^{(n)}}\\ \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots\\ \frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(1)})}{\partial x_m^{(1)}\partial x_1^{(1)}} & \ldots & \frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(1)})}{\partial(x_m^{(1)})^2} & \ldots & \frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(n)})}{\partial x_m^{(1)}\partial x_1^{(n)}} & \ldots & \frac{\partial^2R(\mathbf x^{(1)},\mathbf x^{(n)})}{\partial x_m^{(1)}\partial x_m^{(n)}}\\ \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots\\ \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(1)})}{\partial x_1^{(n)}\partial x_1^{(1)}} & \ldots & \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(1)})}{\partial x_1^{(n)}\partial x_m^{(1)}} & \ldots & \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(n)})}{\partial(x_1^{(n)})^2} & \ldots & \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(n)})}{\partial x_1^{(n)}\partial x_m^{(n)}}\\ \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots\\ \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(1)})}{\partial x_m^{(n)}\partial x_1^{(1)}} & \ldots & \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(1)})}{\partial x_m^{(n)}\partial x_m^{(1)}} & \ldots & \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(n)})}{\partial x_m^{(n)}\partial x_1^{(n)}} & \ldots & \frac{\partial^2R(\mathbf x^{(n)},\mathbf x^{(n)})}{\partial(x_m^{(n)})^2}\end{bmatrix}\in\mathbb R^{nm\times nm}
\end{aligned} \tag{14}$$
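A minimal sketch of assembling $\overline{\mathbf R}$ from Eqs. (13) and (14), reusing the `corr`, `dcorr_dx2`, and `d2corr` helpers defined earlier (the double loop is written for clarity rather than speed):

```python
import numpy as np

def build_R_bar(X, theta):
    """X: (n, m) sample points. Returns R_bar, shape (n + n*m, n + n*m)."""
    n, m = X.shape
    R = np.empty((n, n))            # value/value block
    dR = np.empty((n, n * m))       # dR(x_i, x_j) / dx^{(j)} block
    d2R = np.empty((n * m, n * m))  # d2R(x_i, x_j) / dx^{(i)} dx^{(j)} block
    for i in range(n):
        for j in range(n):
            R[i, j] = corr(X[i], X[j], theta)
            dR[i, j*m:(j+1)*m] = dcorr_dx2(X[i], X[j], theta)
            d2R[i*m:(i+1)*m, j*m:(j+1)*m] = d2corr(X[i], X[j], theta)
    # Eq. (13): block structure with the transposed first-derivative block.
    return np.block([[R, dR], [dR.T, d2R]])
```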
Also in Eq. (12):
$$\overline{\mathbf r}=\begin{bmatrix}\mathbf r\\ \partial\mathbf r\end{bmatrix} \tag{15}$$
where $\mathbf r$ is identical to the correlation vector of the ordinary Kriging model:
$$\begin{aligned}
&\mathbf r=\begin{bmatrix}R(\mathbf x^{(1)},\mathbf x) & \ldots & R(\mathbf x^{(n)},\mathbf x)\end{bmatrix}^T\\
&\partial\mathbf r=\begin{bmatrix}\frac{\partial R(\mathbf x^{(1)},\mathbf x)}{\partial x_1^{(1)}} & \ldots & \frac{\partial R(\mathbf x^{(1)},\mathbf x)}{\partial x_m^{(1)}} & \ldots & \frac{\partial R(\mathbf x^{(n)},\mathbf x)}{\partial x_1^{(n)}} & \ldots & \frac{\partial R(\mathbf x^{(n)},\mathbf x)}{\partial x_m^{(n)}}\end{bmatrix}^T
\end{aligned} \tag{16}$$
Finally, in Eq. (12):
$$\begin{aligned}
&\boldsymbol\lambda=\begin{bmatrix}\omega^{(1)} & \ldots & \omega^{(n)} & \lambda_1^{(1)} & \ldots & \lambda_m^{(1)} & \ldots & \lambda_1^{(n)} & \ldots & \lambda_m^{(n)}\end{bmatrix}^T\\
&\overline{\mathbf F}=[\underbrace{1\ \ \ldots\ \ 1}_{n}\ \ \underbrace{0\ \ \ldots\ \ 0}_{nm}]^T\in\mathbb R^{n+nm}
\end{aligned} \tag{17}$$
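The vectors of Eqs. (15) through (17) can be sketched the same way, again reusing the helpers above; note that $\partial\mathbf r$ differentiates with respect to the sample points, i.e. the first argument of $R$. The names `build_r_bar` and `build_F_bar` are illustrative:

```python
import numpy as np

def build_r_bar(X, x, theta):
    """X: (n, m) samples; x: (m,) prediction point. Returns r_bar, (n + n*m,)."""
    n, m = X.shape
    r = np.array([corr(X[i], x, theta) for i in range(n)])              # Eq. (16), r
    dr = np.concatenate([dcorr_dx1(X[i], x, theta) for i in range(n)])  # Eq. (16), dr
    return np.concatenate([r, dr])

def build_F_bar(n, m):
    """Eq. (17): ones for the n function values, zeros for the n*m gradients."""
    return np.concatenate([np.ones(n), np.zeros(n * m)])
```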
Putting everything together, the GEK prediction is:
$$\hat y(\mathbf x)=\begin{bmatrix}\overline{\mathbf r}\\ 1\end{bmatrix}^T\begin{bmatrix}\overline{\mathbf R} & \overline{\mathbf F}\\ \overline{\mathbf F}^T & 0\end{bmatrix}^{-1}\begin{bmatrix}\mathbf y_s\\ 0\end{bmatrix}=\beta_0+\overline{\mathbf r}^T\overline{\mathbf R}^{-1}(\mathbf y_s-\beta_0\overline{\mathbf F}) \tag{18}$$
where:
$$\beta_0=(\overline{\mathbf F}^T\overline{\mathbf R}^{-1}\overline{\mathbf F})^{-1}\overline{\mathbf F}^T\overline{\mathbf R}^{-1}\mathbf y_s \tag{19}$$
The mean squared error of the prediction is:
$$\mathrm{MSE}[\hat y(\mathbf x)]=\sigma^2\left(1-\begin{bmatrix}\overline{\mathbf r}\\ 1\end{bmatrix}^T\begin{bmatrix}\overline{\mathbf R} & \overline{\mathbf F}\\ \overline{\mathbf F}^T & 0\end{bmatrix}^{-1}\begin{bmatrix}\overline{\mathbf r}\\ 1\end{bmatrix}\right)=\sigma^2\left(1-\overline{\mathbf r}^T\overline{\mathbf R}^{-1}\overline{\mathbf r}+\frac{(1-\overline{\mathbf F}^T\overline{\mathbf R}^{-1}\overline{\mathbf r})^2}{\overline{\mathbf F}^T\overline{\mathbf R}^{-1}\overline{\mathbf F}}\right) \tag{20}$$
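Combining the pieces, here is a sketch of the GEK prediction of Eqs. (18) and (19) and its mean squared error of Eq. (20). The process variance `sigma2` is assumed to come from hyperparameter training, which is outside the scope of this post:

```python
import numpy as np

def gek_predict(X, y_s, x, theta, sigma2):
    """GEK prediction and MSE at point x, per Eqs. (18)-(20)."""
    n, m = X.shape
    R_bar = build_R_bar(X, theta)
    r_bar = build_r_bar(X, x, theta)
    F_bar = build_F_bar(n, m)
    # Solve with R_bar instead of forming its inverse explicitly.
    Ri_y = np.linalg.solve(R_bar, y_s)
    Ri_F = np.linalg.solve(R_bar, F_bar)
    Ri_r = np.linalg.solve(R_bar, r_bar)
    beta0 = (F_bar @ Ri_y) / (F_bar @ Ri_F)        # Eq. (19)
    y_hat = beta0 + r_bar @ (Ri_y - beta0 * Ri_F)  # Eq. (18)
    mse = sigma2 * (1.0 - r_bar @ Ri_r
                    + (1.0 - F_bar @ Ri_r)**2 / (F_bar @ Ri_F))  # Eq. (20)
    return y_hat, mse
```

In this form the predictor interpolates both the sampled values and the sampled gradients; with the gradient entries removed from the data set, the same formulas reduce to ordinary Kriging, consistent with the remark after Eq. (1).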

The correlation functions and the model-training procedure of gradient-enhanced Kriging are very similar to those of the ordinary Kriging model; see the earlier blog post (link: link), so they are not repeated here.

