A Policy Update Strategy in Model-free Policy Search: Expectation-Maximization

Expectation-Maximization Algorithm

Policy gradient methods require the user to specify a learning rate, which can be problematic and often results in an unstable learning process or slow convergence. By formulating policy search as an inference problem with latent variables and using the EM algorithm to infer a new policy, this problem can be avoided since no learning rate is required.

The standard EM algorithm, which is well known for determining the maximum likelihood solution of a probabilistic latent variable model, takes the parameter update as a weighted maximum likelihood estimate, which has a closed-form solution for most commonly used policies.

Let's assume that:

  • $y$: observed random variable
  • $z$: unobserved random variable
  • $p_\theta(y, z)$: parameterized joint distribution
Given a data set $Y = [y[1], \ldots, y[N]]^T$, we want to estimate the parameter $\theta$, which means maximizing the log-likelihood:
$$\max_\theta \log p_\theta(Y, Z)$$
Since $Z$ is a latent variable, we cannot solve this maximization problem directly. Instead, we integrate out $Z$ and maximize the log-marginal likelihood of $Y$:
$$\log p_\theta(Y) = \sum_{i=1}^{N} \log p_\theta(y[i]) = \sum_{i=1}^{N} \log \int p_\theta(y[i], z)\, dz$$
However, we generally cannot obtain a closed-form solution for the parameter $\theta$ of our probability model $p_\theta(y, z)$ from this expression. So what is a closed-form solution?

Closed-form solution

An equation is said to be a closed-form solution if it solves a given problem in terms of functions and mathematical operations from a given generally-accepted set. For example, an infinite sum would generally not be considered closed-form. However, the choice of what to call closed-form and what not is rather arbitrary since a new “closed-form” function could simply be defined in terms of the infinite sum.
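For instance, the maximum likelihood estimate of a Gaussian mean is closed form, while the log-marginal likelihood of a mixture model is not (a standard textbook example, added here for illustration):

$$\hat{\mu}_{\mathrm{ML}} = \frac{1}{N}\sum_{i=1}^{N} y[i]
\qquad \text{vs.} \qquad
\log p_\theta(Y) = \sum_{i=1}^{N} \log \sum_{k=1}^{K} \pi_k\, \mathcal{N}\big(y[i] \mid \mu_k, \sigma_k^2\big)$$

In the second expression the sum sits inside the logarithm, so setting the gradient to zero does not yield an explicit formula for the parameters; this is exactly the situation EM is designed for.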

EM (Expectation-Maximization) is a powerful method for estimating the parameters of latent variable models. The basic idea behind it is: if the parameter $\theta$ is known, we can estimate the optimal latent variable $Z$ in view of $Y$ (E-step); if the latent variable $Z$ is known, we can estimate $\theta$ by maximum likelihood estimation (M-step). The EM method can be seen as a kind of coordinate ascent on a lower bound of the log-likelihood.

The iterative procedure for maximizing the log-likelihood consists of two main parts, the Expectation step and the Maximization step, as mentioned above. Assume that we begin at $\theta_0$. Then we execute the following iterative steps (a code sketch of this loop follows the list):

  • Based on $\theta_t$, estimate the expectation of the latent variable $Z_t$ (E-step).
  • Based on $Y$ and $Z_t$, estimate the parameter $\theta_{t+1}$ by maximum likelihood estimation (M-step).
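Here is a minimal sketch of this alternation in Python; the `e_step` and `m_step` callbacks are hypothetical placeholders for whatever concrete latent-variable model is being fit:

```python
import numpy as np

def expectation_maximization(Y, theta0, e_step, m_step, n_iters=100, tol=1e-6):
    """Generic EM loop.

    Y      : observed data
    theta0 : initial parameter guess
    e_step : callable (Y, theta) -> (q, log_lik), the posterior over the
             latent variables and the current log-marginal likelihood
    m_step : callable (Y, q) -> theta, the weighted maximum likelihood update
    """
    theta = theta0
    prev_ll = -np.inf
    for _ in range(n_iters):
        q, log_lik = e_step(Y, theta)      # E-step: q(Z) = p_theta(Z | Y)
        theta = m_step(Y, q)               # M-step: maximize E_q[log p_theta(Y, Z)]
        if abs(log_lik - prev_ll) < tol:   # the lower bound never decreases
            break
        prev_ll = log_lik
    return theta
```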

In general, we are not interested in a point estimate of $Z$ but in its full distribution, i.e. $p_{\theta_t}(Z|Y)$. To be specific, let's introduce a variational auxiliary distribution $q(Z)$ and decompose the log-marginal likelihood using the identity $p_\theta(Y) = p_\theta(Y, Z) / p_\theta(Z|Y)$:

$$\begin{aligned}
\log p_\theta(Y) &= \log p_\theta(Y) \int q(Z)\, dZ \\
&= \int q(Z) \log p_\theta(Y)\, dZ \\
&= \int q(Z) \log \frac{q(Z)\, p_\theta(Y, Z)}{q(Z)\, p_\theta(Z|Y)}\, dZ \\
&= \int q(Z) \log \frac{p_\theta(Y, Z)}{q(Z)}\, dZ + \int q(Z) \log \frac{q(Z)}{p_\theta(Z|Y)}\, dZ \\
&= \mathcal{L}_\theta(q) + \mathrm{KL}\big(q(Z) \,\|\, p_\theta(Z|Y)\big)
\end{aligned}$$
Since the KL divergence is always greater than or equal to zero, the term $\mathcal{L}_\theta(q)$ is a lower bound on the log-marginal likelihood.
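This decomposition can be checked numerically for a tiny discrete model; the joint-probability values below are arbitrary and purely illustrative:

```python
import numpy as np

# Tiny discrete check of  log p(Y) = L(q) + KL(q || p(Z|Y))  for a single
# observation y and a binary latent variable z (the numbers are arbitrary).
p_joint = np.array([0.3, 0.1])   # p(y, z=0), p(y, z=1) for the observed y
p_y = p_joint.sum()              # marginal likelihood p(y)
posterior = p_joint / p_y        # posterior p(z | y)
q = np.array([0.7, 0.3])         # an arbitrary variational distribution q(z)

lower_bound = np.sum(q * np.log(p_joint / q))   # L_theta(q)
kl = np.sum(q * np.log(q / posterior))          # KL(q || p(z|y))
print(np.log(p_y), lower_bound + kl)            # equal up to floating-point error
```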

E-Step

In the E-step we update the variational distribution $q(Z)$ by minimizing the KL divergence $\mathrm{KL}\big(q(Z) \,\|\, p_\theta(Z|Y)\big)$, i.e. setting $q(Z) = p_\theta(Z|Y)$. Note that the value of the log-likelihood $\log p_\theta(Y)$ does not depend on the variational distribution $q(Z)$, so minimizing the KL term pushes the lower bound up to the log-likelihood. In summary, the E-step is:

Update $q(Z)$ $\Leftarrow$ Minimize $\mathrm{KL}\big(q(Z) \,\|\, p_\theta(Z|Y)\big)$ $\Leftarrow$ Set $q(Z) = p_\theta(Z|Y)$

M-Step

In the M-step we optimize the lower bound w.r.t. $\theta$, i.e.

$$\begin{aligned}
\theta_{\text{new}} &= \arg\max_\theta \mathcal{L}_\theta(q) \\
&= \arg\max_\theta \int q(Z) \log \frac{p_\theta(Y, Z)}{q(Z)}\, dZ \\
&= \arg\max_\theta \int q(Z) \log p_\theta(Y, Z)\, dZ + H(q) \\
&= \arg\max_\theta \mathbb{E}_{q(Z)}\big[\log p_\theta(Y, Z)\big] \\
&= \arg\max_\theta Q_\theta(q)
\end{aligned}$$
where $H(q)$ denotes the entropy of $q$ (which does not depend on $\theta$), and $Q_\theta(q)$ is the expected complete-data log-likelihood. The logarithm now acts directly on the joint distribution, so the M-step can typically be obtained in closed form. Moreover,
$$Q_\theta(q) = \int q(Z) \log p_\theta(Y, Z)\, dZ = \sum_{i=1}^{N} \int q_i(z) \log p_\theta(y[i], z)\, dz$$
The M-step is thus a weighted maximum likelihood estimate of $\theta$ using the complete data points $[y[i], z]$ weighted by $q_i(z)$. In summary, the M-step is:
Update $\theta$ $\Leftarrow$ Maximize $Q_\theta(q)$
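To make the weighted maximum likelihood nature of the M-step concrete, here is a minimal sketch for a single Gaussian component, where the responsibilities $q_i(z)$ act as per-sample weights and the update is available in closed form (the function and variable names are illustrative, not from any specific library):

```python
import numpy as np

def gaussian_weighted_mle(y, w):
    """Closed-form weighted maximum likelihood estimate of a 1-D Gaussian.

    y : array of observations y[i]
    w : nonnegative weights, e.g. the responsibilities q_i(z) for one component
    """
    w = w / np.sum(w)                 # normalize the weights
    mu = np.sum(w * y)                # weighted mean
    var = np.sum(w * (y - mu) ** 2)   # weighted variance
    return mu, var
```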

Reformulate Policy Search as an Inference Problem

Let's assume that:

  • Binary reward event $R$: observed variable
  • Trajectory $\tau$: unobserved variable
Maximizing the reward implies maximizing the probability of the reward event; hence, our trajectory distribution $p_\theta(\tau)$ needs to assign high probability to trajectories with high reward probability $p(R = 1|\tau)$.

We would like to find a parameter vector $\theta$ that maximizes the probability of the reward event, i.e.

$$\log p_\theta(R) = \log \int p(R|\tau)\, p_\theta(\tau)\, d\tau$$
As for the standard EM algorithm, a variational distribution $q(\tau)$ is used to decompose the log-marginal likelihood into two terms:
$$\log p_\theta(R) = \mathcal{L}_\theta(q) + \mathrm{KL}\big(q(\tau) \,\|\, p_\theta(\tau|R)\big)$$
where the reward-weighted trajectory distribution is
$$p_\theta(\tau|R) = \frac{p(R|\tau)\, p_\theta(\tau)}{p_\theta(R)} = \frac{p(R|\tau)\, p_\theta(\tau)}{\int p(R|\tau)\, p_\theta(\tau)\, d\tau} \propto p(R|\tau)\, p_\theta(\tau)$$
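In practice the binary reward event is usually constructed from a real-valued return. A common (but not the only) choice is the exponential transformation $p(R = 1|\tau) \propto \exp(\beta R(\tau))$, where the inverse temperature $\beta$ is a design parameter; the sketch below is an illustrative assumption rather than a prescription of the survey:

```python
import numpy as np

def reward_event_weights(returns, beta=1.0):
    """Map real-valued returns R(tau) to normalized pseudo-probabilities of the
    binary reward event, assuming p(R = 1 | tau) proportional to exp(beta * R(tau))."""
    returns = np.asarray(returns, dtype=float)
    logits = beta * (returns - returns.max())   # subtract the max for numerical stability
    w = np.exp(logits)
    return w / w.sum()                          # weights q(tau[i]) over the sampled trajectories
```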

E-Step

Update $q(\tau)$ $\Leftarrow$ Minimize $\mathrm{KL}\big(q(\tau) \,\|\, p_\theta(\tau|R)\big)$ $\Leftarrow$ Set $q(\tau) = p_\theta(\tau|R)$

M-Step

$$\begin{aligned}
\theta_{\text{new}} &= \arg\max_\theta \mathcal{L}_\theta(q) \\
&= \arg\max_\theta \int q(\tau) \log \frac{p_\theta(R, \tau)}{q(\tau)}\, d\tau \\
&= \arg\max_\theta \int q(\tau) \log p_\theta(R, \tau)\, d\tau + H(q) \\
&= \arg\max_\theta \underbrace{\int q(\tau) \log\big(p(R|\tau)\, p_\theta(\tau)\big)\, d\tau}_{Q_\theta(q)} \\
&= \arg\max_\theta \int q(\tau) \log p_\theta(\tau)\, d\tau + f(q) \\
&= \arg\min_\theta \int q(\tau) \big({-\log p_\theta(\tau)}\big)\, d\tau \\
&= \arg\min_\theta \left[ \int q(\tau) \log \frac{q(\tau)}{p_\theta(\tau)}\, d\tau + \int q(\tau) \log \frac{1}{q(\tau)}\, d\tau \right] \\
&= \arg\min_\theta \mathrm{KL}\big(q(\tau) \,\|\, p_\theta(\tau)\big)
\end{aligned}$$
i.e.
Update $\theta$ $\Leftarrow$ Maximize $Q_\theta(q)$ $\Leftarrow$ Minimize $\mathrm{KL}\big(q(\tau) \,\|\, p_\theta(\tau)\big)$

EM-based Policy Search Algorithms

The MC-EM algorithm uses a sample-based approximation of the variational distribution $q$: in the E-step, MC-EM minimizes the KL divergence $\mathrm{KL}\big(q(Z) \,\|\, p_\theta(Z|Y)\big)$ by using samples $Z_j \sim p_\theta(Z|Y)$. Subsequently, these samples $Z_j$ are used to estimate the expectation of the complete-data log-likelihood:

$$Q_\theta(q) = \sum_{j=1}^{K} \log p_\theta(Y, Z_j)$$
In terms of policy search, MC-EM methods use samples $\tau[i]$ from the old trajectory distribution $p_{\theta'}$ to represent the variational distribution $q(\tau) \propto p(R|\tau)\, p_{\theta'}(\tau)$ over trajectories. As $\tau[i]$ has already been sampled from $p_{\theta'}(\tau)$, we have $q(\tau[i]) \propto p(R|\tau[i])$. Consequently, in the M-step we maximize:
$$Q_\theta(\theta') = \sum_{\tau[i] \sim p_{\theta'}(\tau)} p(R|\tau[i]) \log p_\theta(\tau[i])$$
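As a sketch of what this sample-based M-step looks like for a linear-Gaussian policy $a = W\varphi(s) + \varepsilon$ (the policy class, ridge term, and function names are illustrative assumptions), maximizing the weighted log-likelihood above reduces to a reward-weighted least-squares problem, which is the idea behind Reward-Weighted Regression:

```python
import numpy as np

def reward_weighted_regression(Phi, A, w, reg=1e-6):
    """One sample-based M-step for a linear-Gaussian policy a = W phi(s) + noise.

    Maximizing the reward-weighted log-likelihood over W is a weighted
    least-squares problem with a closed-form solution.

    Phi : (N, d) matrix of state features phi(s[i])
    A   : (N, k) matrix of executed actions a[i]
    w   : (N,)  nonnegative weights, e.g. p(R | tau[i]) of the episode of sample i
    reg : small ridge term for numerical stability (an added assumption)
    """
    D = np.diag(w)
    W = np.linalg.solve(Phi.T @ D @ Phi + reg * np.eye(Phi.shape[1]),
                        Phi.T @ D @ A)
    return W.T   # (k, d) gain matrix of the updated mean policy
```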

There are episode-based EM algorithms such as Reward-Weighted Regression (RWR) and Cost-regularized Kernel Regression (CrKR), and step-based EM algorithms such as episodic Reward-Weighted Regression (eRWR) and Policy Learning by Weighting Exploration with Returns (PoWER).

Variational Inference-based Methods

The MC-EM approach uses a weighted maximum likelihood estimate to obtain the new parameters $\theta$ of the policy. It averages over several modes of the reward function. Such behavior might result in slow convergence to good policies, as the average of several modes might lie in an area with low reward.

The maximization used for the MC-EM approach is equivalent to minimizing:

$$\mathrm{KL}\big(p(R|\tau)\, p_{\theta'}(\tau) \,\|\, p_\theta(\tau)\big) = \int p(R|\tau)\, p_{\theta'}(\tau) \log \frac{p(R|\tau)\, p_{\theta'}(\tau)}{p_\theta(\tau)}\, d\tau$$
w.r.t. the parameter $\theta$. This minimization is also called the Moment Projection of the reward-weighted trajectory distribution, as it matches the moments of $p_\theta(\tau)$ with the moments of $p(R|\tau)\, p_{\theta'}(\tau)$.

Alternatively, we can use the Information Projection $\arg\min_\theta \mathrm{KL}\big(p_\theta(\tau) \,\|\, p(R|\tau)\, p_{\theta'}(\tau)\big)$ to update the policy. This projection forces the new trajectory distribution $p_\theta(\tau)$ to be zero everywhere the reward-weighted trajectory distribution is zero.
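The practical difference between the two projections is easiest to see on a toy example: fitting a single Gaussian to a bimodal "reward-weighted" target. The moment projection averages over both modes while the information projection concentrates on one of them. The bimodal target and the grid-based optimization below are illustrative choices, not part of the survey:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# A bimodal "reward-weighted" target density on a uniform grid (illustrative numbers).
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
target = 0.5 * norm.pdf(x, -2.0, 0.5) + 0.5 * norm.pdf(x, 2.0, 0.5)
target /= target.sum() * dx

# Moment projection argmin_theta KL(target || N(mu, sigma)): match mean and variance.
mu_m = np.sum(x * target) * dx
sigma_m = np.sqrt(np.sum((x - mu_m) ** 2 * target) * dx)

# Information projection argmin_theta KL(N(mu, sigma) || target): numerical search.
def i_projection_loss(params):
    mu, log_sigma = params
    q = norm.pdf(x, mu, np.exp(log_sigma))
    q /= q.sum() * dx
    return np.sum(q * (np.log(q + 1e-300) - np.log(target + 1e-300))) * dx

res = minimize(i_projection_loss, x0=np.array([1.0, 0.0]))
mu_i, sigma_i = res.x[0], np.exp(res.x[1])

print(f"M-projection: mu = {mu_m:.2f}, sigma = {sigma_m:.2f}")  # spans both modes
print(f"I-projection: mu = {mu_i:.2f}, sigma = {sigma_i:.2f}")  # locks onto one mode
```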

  • Thanks to J. Peters et al. for their great work, A Survey on Policy Search for Robotics.
  • Thanks to Zhou Zhihua for Machine Learning, Tsinghua University Press.