MLlib - Optimization Module - Updater

@(Hadoop & Spark)[machine learning|algorithm|statistics|Spark]

Topic: Updater - SquaredL2Updater

Derivation

  • Optimization equation (regularized sum-of-squares error):
    $$l(x,w,\lambda)=\frac{1}{2}\sum_{n=1}^{N}\left\{t_n-w^T\phi(x_n)\right\}^2+\frac{\lambda}{2}w^Tw$$
  • Gradient computation
    gradient.compute(x_i, y_i, w, g_i)

    $$\frac{\partial l(x,w,\lambda)}{\partial w_i}=g_i+\lambda w_i$$
  • SGD updater (with step size σ)
    $$w_{new}=w_{old}-\sigma\,(g_i+\lambda w_i)$$
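The update rule above can be sketched in plain Scala. This is a hypothetical minimal example over `Array[Double]` (the object and method names are illustrative, not Spark's), showing the rearranged form w_new = (1 − σλ)·w_old − σ·g that the Spark implementation uses:

```scala
// Minimal sketch of one SGD step with L2 regularization:
//   w_new = w_old - sigma * (g + lambda * w_old)
//         = (1 - sigma * lambda) * w_old - sigma * g
object L2UpdateSketch {
  def update(wOld: Array[Double], g: Array[Double],
             sigma: Double, lambda: Double): Array[Double] = {
    require(wOld.length == g.length, "weight and gradient dimensions must match")
    // Shrink the old weights by (1 - sigma*lambda), then take the gradient step.
    wOld.zip(g).map { case (w, gi) => (1.0 - sigma * lambda) * w - sigma * gi }
  }
}
```

For example, with wOld = (1.0, 2.0), g = (0.5, −0.5), σ = 0.1, λ = 0.01, the update yields approximately (0.949, 2.048): each weight is first scaled by 0.999, then moved against the loss gradient.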
Reference

Bishop CM. Pattern Recognition and Machine Learning. (Jordan M, Kleinberg J, Schölkopf B, eds.). Springer; 2006:738. doi:10.1117/1.2819119. Page 144, "Regularized least squares".

Code annotation

// Imports used by the surrounding Spark source (mllib/optimization/Updater.scala):
import breeze.linalg.{axpy => brzAxpy, norm => brzNorm, Vector => BV}
import org.apache.spark.annotation.DeveloperApi
import org.apache.spark.mllib.linalg.{Vector, Vectors}

/**
 * :: DeveloperApi ::
 * Updater for L2 regularized problems.
 *          R(w) = 1/2 ||w||^2
 * Uses a step-size decreasing with the square root of the number of iterations.
 */
@DeveloperApi
class SquaredL2Updater extends Updater {
  override def compute(
      weightsOld: Vector,
      gradient: Vector,
      stepSize: Double,
      iter: Int,
      regParam: Double): (Vector, Double) = {
    // add up both updates from the gradient of the loss (= step) as well as
    // the gradient of the regularizer (= regParam * weightsOld)
    // w' = w - thisIterStepSize * (gradient + regParam * w)
    // w' = (1 - thisIterStepSize * regParam) * w - thisIterStepSize * gradient
    val thisIterStepSize = stepSize / math.sqrt(iter)
    val brzWeights: BV[Double] = weightsOld.toBreeze.toDenseVector
    brzWeights :*= (1.0 - thisIterStepSize * regParam)
    brzAxpy(-thisIterStepSize, gradient.toBreeze, brzWeights)
    val norm = brzNorm(brzWeights, 2.0)

    (Vectors.fromBreeze(brzWeights), 0.5 * regParam * norm * norm)
  }
}
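For intuition, the two pieces that `compute` adds on top of a raw gradient step can be checked numerically. This standalone snippet (hypothetical names, no Spark or Breeze dependency) verifies that the two update forms in the code comments are algebraically equal, and shows the decaying step-size schedule and the returned regularization value:

```scala
object SquaredL2UpdaterCheck {
  def main(args: Array[String]): Unit = {
    // Decaying step size: sigma_t = stepSize / sqrt(iter)
    val stepSize = 1.0
    val sigmas = (1 to 4).map(t => stepSize / math.sqrt(t))

    // The two forms of the update in the code comments are algebraically equal:
    //   w - s*(g + r*w) == (1 - s*r)*w - s*g
    val (w, g, s, r) = (2.0, 0.3, 0.1, 0.5)
    val direct = w - s * (g + r * w)
    val rearranged = (1.0 - s * r) * w - s * g
    assert(math.abs(direct - rearranged) < 1e-12)

    // The second return value of compute is the regularization term:
    //   regParam/2 * ||w||^2  (here a 1-d "vector", so ||w|| = |w|)
    val norm = math.abs(w)
    val regVal = 0.5 * r * norm * norm
    println(s"sigmas=$sigmas direct=$direct regVal=$regVal")
  }
}
```

The rearranged form is what the Breeze code computes: scaling `brzWeights` by `(1 - thisIterStepSize * regParam)` first, then applying `brzAxpy` for the gradient step, avoids materializing the intermediate vector `gradient + regParam * w`.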