Gradient Descent and Stochastic Gradient Descent in Scala

Gradient descent and stochastic gradient descent are among the most commonly used optimization algorithms in machine learning. Their theory is not covered here, since it is easy to find online; see, for example, this post: http://blog.csdn.net/woxincd/article/details/7040944
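For reference, both implementations below minimize the squared error of a linear model; the notation here is my own, since the original post does not spell out the formulas. Batch gradient descent applies, once per iteration over the whole data set,

\theta_j \leftarrow \theta_j + \alpha \sum_i \bigl(y^{(i)} - \theta^\top x^{(i)}\bigr)\, x_j^{(i)}

while stochastic gradient descent applies the same update using a single sample i per step:

\theta_j \leftarrow \theta_j + \alpha \bigl(y^{(i)} - \theta^\top x^{(i)}\bigr)\, x_j^{(i)}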
The Scala implementation is as follows:

object SGD {
        /* Batch gradient descent for a linear model.
         * X: input samples, one row per sample
         * y: target values
         * learnRate: learning rate (step size)
         * iterNum: maximum number of iterations
         * thres: stop once the squared-error loss falls below this threshold
         */
        def gradientDescent(X: Array[Array[Int]], y: Array[Int],
                            learnRate: Double = 0.001, iterNum: Int = 1000,
                            thres: Double = 0.0001): Array[Double] = {
                val theta = new Array[Double](X(0).length)
                var loss = Double.MaxValue
                for (_ <- 0 until iterNum if loss > thres) {
                        // Accumulate the gradient over the whole data set,
                        // then update theta once per iteration.
                        val grad = new Array[Double](theta.length)
                        for (row <- X.indices) {
                                var rowSum = 0.0
                                for (col <- X(row).indices)
                                        rowSum += X(row)(col) * theta(col)
                                val error = y(row) - rowSum
                                for (col <- X(row).indices)
                                        grad(col) += error * X(row)(col)
                        }
                        for (col <- theta.indices)
                                theta(col) += learnRate * grad(col)
                        // Recompute the squared-error loss over all samples.
                        loss = 0.0
                        for (row <- X.indices) {
                                var rowSum = 0.0
                                for (col <- X(row).indices)
                                        rowSum += X(row)(col) * theta(col)
                                loss += (rowSum - y(row)) * (rowSum - y(row))
                        }
                }
                theta
        }

        /* Stochastic gradient descent: update theta from one sample per
         * iteration. Parameters match gradientDescent; samples are visited
         * cyclically (i % X.length) rather than drawn at random.
         */
        def stochasticGradientDescent(X: Array[Array[Int]], y: Array[Int],
                                      learnRate: Double = 0.001, iterNum: Int = 1000,
                                      thres: Double = 0.0001): Array[Double] = {
                val theta = new Array[Double](X(0).length)
                var loss = Double.MaxValue
                for (i <- 0 until iterNum if loss > thres) {
                        val row = i % X.length
                        var rowSum = 0.0
                        for (col <- X(row).indices)
                                rowSum += X(row)(col) * theta(col)
                        val error = y(row) - rowSum
                        for (col <- X(row).indices)
                                theta(col) += learnRate * error * X(row)(col)
                        // Evaluating the full loss each step is costly, but it
                        // keeps the stopping criterion identical to the batch version.
                        loss = 0.0
                        for (r <- X.indices) {
                                var rSum = 0.0
                                for (col <- X(r).indices)
                                        rSum += X(r)(col) * theta(col)
                                loss += (rSum - y(r)) * (rSum - y(r))
                        }
                }
                theta
        }
}
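To make the code easy to try out, here is a minimal usage sketch; the SGDDemo object and the toy data are my own illustration, not part of the original post. The data is generated from y = 1*x0 + 2*x1, so both functions should drive theta toward (1, 2):

object SGDDemo {
        def main(args: Array[String]): Unit = {
                // Hypothetical toy data generated from y = 1*x0 + 2*x1.
                val X = Array(Array(1, 1), Array(1, 2), Array(2, 1), Array(2, 3))
                val y = Array(3, 5, 4, 8)
                println("GD:  " + SGD.gradientDescent(X, y).mkString(", "))
                println("SGD: " + SGD.stochasticGradientDescent(X, y).mkString(", "))
        }
}

With the default learnRate and iterNum the estimates are only approximate; raising iterNum (or the learning rate, within reason) tightens them.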