PyTorch_Day01_Regression


Dear friends!
At all times, hold your head high, chest out, chin tucked, and keep catching up, step by step!

A simple linear equation

y = w*x + b

Loss function

loss = (wx+b-y)^2
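
This is the squared error for a single point; averaged over all N sample points, which is what the code below actually computes, it becomes:

loss = \frac{1}{N}\sum_{i=1}^{N}(w x_i + b - y_i)^2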

def compute_error_for_line_give_points(b, w, points):   # points is an array of (x, y) coordinate pairs
    """
    Compute the mean squared error (loss) over the given points.
    """
    total_error = 0
    for i in range(0, len(points)):
        x = points[i, 0]  # x value of the i-th point
        y = points[i, 1]  # y value of the i-th point
        total_error += (y - (w * x + b)) ** 2  # accumulate the squared error
    return total_error / float(len(points))  # average the loss over all points
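
As a quick sanity check (a hypothetical two-point example, not the data.csv used later), points lying exactly on y = 2x + 1 should give zero loss:

import numpy as np

pts = np.array([[1.0, 3.0], [2.0, 5.0]])  # two points exactly on y = 2x + 1
print(compute_error_for_line_give_points(1, 2, pts))  # -> 0.0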

Gradient equations

\frac{\Delta loss}{\Delta w} = 2(wx+b-y)*x
\frac{\Delta loss}{\Delta b} = 2(wx+b-y)*1
\hat{b} = b - lr*\frac{\Delta loss}{\Delta b}
\hat{w} = w - lr*\frac{\Delta loss}{\Delta w}
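
The code below accumulates these per-point gradients over all N points and averages them. Note that it writes the residual as y - (wx + b), which flips the sign and explains the -(2/N) factor:

\frac{\Delta loss}{\Delta w} = \frac{2}{N}\sum_{i=1}^{N}(w x_i + b - y_i)*x_i = -\frac{2}{N}\sum_{i=1}^{N}(y_i - (w x_i + b))*x_i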

def step_gradient(b_current, w_current, points, learning_rate):
    """
    Perform one gradient-descent update step.
    """
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]  # x and y values of the i-th point
        y = points[i, 1]
        b_gradient += -(2/N) * (y - ((w_current * x) + b_current))  # /N averages the gradient
        w_gradient += -(2/N) * x * (y - ((w_current * x) + b_current))
    new_b = b_current - (learning_rate * b_gradient)  # updated b value (b')
    new_w = w_current - (learning_rate * w_gradient)  # updated w value (w')
    return [new_b, new_w]

Iterating the gradient updates

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    """
    Run step_gradient repeatedly to iterate the parameters.
    """
    b = starting_b
    w = starting_w
    for i in range(num_iterations):    # num_iterations: number of update iterations
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]

Full code

import numpy as np

def compute_error_for_line_give_points(b, w, points):   # points is an array of (x, y) coordinate pairs
    """
    Compute the mean squared error (loss) over the given points.
    """
    total_error = 0
    for i in range(0, len(points)):
        x = points[i, 0]  # x value of the i-th point
        y = points[i, 1]  # y value of the i-th point
        total_error += (y - (w * x + b)) ** 2  # accumulate the squared error
    return total_error / float(len(points))  # average the loss over all points

def step_gradient(b_current, w_current, points, learning_rate):
    """
    Perform one gradient-descent update step.
    """
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]  # x and y values of the i-th point
        y = points[i, 1]
        b_gradient += -(2/N) * (y - ((w_current * x) + b_current))  # /N averages the gradient
        w_gradient += -(2/N) * x * (y - ((w_current * x) + b_current))
    new_b = b_current - (learning_rate * b_gradient)  # updated b value (b')
    new_w = w_current - (learning_rate * w_gradient)  # updated w value (w')
    return [new_b, new_w]

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    """
    Run step_gradient repeatedly to iterate the parameters.
    """
    b = starting_b
    w = starting_w
    for i in range(num_iterations):    # num_iterations: number of update iterations
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]

def main():
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.0001
    initial_b = 0
    initial_w = 0
    num_iterations = 1000
    print("Starting gradient descent at b = {0}, w = {1}, error = {2}"
          .format(initial_b, initial_w, compute_error_for_line_give_points(initial_b, initial_w, points)))
    print("Running...")
    [b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b = {1}, w = {2}, error = {3}"
          .format(num_iterations, b, w,
                  compute_error_for_line_give_points(b, w, points)))

if __name__ == '__main__':
    main()
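
The post does not include data.csv. To reproduce the run, a hypothetical stand-in on a similar scale can be generated like this (the slope, intercept, and noise level are my assumptions, so the resulting numbers will differ from the output below):

import numpy as np

# Hypothetical stand-in for data.csv: 100 noisy points around y = 1.5x + 0.1.
rng = np.random.default_rng(0)
x = rng.uniform(20, 80, size=100)
y = 1.5 * x + 0.1 + rng.normal(0, 10, size=100)
np.savetxt("data.csv", np.column_stack([x, y]), delimiter=",")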

Output

Starting gradient descent at b = 0, w = 0, error = 5565.107834483211
Running...
After 1000 iterations b = 0.08893651993741346, w = 1.4777440851894448, error = 112.61481011613473
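
The per-point loop in step_gradient is easy to vectorize. Here is a minimal NumPy sketch of the same update (assuming points is the (N, 2) array produced by np.genfromtxt):

import numpy as np

def step_gradient_vectorized(b, w, points, learning_rate):
    """One gradient-descent step, computed over all points at once."""
    x, y = points[:, 0], points[:, 1]          # split the columns into x and y
    residual = y - (w * x + b)                 # y - (w*x + b) for every point
    b_gradient = -2.0 * residual.mean()        # averaged gradient w.r.t. b
    w_gradient = -2.0 * (x * residual).mean()  # averaged gradient w.r.t. w
    return [b - learning_rate * b_gradient, w - learning_rate * w_gradient]

Swapping this in for step_gradient inside gradient_descent_runner should give the same numbers, just faster on large arrays.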
