Machine Learning: Implementing Linear Regression in Python

Linear Regression

A rough note of what was covered in today's class…

1 Loss Function

$J(a,b)=\frac{1}{2n}\displaystyle\sum_{i=1}^{n}(y_i-\hat{y}_i)^2$
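As a quick sanity check (a made-up two-point example, not from the original notes): for the data points $(x_1,y_1)=(1,2)$ and $(x_2,y_2)=(2,3)$ with $a=1$, $b=0$, the predictions are $\hat y_1=1$ and $\hat y_2=2$, so $J(1,0)=\frac{1}{2\cdot 2}\big[(2-1)^2+(3-2)^2\big]=0.5$.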

2 Optimization Method

$\frac{\partial J}{\partial a}=\frac{\partial}{\partial a}\left[\frac{1}{2n}\displaystyle\sum_{i=1}^{n}(y_i-\hat{y}_i)^2\right]=\frac{1}{n}\displaystyle\sum_{i=1}^{n}(y_i-ax_i-b)\frac{\partial(y_i-ax_i-b)}{\partial a}$
$=\frac{1}{n}\displaystyle\sum_{i=1}^{n}(y_i-ax_i-b)(-x_i)=\frac{1}{n}\displaystyle\sum_{i=1}^{n}x_i(\hat{y}_i-y_i)$

$\frac{\partial J}{\partial b}=\frac{\partial}{\partial b}\left[\frac{1}{2n}\displaystyle\sum_{i=1}^{n}(y_i-\hat{y}_i)^2\right]=\frac{1}{n}\displaystyle\sum_{i=1}^{n}(y_i-ax_i-b)\frac{\partial(y_i-ax_i-b)}{\partial b}$
$=\frac{1}{n}\displaystyle\sum_{i=1}^{n}(y_i-ax_i-b)(-1)=\frac{1}{n}\displaystyle\sum_{i=1}^{n}(\hat{y}_i-y_i)$

Update $a$ and $b$ (gradient descent), where $\alpha$ is the learning rate:
$a = a-\alpha\frac{\partial J}{\partial a}$
$b = b-\alpha\frac{\partial J}{\partial b}$
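To make sure the derivatives above are right, they can be compared against finite differences. The sketch below (my own helper names loss and analytic_grad with made-up toy data, not part of the original notes) checks that the analytic gradients match a central-difference estimate:

import numpy as np

# Finite-difference check of the gradient formulas above (a sketch;
# the names and the toy data are illustrative only).
def loss(x, y, a, b):
    n = len(x)
    return 0.5 / n * np.sum((y - (a * x + b)) ** 2)

def analytic_grad(x, y, a, b):
    n = len(x)
    y_hat = a * x + b
    da = np.sum((y_hat - y) * x) / n   # dJ/da
    db = np.sum(y_hat - y) / n         # dJ/db
    return da, db

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
a, b, eps = 0.5, 0.0, 1e-6

da, db = analytic_grad(x, y, a, b)
num_da = (loss(x, y, a + eps, b) - loss(x, y, a - eps, b)) / (2 * eps)
num_db = (loss(x, y, a, b + eps) - loss(x, y, a, b - eps)) / (2 * eps)
print(da, num_da)  # the two values should agree to several decimal places
print(db, num_db)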

3 Code Implementation

'''
Description: 
Author: Weijian Ma
Date: 2020-09-16 18:47:40
LastEditTime: 2020-09-16 19:23:12
LastEditors: Weijian Ma
'''
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

## Initialize the data and parameters
x = [13854,12213,11009,10655,9503] 
x = np.reshape(x,newshape=(5,1)) / 10000.0
y =  [21332, 20162, 19138, 18621, 18016] 
y = np.reshape(y,newshape=(5,1)) / 10000.0
a = 1 
b = 1
alpha = 1e-1
n = len(x)

## Model
def myModel(x):
    return a*x + b

## Loss function
def costFunction(x, y, a, b):
    return 0.5/n*(np.square(a*x+b-y)).sum()

## Optimization: one gradient-descent step
def opt(x, y, a, b):
    yi = myModel(x)                  # current predictions y_hat
    da = (1/n) * ((yi-y)*x).sum()    # dJ/da
    db = (1/n) * ((yi-y).sum())      # dJ/db
    a = a - alpha*da
    b = b - alpha*db
    return a, b

## Train the model
fig = plt.figure(figsize=(8,4))
sub01 = plt.subplot(121)
sub02 = plt.subplot(122)
costList = []

for i in range(50):
    print('Iteration {}'.format(i+1))
    cost = costFunction(x, y, a, b)
    costList.append(cost)
    a, b = opt(x, y, a, b)
    sub01.cla()
    sub02.cla()
    sub01.plot(x, a*x+b)
    sub01.scatter(x, y)
    sub01.set_xlabel('x')
    sub01.set_ylabel('y')
    sub01.set_title('a={0}, b={1}'.format(a, b))
    sub02.set_xlabel('Iteration')
    sub02.set_ylabel('Loss')
    sub02.set_title('Current loss: {}'.format(cost))
    sub02.plot(costList)
    plt.pause(0.001)
plt.show()
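Since simple linear regression also has a closed-form solution, the learned a and b can be compared against an ordinary least-squares fit. A minimal sketch (using numpy.polyfit, which the original code does not use):

import numpy as np

# Closed-form least-squares fit for comparison (a sketch, not part of the
# original script): numpy.polyfit with deg=1 returns slope and intercept.
x = np.array([13854, 12213, 11009, 10655, 9503]) / 10000.0
y = np.array([21332, 20162, 19138, 18621, 18016]) / 10000.0
a_ls, b_ls = np.polyfit(x, y, deg=1)
print('closed-form fit: a={:.4f}, b={:.4f}'.format(a_ls, b_ls))

Given enough iterations at this learning rate, the gradient-descent result should approach roughly the same line.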

Results

(Figure: left panel shows the fitted line against the data points; right panel shows the loss value over training iterations.)
