Chapter 12: Stochastic Gradient Descent (SGD)

12.1 The Basic Linear Model

        Next, let's build this simple linear model in code:

y = 0.4 + 0.8*x

def make_prediction(input_row, coefficients):
    # Initialize the prediction with the intercept (the first coefficient)
    out_put_y_hat = coefficients[0]
    # Add the contribution of each input feature (the last column is the true y)
    for i in range(len(input_row) - 1):
        out_put_y_hat += coefficients[i + 1] * input_row[i]
    return out_put_y_hat

# Main program
if __name__ == '__main__':
    test_dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
    test_coefficients = [0.4, 0.8]

    for row in test_dataset:
        y_hat = make_prediction(row, test_coefficients)
        print("True y value = %.3f, our prediction = %.3f" % (row[-1], y_hat))

12.2 Gradient Descent Optimization

        epochs: the number of passes over the training data. The model learns from the training data repeatedly, iteratively updating the coefficients on each pass:

b = b - learning_rate * error * x

        (1) Loop over all the epochs.

        (2) Within each epoch, loop over every row of the training set.

        (3) On each iteration, update (tune) the coefficients.

b_0(t+1) = b_0(t) - learning_rate * error(t)

b_1(t+1) = b_1(t) - learning_rate * error(t) * x_1(t)

error = prediction - true_value
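The update rules above can be checked by hand for a single training row. The sketch below (using the first row of the toy dataset from 12.1 and the same learning rate used later) performs exactly one update step:

```python
# One SGD update step for the model y_hat = b0 + b1 * x,
# following the update rules above.
learning_rate = 0.001
b0, b1 = 0.0, 0.0   # coefficients start at zero
row = [1, 1]        # x = 1, true y = 1

y_hat = b0 + b1 * row[0]                    # prediction = 0.0
error = y_hat - row[-1]                     # prediction - true value = -1.0
b0 = b0 - learning_rate * error             # 0.0 - 0.001 * (-1.0) = 0.001
b1 = b1 - learning_rate * error * row[0]    # 0.0 - 0.001 * (-1.0) * 1 = 0.001

print(b0, b1)  # 0.001 0.001
```

Both coefficients move slightly toward values that would reduce the error on this row; repeating this over many rows and epochs is the whole algorithm.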

# -*- coding: utf-8 -*-
"""
Created on Mon May  9 09:02:31 2022

@author: xiaofeng
"""

def make_prediction(input_row, coefficients):
    out_put_y_hat = coefficients[0]
    for i in range(len(input_row) - 1):
        out_put_y_hat += coefficients[i + 1] * input_row[i]
    return out_put_y_hat


def using_sgd_method_to_calculate_coefficients(training_dataset, learning_rate, n_times_epoch):
    # Initialize all coefficients (intercept + one per input feature) to zero
    coefficients = [0.0 for i in range(len(training_dataset[0]))]
    for epoch in range(n_times_epoch):
        the_sum_of_error = 0
        for row in training_dataset:
            y_hat = make_prediction(row, coefficients)
            error = y_hat - row[-1]
            the_sum_of_error += error ** 2
            # Update the intercept, then each feature coefficient
            coefficients[0] = coefficients[0] - learning_rate * error
            for i in range(len(row) - 1):
                coefficients[i + 1] = coefficients[i + 1] - learning_rate * error * row[i]
        print("Epoch [%d], learning rate [%.3f], sum of squared error [%.3f]" % (
            epoch, learning_rate, the_sum_of_error))
    return coefficients


# Main program
if __name__ == '__main__':
    your_training_dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
    your_model_learning_rate = 0.001
    your_n_epoch = 500
    your_coefficients = using_sgd_method_to_calculate_coefficients(your_training_dataset,
                                                                   your_model_learning_rate,
                                                                   your_n_epoch)
    print(your_coefficients)
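Once training finishes, the returned coefficients can be plugged straight back into `make_prediction`. The self-contained sketch below (a condensed version of the training loop above, without per-epoch logging; the exact learned values depend on the learning rate and epoch count) trains on the same toy dataset and then predicts each row:

```python
def make_prediction(input_row, coefficients):
    # y_hat = b0 + b1*x1 + ... over the feature columns of the row
    y_hat = coefficients[0]
    for i in range(len(input_row) - 1):
        y_hat += coefficients[i + 1] * input_row[i]
    return y_hat

def sgd_train(training_dataset, learning_rate, n_epochs):
    # Same SGD loop as above, returning only the final coefficients
    coefficients = [0.0] * len(training_dataset[0])
    for _ in range(n_epochs):
        for row in training_dataset:
            error = make_prediction(row, coefficients) - row[-1]
            coefficients[0] -= learning_rate * error
            for i in range(len(row) - 1):
                coefficients[i + 1] -= learning_rate * error * row[i]
    return coefficients

dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
coeffs = sgd_train(dataset, 0.001, 500)

# With enough epochs the coefficients drift toward the least-squares fit,
# so the predictions should track the true y values reasonably well.
for row in dataset:
    print("true =", row[-1], "predicted =", round(make_prediction(row, coeffs), 3))
```

A quick sanity check is that the trained coefficients give a smaller sum of squared errors on the training data than the all-zero starting point did.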
