Artificial Intelligence in Practice, Assignment 3, 杨佳宁

Gradient Descent with Minibatches

Assignment Requirements

| Item | Content |
| --- | --- |
| Which course this assignment belongs to | Class blog |
| Where the assignment requirements are | Assignment requirements |
| My goal in this course | To gain a working understanding of artificial intelligence |
| How this assignment helps me reach that goal | It gives me a platform for deeper study of, and exchange about, artificial intelligence |
| Assignment body | See below |
| Other references | ai-edu/B-教学案例与实践/B6-神经网络基本原理简明教程 (the gradient-descent sample code) |

Assignment Body

1. Implementing minibatch gradient descent in Python
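
Before the code, a quick note on the math it implements: for each minibatch of $m$ samples (here $m$ is batch_size), the forward pass, the mean-squared-error loss, and the gradients computed in BackPropagationBatch below are

$$z_i = w x_i + b, \qquad J = \frac{1}{2m}\sum_{i=1}^{m}(z_i - y_i)^2$$

$$\frac{\partial J}{\partial w} = \frac{1}{m}\sum_{i=1}^{m}(z_i - y_i)\,x_i, \qquad \frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}(z_i - y_i)$$

and each batch updates the parameters once: $w \leftarrow w - \eta\,\partial J/\partial w$, $b \leftarrow b - \eta\,\partial J/\partial b$. In the code these appear in vectorized form as dZ = batch_z - batch_y, dW = np.dot(dZ, batch_x.T)/m, and dB = dZ.sum(axis=1, keepdims=True)/m.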

Code
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path

x_data_name = "TemperatureControlXData.dat"
y_data_name = "TemperatureControlYData.dat"

class CData(object):
    def __init__(self, loss, w, b, epoch, iteration):
        self.loss = loss
        self.w = w
        self.b = b
        self.epoch = epoch
        self.iteration = iteration

def ReadData():
    Xfile = Path(x_data_name)
    Yfile = Path(y_data_name)
    if Xfile.exists() and Yfile.exists():
        X = np.load(Xfile)
        Y = np.load(Yfile)
        return X.reshape(1,-1),Y.reshape(1,-1)
    else:
        return None,None

def ForwardCalculationBatch(W,B,batch_x):
    Z = np.dot(W, batch_x) + B
    return Z

def BackPropagationBatch(batch_x, batch_y, batch_z):
    m = batch_x.shape[1]
    dZ = batch_z - batch_y
    dB = dZ.sum(axis=1, keepdims=True)/m
    dW = np.dot(dZ, batch_x.T)/m
    return dW, dB

def UpdateWeights(w, b, dW, dB, eta):
    w = w - eta*dW
    b = b - eta*dB
    return w,b

def CheckLoss(W, B, X, Y):
    m = X.shape[1]
    Z = np.dot(W, X) + B
    LOSS = (Z - Y)**2
    loss = LOSS.sum()/m/2
    return loss

def GetBatchSamples(X, Y, batch_size, iteration):
    # Randomly draw batch_size distinct samples, then remove them from X and Y
    # so they cannot be drawn again within the current epoch.
    # (iteration is kept for signature compatibility but is unused.)
    K = np.random.choice(X.shape[1], batch_size, replace=False)
    batch_x = X[:, K].reshape(1, batch_size)
    batch_y = Y[:, K].reshape(1, batch_size)
    # np.delete returns a new array, so the result must be assigned back
    X = np.delete(X, K, axis=1)
    Y = np.delete(Y, K, axis=1)
    return batch_x, batch_y, X, Y

def ShowLossHistory(dict_loss):
    # the dict keys are the recorded loss values, in insertion (iteration) order
    loss = list(dict_loss.keys())

    # skip the first 30 iterations so the steep initial drop does not dominate
    plt.plot(loss[30:800])
    plt.xlabel("iteration")
    plt.ylabel("loss")
    plt.show()


if __name__ == '__main__':

    eta=0.1
    max_epoch=50
    batch_size=10
    
    W = np.zeros((1, 1))
    B = np.zeros((1, 1))

    loss = 5    # placeholder; overwritten after the first update
    dict_loss = {}
   
    X, Y = ReadData()

    num_example = X.shape[1]
    num_feature = X.shape[0]

    max_iteration = num_example // batch_size
    for epoch in range(max_epoch):
        check_X = X
        check_Y = Y
        for iteration in range(max_iteration):
            batch_x, batch_y, check_X, check_Y = GetBatchSamples(check_X, check_Y, batch_size, iteration)
            batch_z = ForwardCalculationBatch(W, B, batch_x)
            dW, dB = BackPropagationBatch(batch_x, batch_y, batch_z)
            W, B = UpdateWeights(W, B, dW, dB, eta)
            # track the loss over the full data set after each update;
            # dict_loss is keyed by the loss value itself, so identical
            # losses overwrite each other (kept from the original design)
            loss = CheckLoss(W, B, X, Y)
            dict_loss[loss] = CData(loss, W, B, epoch, iteration)
    ShowLossHistory(dict_loss)
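
Note that ReadData returns None, None when the two .dat files are missing, and the main block would then crash at X.shape. If the original data files are unavailable, here is a minimal sketch for generating stand-in data. It assumes the originals are plain NumPy arrays written with np.save (which matches the np.load calls above); the slope 2, intercept 3, and sample count 200 are arbitrary choices, not the real data.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)                   # 200 scalar samples
y = 2.0 * x + 3.0 + rng.normal(0.0, 0.1, 200)    # noisy line: w≈2, b≈3

# np.save appends ".npy" to a bare filename, so write through file objects
# to keep the exact names the script expects
with open("TemperatureControlXData.dat", "wb") as f:
    np.save(f, x)
with open("TemperatureControlYData.dat", "wb") as f:
    np.save(f, y)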
Output

[Figure: loss history for batch_size=5]

[Figure: loss history for batch_size=10]

[Figure: loss history for batch_size=15]
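
To reproduce the three curves above in a single figure, here is a sketch that wraps the training loop in a helper and reuses the functions defined above. The name train is introduced here for illustration; it is not part of the original script.

def train(X, Y, batch_size, eta=0.1, max_epoch=50):
    # run the same minibatch loop as in __main__ and return the loss history
    W = np.zeros((1, 1))
    B = np.zeros((1, 1))
    history = []
    max_iteration = X.shape[1] // batch_size
    for epoch in range(max_epoch):
        check_X, check_Y = X, Y
        for iteration in range(max_iteration):
            batch_x, batch_y, check_X, check_Y = GetBatchSamples(check_X, check_Y, batch_size, iteration)
            batch_z = ForwardCalculationBatch(W, B, batch_x)
            dW, dB = BackPropagationBatch(batch_x, batch_y, batch_z)
            W, B = UpdateWeights(W, B, dW, dB, eta)
            history.append(CheckLoss(W, B, X, Y))
    return history

X, Y = ReadData()
for bs in (5, 10, 15):
    plt.plot(train(X, Y, bs), label="batch_size=%d" % bs)
plt.xlabel("iteration")
plt.ylabel("loss")
plt.legend()
plt.show()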

2. Questions about the 2D contour plot of the loss function

- 1. Why are the contours ellipses rather than circles? How can the plot be made circular?
    - Because the two parameters do not affect the loss equally: the surface is steeper along the w axis than along the b axis. If the two influences are made equal (for example by rescaling the inputs), or if only a single parameter is used, the contours become circles. See the sketch after this list.
- 2. Why is the center an elliptical region rather than a single point?
    - Because the samples are scattered rather than lying exactly on one line, the loss is nearly flat around the optimum, so at the plot's finite contour resolution the innermost level shows up as a small elliptical region instead of a point.
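
To see the ellipse directly, here is a minimal sketch that evaluates the same loss formula as CheckLoss on a grid of (w, b) values centered on the trained parameters. It assumes X, Y, W, B from the script above are still in scope; the grid half-width of 1.0 is an arbitrary choice.

w0, b0 = W[0, 0], B[0, 0]
w_grid = np.linspace(w0 - 1.0, w0 + 1.0, 100)
b_grid = np.linspace(b0 - 1.0, b0 + 1.0, 100)
WW, BB = np.meshgrid(w_grid, b_grid)

m = X.shape[1]
Z = WW[..., None] * X[0] + BB[..., None]         # predictions on the grid, shape (100, 100, m)
LOSS = ((Z - Y[0]) ** 2).sum(axis=-1) / m / 2    # same formula as CheckLoss

plt.contour(WW, BB, LOSS, levels=20)
plt.xlabel("w")
plt.ylabel("b")
plt.show()

The elongation appears because the curvature of the loss along w is (1/m)·Σx_i² while the curvature along b is 1; rescaling the inputs so these match (and centering them so the cross term vanishes) turns the contours into circles.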

Reposted from: https://www.cnblogs.com/yjn200/p/10595470.html
