This amounts to differentiating f(x) with respect to x: when the derivative is positive, w is decreased, so the objective keeps moving toward the minimum.
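Spelled out for the linear model y_pred = x·w with the mean-squared-error cost used below (a sketch; the learning-rate symbol \alpha is my own notation, the code simply uses 0.01):

    w \leftarrow w - \alpha \frac{\partial \text{cost}}{\partial w},
    \qquad
    \frac{\partial \text{cost}}{\partial w}
      = \frac{1}{N} \sum_{n=1}^{N} 2\, x_n \,(x_n w - y_n)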
Stochastic gradient descent: randomly take a subset of the samples in place of the whole dataset. This is the usual approach in deep learning (the second code example below updates w one sample at a time).
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = 1.0  # initial guess for the weight

def forward(x):
    return x * w

# Mean squared error (cost) over the whole dataset
def cost(xs, ys):
    total = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        total += (y_pred - y) ** 2
    return total / len(xs)

# Analytic gradient of the cost with respect to w:
# d(cost)/dw = (1/N) * sum(2 * x * (x * w - y))
def gradient(xs, ys):
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)

print("Predict (before training)", 4, forward(4))

epoch_list = []
mse_list = []
for epoch in range(1000):
    cost_val = cost(x_data, y_data)
    grad_val = gradient(x_data, y_data)
    w -= 0.01 * grad_val  # gradient-descent update, learning rate 0.01
    print("Epoch:", epoch, "w=", w, "loss=", cost_val)
    epoch_list.append(epoch)
    mse_list.append(cost_val)

print("Predict (after training)", 4, forward(4))

plt.plot(epoch_list, mse_list)
plt.ylabel("cost")
plt.xlabel("epoch")
plt.show()
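As a sanity check on the analytic gradient above, it can be compared against a central finite difference. A minimal sketch (the helper numerical_gradient and the step size h are my own additions, not from the original; it reuses cost, x_data, y_data, and the global w defined above):

import copy  # not strictly needed; w is a float and restored in place

def numerical_gradient(xs, ys, h=1e-6):
    # Perturb the global w by +/- h and difference the cost values.
    global w
    w += h
    cost_plus = cost(xs, ys)
    w -= 2 * h
    cost_minus = cost(xs, ys)
    w += h  # restore w to its original value
    return (cost_plus - cost_minus) / (2 * h)

print("analytic :", gradient(x_data, y_data))
print("numerical:", numerical_gradient(x_data, y_data))
# The two values should agree to several decimal places.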
The epoch-by-epoch, per-sample training style used in actual deep learning:
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = 1.0

def forward(x):
    return x * w

# Loss for a single sample
def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

# Gradient of the single-sample loss with respect to w:
# d(loss)/dw = 2 * x * (x * w - y)
def gradient(x, y):
    return 2 * x * (x * w - y)

print("Predict (before training)", 4, forward(4))

epoch_list = []
loss_list = []
for epoch in range(100):
    for x, y in zip(x_data, y_data):
        grad = gradient(x, y)
        w = w - 0.01 * grad  # update w after every single sample
        print("\tgrad:", x, y, grad)
        l = loss(x, y)
    print("epoch:", epoch, "w=", w)
    epoch_list.append(epoch)
    loss_list.append(l)  # record the last sample's loss for this epoch

print("Predict (after training)", 4, forward(4))

plt.plot(epoch_list, loss_list)
plt.ylabel("loss")
plt.xlabel("epoch")
plt.show()
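Strictly speaking, the loop above visits the samples in the same fixed order every epoch; the "stochastic" part usually comes from shuffling. A minimal sketch of that variant (random.shuffle is the only addition; it reuses gradient, w, x_data, and y_data from the script above):

import random

for epoch in range(100):
    # Shuffle the sample order each epoch; this randomness is where
    # the "stochastic" in stochastic gradient descent comes from.
    pairs = list(zip(x_data, y_data))
    random.shuffle(pairs)
    for x, y in pairs:
        w = w - 0.01 * gradient(x, y)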
So each update draws only a part of the whole dataset for training; such a subset is called a batch (mini-batch).
number_of_batches × batch_size = total size of the dataset (when batch_size divides it evenly)
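To make that relation concrete, here is a minimal mini-batch sketch on the same toy data (batch_size = 1 is chosen only because the toy set has 3 samples; the slicing logic is the point; it reuses x_data, y_data, and w from above):

# Mini-batch gradient descent: split the dataset into batches and
# update w once per batch. num_batches * batch_size == len(x_data)
# when batch_size divides the dataset size evenly.
batch_size = 1
num_batches = len(x_data) // batch_size  # 3 batches per epoch here

for epoch in range(100):
    for b in range(num_batches):
        xs = x_data[b * batch_size:(b + 1) * batch_size]
        ys = y_data[b * batch_size:(b + 1) * batch_size]
        # Average the per-sample gradients over the mini-batch
        grad = sum(2 * x * (x * w - y) for x, y in zip(xs, ys)) / len(xs)
        w -= 0.01 * grad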