[TensorFlow Self-Study 2] Basic Operations + Activation Functions + Loss Functions + Regularization to Mitigate Overfitting + Optimizers (SGD, Adam, etc.)

2.1 Prerequisites

tf.where() – conditional selection: returns A where the condition is true and B where it is false

tf.where(condition, A (returned where true), B (returned where false))
import tensorflow as tf

a=tf.constant([1,2,3,1,1])
b=tf.constant([0,1,3,4,5])
c=tf.where(tf.greater(a,b),a,b) # element-wise: where a>b take the value from a, otherwise from b (i.e. output the larger element of the two arrays)
print("c: ",c)
c:  tf.Tensor([1 2 3 4 5], shape=(5,), dtype=int32)

np.random.RandomState.rand() – returns random numbers in [0, 1)

np.random.RandomState.rand(dims) # returns a scalar when dims is empty
import numpy as np

rdm=np.random.RandomState(seed=1) # with a fixed seed, the same random numbers are generated on every run
a=rdm.rand() # returns a scalar
b=rdm.rand(2,3) # returns a random matrix with 2 rows and 3 columns
print("a:",a)
print("b:",b)
a: 0.417022004702574
b: [[7.20324493e-01 1.14374817e-04 3.02332573e-01]
 [1.46755891e-01 9.23385948e-02 1.86260211e-01]]

np.vstack() – stacks two arrays vertically

np.vstack((array1, array2))
import numpy as np

a=np.array([1,2,3])
b=np.array([4,5,6])
c=np.vstack((a,b))
print("c:\n",c)
c:
 [[1 2 3]
 [4 5 6]]

np.mgrid, np.ravel(), np.c_[] – generating a grid

np.mgrid[start:stop:step, start:stop:step, ...]
x.ravel() -- flattens x into a one-dimensional array (straightens the matrix out)
np.c_[array1, array2, ...] -- pairs up the returned grid points
import numpy as np

x,y=np.mgrid[1:3:1,2:4:0.5]
grid=np.c_[x.ravel(),y.ravel()]
print("x:",x)
print("y:",y)
print("grid:\n",grid)
x: [[1. 1. 1. 1.]
 [2. 2. 2. 2.]]
y: [[2.  2.5 3.  3.5]
 [2.  2.5 3.  3.5]]
grid:
 [[1.  2. ]
 [1.  2.5]
 [1.  3. ]
 [1.  3.5]
 [2.  2. ]
 [2.  2.5]
 [2.  3. ]
 [2.  3.5]]

2.2 Neural Network (NN) Complexity

NN complexity is usually described by the number of layers and the number of parameters.

Space complexity:

Number of layers = number of hidden layers + 1 output layer (the input layer is not counted)

Total parameters = total W + total b

Time complexity:

Number of multiply-add operations
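As a quick worked example (using the 2-11-1 network that appears in section 2.5 below): number of layers = 1 hidden layer + 1 output layer = 2; total W = 2×11 + 11×1 = 33, total b = 11 + 1 = 12, so total parameters = 45.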

Choosing the learning rate: if it is too large, the optimum cannot be reached; if it is too small, many iterations are needed before it is found.

Exponentially decaying learning rate: start with a relatively large learning rate to approach the optimum quickly, then gradually reduce it so the model stays stable in the later stages of training.

Exponentially decayed learning rate = initial learning rate × decay rate ^ (current epoch / decay period in epochs)
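For example, with LR_BASE=0.2, LR_DECAY=0.99 and LR_STEP=1 (the values used in the code below), the learning rate at epoch 10 is 0.2 × 0.99^10 ≈ 0.181.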

epoch=40
LR_BASE=0.2
LR_DECAY=0.99
LR_STEP=1

lr=LR_BASE*LR_DECAY**(epoch/LR_STEP) # the decay is computed inside the training loop
import tensorflow as tf

w=tf.Variable(tf.constant(5,dtype=tf.float32))
epoch=40
LR_BASE=0.2
LR_DECAY=0.99
LR_STEP=1

for epoch in range(epoch): # number of passes over the dataset
    lr=LR_BASE*LR_DECAY**(epoch/LR_STEP) # the decay is computed inside the loop
    with tf.GradientTape() as tape: # the with block records how loss depends on w, so the gradient can be taken
        loss=tf.square(w+1) # the loss function is $L=(w+1)^2$
    grads=tape.gradient(loss,w) # .gradient(A, B) differentiates A with respect to B

    w.assign_sub(lr*grads) # subtract the gradient from the parameter in place (move in the negative gradient direction)
    print("After %s epoch, w is %f, loss is %f" %(epoch,w.numpy(),loss))

# the optimum is found efficiently
After 0 epoch, w is 2.600000, loss is 36.000000
After 1 epoch, w is 1.174400, loss is 12.959999
After 2 epoch, w is 0.321948, loss is 4.728015
After 3 epoch, w is -0.191126, loss is 1.747547
After 4 epoch, w is -0.501926, loss is 0.654277
After 5 epoch, w is -0.691392, loss is 0.248077
After 6 epoch, w is -0.807611, loss is 0.095239
After 7 epoch, w is -0.879339, loss is 0.037014
After 8 epoch, w is -0.923874, loss is 0.014559
After 9 epoch, w is -0.951691, loss is 0.005795
After 10 epoch, w is -0.969167, loss is 0.002334
After 11 epoch, w is -0.980209, loss is 0.000951
After 12 epoch, w is -0.987226, loss is 0.000392
After 13 epoch, w is -0.991710, loss is 0.000163
After 14 epoch, w is -0.994591, loss is 0.000069
After 15 epoch, w is -0.996452, loss is 0.000029
After 16 epoch, w is -0.997660, loss is 0.000013
After 17 epoch, w is -0.998449, loss is 0.000005
After 18 epoch, w is -0.998967, loss is 0.000002
After 19 epoch, w is -0.999308, loss is 0.000001
After 20 epoch, w is -0.999535, loss is 0.000000
After 21 epoch, w is -0.999685, loss is 0.000000
After 22 epoch, w is -0.999786, loss is 0.000000
After 23 epoch, w is -0.999854, loss is 0.000000
After 24 epoch, w is -0.999900, loss is 0.000000
After 25 epoch, w is -0.999931, loss is 0.000000
After 26 epoch, w is -0.999952, loss is 0.000000
After 27 epoch, w is -0.999967, loss is 0.000000
After 28 epoch, w is -0.999977, loss is 0.000000
After 29 epoch, w is -0.999984, loss is 0.000000
After 30 epoch, w is -0.999989, loss is 0.000000
After 31 epoch, w is -0.999992, loss is 0.000000
After 32 epoch, w is -0.999994, loss is 0.000000
After 33 epoch, w is -0.999996, loss is 0.000000
After 34 epoch, w is -0.999997, loss is 0.000000
After 35 epoch, w is -0.999998, loss is 0.000000
After 36 epoch, w is -0.999999, loss is 0.000000
After 37 epoch, w is -0.999999, loss is 0.000000
After 38 epoch, w is -0.999999, loss is 0.000000
After 39 epoch, w is -0.999999, loss is 0.000000
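As an alternative to computing the decay by hand, TF2 provides a built-in schedule that implements the same formula. A minimal sketch, not part of the original example:

import tensorflow as tf

# lr(step) = 0.2 * 0.99 ** (step / 1), the same formula as above
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.2, decay_steps=1, decay_rate=0.99)

for epoch in range(5):
    print(float(lr_schedule(epoch)))  # 0.2, 0.198, 0.19602, ...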

2.3 Activation Functions

1) Sigmoid function – tf.nn.sigmoid(x)
$f(x)=\frac{1}{1+e^{-x}}$
Prone to vanishing gradients (many factors between 0 and 0.25 are multiplied together), converges slowly, and the exponentiation is expensive, so training takes long.

2) Tanh function – tf.math.tanh(x)
$f(x)=\frac{1-e^{-2x}}{1+e^{-2x}}$
Zero-mean output, but still suffers from vanishing gradients and expensive exponentiation.

3) ReLU function – tf.nn.relu(x)
$f(x)=\max(x,0)$
Solves the vanishing-gradient problem (in the positive region), is cheap to compute, and converges quickly.
Output is not zero-mean, which slows convergence; Dead ReLU problem (some neurons may never be activated, so their parameters are never updated).

4) Leaky ReLU function – tf.nn.leaky_relu(x)
$f(x)=\max(\alpha x, x)$
Solves the Dead ReLU problem, although in practice it is not always better than ReLU.
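A minimal sketch showing how the four activation functions above are called in TF2 (the input tensor is made up for illustration):

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
print(tf.nn.sigmoid(x))      # values in (0, 1)
print(tf.math.tanh(x))       # values in (-1, 1), zero-mean
print(tf.nn.relu(x))         # negative inputs become 0
print(tf.nn.leaky_relu(x))   # negative inputs are scaled by alpha (default 0.2)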

✨Tips:

Prefer ReLU; use a relatively small learning rate; standardize the input features (towards a standard normal distribution); center the initial parameters (draw the random initial weights from a normal distribution with zero mean and standard deviation $\sqrt{2/\text{fan\_in}}$, where fan_in is the number of input features of the layer).
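A minimal sketch of that initialization, with illustrative layer sizes (fan_in and fan_out are assumptions for the example):

import tensorflow as tf
import numpy as np

fan_in, fan_out = 4, 3  # illustrative layer sizes
w = tf.Variable(tf.random.normal([fan_in, fan_out],
                                 mean=0.0, stddev=np.sqrt(2 / fan_in)))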

2.4 Loss Functions (loss)

loss: the gap between the predicted value (y) and the known ground-truth value (y_)

The NN optimization goal is to minimize loss – common choices are MSE (Mean Squared Error), CE (Cross Entropy), or a custom loss.

(1) Mean squared error MSE – loss_mse=tf.reduce_mean(tf.square(y_-y))
$MSE(y\_,y)=\frac{1}{n}\sum_{i=1}^n(y\_-y)^2$

# Predict the daily yogurt sales y
# Assume x1 and x2 are the factors affecting sales. Build a synthetic dataset: randomly generate x1 and x2, compute x1+x2, and add noise in [-0.05, 0.05) as the observed ground truth y_

import tensorflow as tf 
import numpy as np

SEED=23455

rdm=np.random.RandomState(seed=SEED) # random numbers in [0,1)
x=rdm.rand(32,2) # 32x2 matrix: 32 samples of input features
y_=[[x1+x2+(rdm.rand()/10.0-0.05)] for (x1,x2) in x] # ground-truth values with noise added
x=tf.cast(x,dtype=tf.float32)

w1=tf.Variable(tf.random.normal([2,1],stddev=1,seed=1))

epoch=15000
lr=0.002

for epoch in range(epoch):
    with tf.GradientTape() as tape:
        y=tf.matmul(x,w1)
        loss_mse=tf.reduce_mean(tf.square(y_-y))
    
    grads=tape.gradient(loss_mse,w1)
    w1.assign_sub(lr*grads)

    if epoch%500==0:
        print("After %d training steps, w1 is " %(epoch))
        print(w1.numpy(),"\n")
print("final w1 is: ", w1.numpy())

After 0 training steps, w1 is 
[[ 0.8266879 ]
 [-0.68062395]] 

After 500 training steps, w1 is 
[[1.2981918 ]
 [0.12153205]] 

After 1000 training steps, w1 is 
[[1.3930992]
 [0.4468745]] 

After 1500 training steps, w1 is 
[[1.3728648 ]
 [0.60195106]] 

After 2000 training steps, w1 is 
[[1.3233058]
 [0.691584 ]] 

After 2500 training steps, w1 is 
[[1.2715786]
 [0.7524795]] 

After 3000 training steps, w1 is 
[[1.2254242]
 [0.7981871]] 

After 3500 training steps, w1 is 
[[1.1863139 ]
 [0.83425367]] 

After 4000 training steps, w1 is 
[[1.153813  ]
 [0.86335814]] 

After 4500 training steps, w1 is 
[[1.1270124 ]
 [0.88707036]] 

After 5000 training steps, w1 is 
[[1.1049817]
 [0.9064663]] 

After 5500 training steps, w1 is 
[[1.0868948 ]
 [0.92235756]] 

After 6000 training steps, w1 is 
[[1.0720533]
 [0.9353866]] 

After 6500 training steps, w1 is 
[[1.0598779 ]
 [0.94607174]] 

After 7000 training steps, w1 is 
[[1.0498903]
 [0.9548356]] 

After 7500 training steps, w1 is 
[[1.0416976 ]
 [0.96202403]] 

After 8000 training steps, w1 is 
[[1.0349776]
 [0.9679204]] 

After 8500 training steps, w1 is 
[[1.0294647]
 [0.9727569]] 

After 9000 training steps, w1 is 
[[1.0249432 ]
 [0.97672427]] 

After 9500 training steps, w1 is 
[[1.0212348]
 [0.9799782]] 

After 10000 training steps, w1 is 
[[1.0181928 ]
 [0.98264736]] 

After 10500 training steps, w1 is 
[[1.0156975 ]
 [0.98483694]] 

After 11000 training steps, w1 is 
[[1.0136509]
 [0.9866328]] 

After 11500 training steps, w1 is 
[[1.0119709]
 [0.9881056]] 

After 12000 training steps, w1 is 
[[1.0105952 ]
 [0.98931384]] 

After 12500 training steps, w1 is 
[[1.009466  ]
 [0.99030447]] 

After 13000 training steps, w1 is 
[[1.0085402]
 [0.9911175]] 

After 13500 training steps, w1 is 
[[1.0077778 ]
 [0.99178374]] 

After 14000 training steps, w1 is 
[[1.007155 ]
 [0.9923309]] 

After 14500 training steps, w1 is 
[[1.0066465]
 [0.9927791]] 

final w1 is:  [[1.0062273]
 [0.9931454]]

(2) Custom loss function

In the example above, with the yogurt's cost and profit fixed, predicting too much loses the cost of the unsold yogurt, while predicting too little loses the profit of the missed sales.

When cost and profit are not equal – say the yogurt costs 1 yuan and sells for 100 yuan, so the profit is 99 yuan – we would rather the model err on the side of predicting too much whenever it cannot be exactly right.

We therefore define a custom loss function:
$$f(y\_,y)=\begin{cases}(y-y\_)\cdot COST, & y \ge y\_ \quad \text{(over-prediction: lose the cost)}\\(y\_-y)\cdot PROFIT, & y < y\_ \quad \text{(under-prediction: lose the profit)}\end{cases}$$

import tensorflow as tf 
import numpy as np

SEED=23455

rdm=np.random.RandomState(seed=SEED) # random numbers in [0,1)
x=rdm.rand(32,2) # 32x2 matrix: 32 samples of input features
y_=[[x1+x2+(rdm.rand()/10.0-0.05)] for (x1,x2) in x] # ground-truth values with noise added
x=tf.cast(x,dtype=tf.float32)

w1=tf.Variable(tf.random.normal([2,1],stddev=1,seed=1))

epoch=15000
lr=0.002
cost=1
profit=99

for epoch in range(epoch):
    with tf.GradientTape() as tape:
        y=tf.matmul(x,w1)
        loss=tf.reduce_sum(tf.where(tf.greater(y,y_),(y-y_)*cost,(y_-y)*profit)) # only the loss function is changed
    
    grads=tape.gradient(loss,w1)
    w1.assign_sub(lr*grads)

    if epoch%500==0:
        print("After %d training steps, w1 is " %(epoch))
        print(w1.numpy(),"\n")
print("final w1 is: ", w1.numpy())
After 0 training steps, w1 is 
[[1.8674549]
 [3.9414268]] 

After 500 training steps, w1 is 
[[1.156094 ]
 [1.0404317]] 

After 1000 training steps, w1 is 
[[1.1464739]
 [1.0717155]] 

After 1500 training steps, w1 is 
[[1.136854 ]
 [1.1029994]] 

After 2000 training steps, w1 is 
[[1.1272339]
 [1.1342831]] 

After 2500 training steps, w1 is 
[[1.1762469]
 [1.1768597]] 

After 3000 training steps, w1 is 
[[1.1458087]
 [1.0316733]] 

After 3500 training steps, w1 is 
[[1.1361889]
 [1.0629573]] 

After 4000 training steps, w1 is 
[[1.1265689]
 [1.0942411]] 

After 4500 training steps, w1 is 
[[1.1755823]
 [1.1368182]] 

After 5000 training steps, w1 is 
[[1.1659625]
 [1.1681021]] 

After 5500 training steps, w1 is 
[[1.135524 ]
 [1.0229155]] 

After 6000 training steps, w1 is 
[[1.125904 ]
 [1.0541992]] 

After 6500 training steps, w1 is 
[[1.1162838]
 [1.0854828]] 

After 7000 training steps, w1 is 
[[1.1066637]
 [1.1167666]] 

After 7500 training steps, w1 is 
[[1.1556768]
 [1.1593434]] 

After 8000 training steps, w1 is 
[[1.1252388]
 [1.014157 ]] 

After 8500 training steps, w1 is 
[[1.1156188]
 [1.0454409]] 

After 9000 training steps, w1 is 
[[1.1059989]
 [1.0767248]] 

After 9500 training steps, w1 is 
[[1.1550124]
 [1.1193019]] 

After 10000 training steps, w1 is 
[[1.1453927]
 [1.150586 ]] 

After 10500 training steps, w1 is 
[[1.1944054]
 [1.1931624]] 

After 11000 training steps, w1 is 
[[1.1639671]
 [1.0479759]] 

After 11500 training steps, w1 is 
[[1.1543471]
 [1.0792596]] 

After 12000 training steps, w1 is 
[[1.1447271]
 [1.1105435]] 

After 12500 training steps, w1 is 
[[1.135107 ]
 [1.1418272]] 

After 13000 training steps, w1 is 
[[1.1841207]
 [1.1844045]] 

After 13500 training steps, w1 is 
[[1.1536822]
 [1.0392178]] 

After 14000 training steps, w1 is 
[[1.1440623]
 [1.0705017]] 

After 14500 training steps, w1 is 
[[1.1344426]
 [1.1017858]] 

final w1 is:  [[1.1359003]
 [1.155832 ]]

(3) Cross entropy CE (Cross Entropy) measures the distance between two probability distributions
$H(y\_,y)=-\sum y\_ \cdot \ln y$
The smaller the value, the closer the two distributions are – tf.losses.categorical_crossentropy(y_,y)

Combining softmax with cross entropy: the output first passes through softmax, then the cross-entropy loss between y and y_ is computed

– tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
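A minimal sketch of both calls (the labels and predictions are made-up values; note that softmax_cross_entropy_with_logits expects raw logits, not probabilities):

import tensorflow as tf

# cross entropy between a one-hot label and a predicted distribution
y_ = tf.constant([1., 0., 0.])
y = tf.constant([0.6, 0.3, 0.1])
print(tf.losses.categorical_crossentropy(y_, y))  # -ln(0.6) ≈ 0.51

# softmax + cross entropy in one op, starting from raw logits
logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1., 0., 0.]])
print(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))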

2.5 Mitigating Overfitting

Underfitting: the model has not learned enough and cannot fit the dataset effectively – [add input features, add parameters, reduce the regularization coefficient]

Overfitting: the model fits the current data too well but lacks generalization (it predicts poorly on unseen data) – [clean the data to reduce noise, enlarge the training set, apply regularization, increase the regularization coefficient]

✅ Regularization to mitigate overfitting

Regularization: add a term for model complexity to the loss function by putting a weighted penalty on the parameter matrix W, which weakens the effect of noise in the training data (b is generally not regularized)
$loss = loss(y,y\_) + regularizer \cdot loss(w)$

$loss_{L1}(w)=\sum_i |w_i| \qquad loss_{L2}(w)=\sum_i w_i^2$

L1 regularization drives many parameters to exactly 0; it reduces complexity by sparsifying the parameters (fewer non-zero parameters).

L2 regularization pushes parameters close to 0 without making them exactly 0; by shrinking the parameter magnitudes it mitigates overfitting caused by noise in the data.
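A minimal sketch of the two penalty terms (w is a made-up parameter matrix; tf.nn.l2_loss is the helper used in the regularized example further below):

import tensorflow as tf

w = tf.constant([[1.0, -2.0], [0.5, 3.0]])
l1_penalty = tf.reduce_sum(tf.abs(w))      # sum_i |w_i|
l2_penalty = tf.reduce_sum(tf.square(w))   # sum_i w_i^2; tf.nn.l2_loss(w) returns half of this
# the total loss would then be loss(y, y_) + REGULARIZER * penalty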

# Case without regularization

# Import the required modules
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd

# Read the data/labels and build x_train, y_train
df = pd.read_csv('/Users/viva/Downloads/class2/dot.csv')
x_data = np.array(df[['x1', 'x2']])
y_data = np.array(df['y_c'])

x_train = np.vstack(x_data).reshape(-1,2) # vstack stacks the data row by row
y_train = np.vstack(y_data).reshape(-1,1) # in reshape, -1 means "infer this dimension"; 2 fixes two columns and the row count is computed automatically

Y_c = [['red' if y else 'blue'] for y in y_train]

# Cast x to float32, otherwise the matrix multiplication below fails because of a dtype mismatch
x_train = tf.cast(x_train, tf.float32)
y_train = tf.cast(y_train, tf.float32)

# from_tensor_slices slices the first dimension of the tensors and builds a dataset pairing each input feature with its label
train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)

# Generate the network parameters: 2 input neurons, one hidden layer of 11 neurons, 1 output neuron
# tf.Variable() marks the parameters as trainable
w1 = tf.Variable(tf.random.normal([2, 11]), dtype=tf.float32)
b1 = tf.Variable(tf.constant(0.01, shape=[11])) # the 11 here matches the number of columns of w1, keeping the dimensions consistent

w2 = tf.Variable(tf.random.normal([11, 1]), dtype=tf.float32) # the second layer's input size 11 must match the first layer's output size
b2 = tf.Variable(tf.constant(0.01, shape=[1]))

lr = 0.01  # learning rate
epoch = 400  # number of epochs

# Training
for epoch in range(epoch):
    for step, (x_train, y_train) in enumerate(train_db):
        with tf.GradientTape() as tape:  # record gradient information

            h1 = tf.matmul(x_train, w1) + b1  # multiply-add operations of the network
            h1 = tf.nn.relu(h1) # with activation function
            y = tf.matmul(h1, w2) + b2

            # mean squared error loss: mse = mean(sum(y-out)^2)
            loss = tf.reduce_mean(tf.square(y_train - y))

        # compute the gradients of loss with respect to each parameter
        variables = [w1, b1, w2, b2]
        grads = tape.gradient(loss, variables)

        # apply the gradient update
        # w1 = w1 - lr * w1_grad; tape.gradient returns the gradients in the same order as [w1, b1, w2, b2], indices 0,1,2,3
        w1.assign_sub(lr * grads[0])
        b1.assign_sub(lr * grads[1])
        w2.assign_sub(lr * grads[2])
        b2.assign_sub(lr * grads[3])

    # print loss every 20 epochs
    if epoch % 20 == 0:
        print('epoch:', epoch, 'loss:', float(loss))

# Prediction
print("*******predict*******")
# xx and yy both range from -3 to 3 with step 0.1, generating the grid points
xx, yy = np.mgrid[-3:3:.1, -3:3:.1]
# flatten xx and yy and pair them up into a 2-D tensor of grid coordinates
grid = np.c_[xx.ravel(), yy.ravel()]
grid = tf.cast(grid, tf.float32)
# feed the grid points into the network; probs collects the outputs
probs = []
for x_test in grid:
    # predict with the trained parameters
    h1 = tf.matmul([x_test], w1) + b1
    h1 = tf.nn.relu(h1)
    y = tf.matmul(h1, w2) + b2  # y is the prediction
    probs.append(y)

# column 0 is x1, column 1 is x2
x1 = x_data[:, 0]
x2 = x_data[:, 1]
# reshape probs to the same shape as xx
probs = np.array(probs).reshape(xx.shape)
plt.scatter(x1, x2, color=np.squeeze(Y_c)) # squeeze removes dimensions of size 1, turning [['red'],['blue']] into ['red','blue']
# pass the coordinates xx, yy and the values probs to contour, which colours all points where probs equals 0.5; plt.show() then displays the boundary between the red and blue points
plt.contour(xx, yy, probs, levels=[.5])
plt.show()

# Read the red/blue points and draw the decision boundary, without regularization
# For any data you are unsure about, print it out and inspect it
epoch: 0 loss: 0.6055795550346375
epoch: 20 loss: 0.07649826258420944
epoch: 40 loss: 0.06883294880390167
epoch: 60 loss: 0.06506730616092682
epoch: 80 loss: 0.060232751071453094
epoch: 100 loss: 0.056612174957990646
epoch: 120 loss: 0.05371078848838806
epoch: 140 loss: 0.052048470824956894
epoch: 160 loss: 0.05126640945672989
epoch: 180 loss: 0.05126921460032463
epoch: 200 loss: 0.05139847844839096
epoch: 220 loss: 0.04914329573512077
epoch: 240 loss: 0.046171944588422775
epoch: 260 loss: 0.04357674717903137
epoch: 280 loss: 0.0416145883500576
epoch: 300 loss: 0.03968871012330055
epoch: 320 loss: 0.037830088287591934
epoch: 340 loss: 0.036351241171360016
epoch: 360 loss: 0.03420327603816986
epoch: 380 loss: 0.03207312151789665
*******predict*******

[Figure: scatter of the red/blue points with the predicted decision boundary (probs = 0.5 contour), without regularization]

# Case with L2 regularization

# Import the required modules
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd

# Read the data/labels and build x_train, y_train
df = pd.read_csv('/Users/viva/Downloads/class2/dot.csv')
x_data = np.array(df[['x1', 'x2']])
y_data = np.array(df['y_c'])

x_train = x_data
y_train = y_data.reshape(-1, 1)

Y_c = [['red' if y else 'blue'] for y in y_train]

# Cast x to float32, otherwise the matrix multiplication below fails because of a dtype mismatch
x_train = tf.cast(x_train, tf.float32)
y_train = tf.cast(y_train, tf.float32)

# from_tensor_slices slices the first dimension of the tensors and builds a dataset pairing each input feature with its label
train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)

# Generate the network parameters: 2 input neurons, one hidden layer of 11 neurons, 1 output neuron
# tf.Variable() marks the parameters as trainable
w1 = tf.Variable(tf.random.normal([2, 11]), dtype=tf.float32)
b1 = tf.Variable(tf.constant(0.01, shape=[11]))

w2 = tf.Variable(tf.random.normal([11, 1]), dtype=tf.float32)
b2 = tf.Variable(tf.constant(0.01, shape=[1]))

lr = 0.01  # learning rate
epoch = 400  # number of epochs

# Training
for epoch in range(epoch):
    for step, (x_train, y_train) in enumerate(train_db):
        with tf.GradientTape() as tape:  # record gradient information

            h1 = tf.matmul(x_train, w1) + b1  # multiply-add operations of the network
            h1 = tf.nn.relu(h1)
            y = tf.matmul(h1, w2) + b2

            # mean squared error loss: mse = mean(sum(y-out)^2)
            loss_mse = tf.reduce_mean(tf.square(y_train - y))
            # add L2 regularization
            loss_regularization = []
            # tf.nn.l2_loss(w)=sum(w ** 2) / 2
            loss_regularization.append(tf.nn.l2_loss(w1))
            loss_regularization.append(tf.nn.l2_loss(w2))
            # sum the penalty terms
            # e.g.: x=tf.constant(([1,1,1],[1,1,1]))
            #   tf.reduce_sum(x)
            # >>>6
            # loss_regularization = tf.reduce_sum(tf.stack(loss_regularization))
            loss_regularization = tf.reduce_sum(loss_regularization)
            loss = loss_mse + 0.03 * loss_regularization # REGULARIZER = 0.03

        # compute the gradients of loss with respect to each parameter
        variables = [w1, b1, w2, b2]
        grads = tape.gradient(loss, variables)

        # apply the gradient update
        # w1 = w1 - lr * w1_grad
        w1.assign_sub(lr * grads[0])
        b1.assign_sub(lr * grads[1])
        w2.assign_sub(lr * grads[2])
        b2.assign_sub(lr * grads[3])

    # print loss every 20 epochs
    if epoch % 20 == 0:
        print('epoch:', epoch, 'loss:', float(loss))

# Prediction
print("*******predict*******")
# xx and yy both range from -3 to 3 with step 0.1, generating the grid points
xx, yy = np.mgrid[-3:3:.1, -3:3:.1]
# flatten xx and yy and pair them up into a 2-D tensor of grid coordinates
grid = np.c_[xx.ravel(), yy.ravel()]
grid = tf.cast(grid, tf.float32)
# feed the grid points into the network; probs collects the outputs
probs = []
for x_predict in grid:
    # predict with the trained parameters
    h1 = tf.matmul([x_predict], w1) + b1
    h1 = tf.nn.relu(h1)
    y = tf.matmul(h1, w2) + b2  # y is the prediction
    probs.append(y)

# column 0 is x1, column 1 is x2
x1 = x_data[:, 0]
x2 = x_data[:, 1]
# reshape probs to the same shape as xx
probs = np.array(probs).reshape(xx.shape)
plt.scatter(x1, x2, color=np.squeeze(Y_c))
# pass the coordinates xx, yy and the values probs to contour, which colours all points where probs equals 0.5; plt.show() then displays the boundary between the red and blue points
plt.contour(xx, yy, probs, levels=[.5])
plt.show()

# Read the red/blue points and draw the decision boundary, with regularization
# For any data you are unsure about, print it out and inspect it
epoch: 0 loss: 0.6997354030609131
epoch: 20 loss: 0.39997491240501404
epoch: 40 loss: 0.34798115491867065
epoch: 60 loss: 0.3107355535030365
epoch: 80 loss: 0.2789038121700287
epoch: 100 loss: 0.24940520524978638
epoch: 120 loss: 0.22390803694725037
epoch: 140 loss: 0.20139609277248383
epoch: 160 loss: 0.18274584412574768
epoch: 180 loss: 0.16758093237876892
epoch: 200 loss: 0.15457896888256073
epoch: 220 loss: 0.14414116740226746
epoch: 240 loss: 0.1351311355829239
epoch: 260 loss: 0.12741810083389282
epoch: 280 loss: 0.12007804214954376
epoch: 300 loss: 0.11288371682167053
epoch: 320 loss: 0.10653729736804962
epoch: 340 loss: 0.10097507387399673
epoch: 360 loss: 0.0966401919722557
epoch: 380 loss: 0.09320375323295593
*******predict*******

[Figure: scatter of the red/blue points with the predicted decision boundary (probs = 0.5 contour), with L2 regularization]

2.6 Neural Network Parameter Optimizers

Notation: w are the parameters to optimize, loss is the loss function, lr the learning rate; each iteration processes one batch, and t is the number of batch iterations performed so far.

Optimization steps:

1. Compute the gradient of the loss with respect to the current parameters at step t: $g_t=\nabla loss$

2. Compute the first-order momentum $m_t$ (a function of the gradient) and the second-order momentum $V_t$ (a function of the squared gradient) at step t

3. Compute the descent step at step t: $\eta_t = lr \cdot m_t/\sqrt{V_t}$

4. Compute the parameters at step t+1: $w_{t+1}=w_t-\eta_t$

Optimizer types (a minimal manual-update sketch for several of them follows this list):

· SGD (without momentum), the commonly used gradient descent

$m_t=g_t,\ V_t=1$
$w_{t+1}=w_t-lr\cdot \frac{\partial loss}{\partial w_t}$

· SGDM (with momentum): SGD plus first-order momentum

$m_t=\beta \cdot m_{t-1}+(1-\beta)\cdot g_t,\ V_t=1$  ($\beta$ is typically 0.9)
$w_{t+1}=w_t-lr\cdot m_t = w_t-lr\cdot(\beta \cdot m_{t-1}+(1-\beta)\cdot g_t)$

· Adagrad: SGD plus second-order momentum (an adaptive learning rate for each parameter of the model)

$m_t=g_t,\ V_t=\sum_{\tau=1}^t g_{\tau}^2$
$w_{t+1}=w_t-lr\cdot g_t/\sqrt{\sum_{\tau=1}^t g_{\tau}^2}$

· RMSProp: SGD plus second-order momentum, computed as an exponential moving average over past squared gradients

$m_t=g_t,\ V_t=\beta\cdot V_{t-1}+(1-\beta)\cdot g_t^2$
$w_{t+1}=w_t-lr\cdot g_t/\sqrt{\beta\cdot V_{t-1}+(1-\beta)\cdot g_t^2}$

· Adam: combines SGDM's first-order momentum with RMSProp's second-order momentum

$m_t=\beta_1 \cdot m_{t-1}+(1-\beta_1)\cdot g_t,\ V_t=\beta_2 \cdot V_{t-1}+(1-\beta_2)\cdot g_t^2$
$w_{t+1}=w_t-lr\cdot \frac{m_t}{1-\beta_1^t}\Big/\sqrt{\frac{V_t}{1-\beta_2^t}}$
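A minimal sketch of one manual update step for SGDM, Adagrad and RMSProp, following the formulas above. The parameter, loss and hyperparameters are made up for illustration, and all three updates reuse the same gradient; in a real training loop the state variables m, v_sum and v would persist across iterations. The Adam version is implemented in full in the example that follows.

import tensorflow as tf

lr, beta = 0.1, 0.9
w = tf.Variable(5.0)
with tf.GradientTape() as tape:
    loss = tf.square(w + 1)
grad = tape.gradient(loss, w)

# SGDM: first-order momentum, V_t = 1
m = 0.0
m = beta * m + (1 - beta) * grad
w.assign_sub(lr * m)

# Adagrad: V_t accumulates all squared gradients seen so far
v_sum = 0.0
v_sum += tf.square(grad)
w.assign_sub(lr * grad / tf.sqrt(v_sum))

# RMSProp: V_t is an exponential moving average of the squared gradients
v = 0.0
v = beta * v + (1 - beta) * tf.square(grad)
w.assign_sub(lr * grad / tf.sqrt(v))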

# Use the iris dataset to implement forward and back propagation and visualize the loss curve -- Adam optimizer

# Import the required modules
import tensorflow as tf
from sklearn import datasets
import matplotlib.pyplot as plt
import numpy as np
import time  ##1##

# Load the data: input features and labels
x_data = datasets.load_iris().data
y_data = datasets.load_iris().target

# Shuffle the data (the raw data are ordered; not shuffling them hurts accuracy)
# seed: an integer random seed; once set, the same random numbers are generated on every run (used here so everyone gets the same result)
np.random.seed(116)  # use the same seed so features and labels stay paired
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)
tf.random.set_seed(116)

# Split the shuffled data: the first 120 rows are the training set, the last 30 rows the test set
x_train = x_data[:-30]
y_train = y_data[:-30]
x_test = x_data[-30:]
y_test = y_data[-30:]

# Cast x to float32, otherwise the matrix multiplication below fails because of a dtype mismatch
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)

# from_tensor_slices pairs each input feature with its label (and the dataset is split into batches of `batch` samples)
train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

# Generate the network parameters: 4 input features, so 4 input nodes; 3 classes, so 3 output neurons
# tf.Variable() marks the parameters as trainable
# seed makes the generated random numbers reproducible (used here for teaching; omit it in practice)
w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))
b1 = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))

lr = 0.1  # learning rate of 0.1
train_loss_results = []  # record the loss of every epoch, for plotting the loss curve later
test_acc = []  # record the accuracy of every epoch, for plotting the accuracy curve later
epoch = 500  # run 500 epochs
loss_all = 0  # each epoch has 4 steps; loss_all accumulates the 4 per-step losses

##########################################################################
# initialize the Adam state
m_w, m_b = 0, 0
v_w, v_b = 0, 0
beta1, beta2 = 0.9, 0.999
delta_w, delta_b = 0, 0
global_step = 0
##########################################################################

# Training
now_time = time.time()  ##2##
for epoch in range(epoch):  # dataset-level loop: one pass over the dataset per epoch
    for step, (x_train, y_train) in enumerate(train_db):  # batch-level loop: one batch per step
 ##########################################################################       
        global_step += 1
 ##########################################################################       
        with tf.GradientTape() as tape:  # the with block records gradient information
            y = tf.matmul(x_train, w1) + b1  # multiply-add operations of the network
            y = tf.nn.softmax(y)  # make the output y a probability distribution (comparable with the one-hot labels, so loss can be computed)
            y_ = tf.one_hot(y_train, depth=3)  # convert the labels to one-hot encoding, for computing loss and accuracy
            loss = tf.reduce_mean(tf.square(y_ - y))  # mean squared error loss: mse = mean(sum(y-out)^2)
            loss_all += loss.numpy()  # accumulate the per-step losses, to compute a more accurate average loss later
        # compute the gradients of loss with respect to each parameter
        grads = tape.gradient(loss, [w1, b1])

##########################################################################
 # adam
        m_w = beta1 * m_w + (1 - beta1) * grads[0]
        m_b = beta1 * m_b + (1 - beta1) * grads[1]
        v_w = beta2 * v_w + (1 - beta2) * tf.square(grads[0])
        v_b = beta2 * v_b + (1 - beta2) * tf.square(grads[1])

        m_w_correction = m_w / (1 - tf.pow(beta1, int(global_step))) # bias-corrected first- and second-order moments
        m_b_correction = m_b / (1 - tf.pow(beta1, int(global_step)))
        v_w_correction = v_w / (1 - tf.pow(beta2, int(global_step)))
        v_b_correction = v_b / (1 - tf.pow(beta2, int(global_step)))

        w1.assign_sub(lr * m_w_correction / tf.sqrt(v_w_correction))
        b1.assign_sub(lr * m_b_correction / tf.sqrt(v_b_correction))
##########################################################################

    # print the loss once per epoch
    print("Epoch {}, loss: {}".format(epoch, loss_all / 4))
    train_loss_results.append(loss_all / 4)  # record the average of the 4 per-step losses
    loss_all = 0  # reset loss_all, ready for the next epoch

    # Testing
    # total_correct counts the correctly predicted samples, total_number the total number of test samples; initialize both to 0
    total_correct, total_number = 0, 0
    for x_test, y_test in test_db:
        # predict with the updated parameters
        y = tf.matmul(x_test, w1) + b1
        y = tf.nn.softmax(y)
        pred = tf.argmax(y, axis=1)  # index of the largest value in y, i.e. the predicted class
        # cast pred to the dtype of y_test
        pred = tf.cast(pred, dtype=y_test.dtype)
        # correct is 1 for a correct prediction and 0 otherwise; cast the bool result to int
        correct = tf.cast(tf.equal(pred, y_test), dtype=tf.int32)
        # sum the correct predictions in this batch
        correct = tf.reduce_sum(correct)
        # accumulate the correct predictions over all batches
        total_correct += int(correct)
        # total_number is the total number of test samples, i.e. the number of rows of x_test (shape[0])
        total_number += x_test.shape[0]
    # overall accuracy = total_correct / total_number
    acc = total_correct / total_number
    test_acc.append(acc)
    print("Test_acc:", acc)
    print("--------------------------")
total_time = time.time() - now_time  ##3##
print("total_time", total_time)  ##4##

# Plot the loss curve
plt.title('Loss Function Curve')  # figure title
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Loss')  # y-axis label
plt.plot(train_loss_results, label="$Loss$")  # plot the train_loss_results values point by point, labelled Loss
plt.legend()  # show the legend
plt.show()  # show the figure

# Plot the accuracy curve
plt.title('Acc Curve')  # figure title
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Acc')  # y-axis label
plt.plot(test_acc, label="$Accuracy$")  # plot the test_acc values point by point, labelled Accuracy
plt.legend()
plt.show()

# Record the loss curve, the accuracy curve and total_time in class2\优化器对比.docx to compare how the optimizers converge

Epoch 0, loss: 0.219841156154871
Test_acc: 0.5333333333333333
--------------------------
Epoch 1, loss: 0.14480512216687202
Test_acc: 0.5333333333333333
--------------------------
Epoch 2, loss: 0.10274341143667698
Test_acc: 0.6666666666666666
--------------------------
Epoch 3, loss: 0.08922165259718895
Test_acc: 0.5333333333333333
--------------------------
Epoch 4, loss: 0.0860080998390913
Test_acc: 0.9
--------------------------
Epoch 5, loss: 0.06994969490915537
Test_acc: 0.8
--------------------------
Epoch 6, loss: 0.06724503170698881
Test_acc: 0.8
--------------------------
Epoch 7, loss: 0.061045361682772636
Test_acc: 1.0
--------------------------
Epoch 8, loss: 0.05573830287903547
Test_acc: 0.9333333333333333
--------------------------
Epoch 9, loss: 0.054052991792559624
Test_acc: 1.0
--------------------------
Epoch 10, loss: 0.0490921288728714
Test_acc: 1.0
--------------------------
Epoch 11, loss: 0.04825884848833084
Test_acc: 1.0
--------------------------
Epoch 12, loss: 0.04458624869585037
Test_acc: 1.0
--------------------------
Epoch 13, loss: 0.043710471130907536
Test_acc: 1.0
--------------------------
Epoch 14, loss: 0.04151816386729479
Test_acc: 1.0
--------------------------
Epoch 15, loss: 0.04042448848485947
Test_acc: 1.0
--------------------------
Epoch 16, loss: 0.03921938082203269
Test_acc: 1.0
--------------------------
Epoch 17, loss: 0.037702415604144335
Test_acc: 1.0
--------------------------
Epoch 18, loss: 0.03746768459677696
Test_acc: 1.0
--------------------------
Epoch 19, loss: 0.035456204786896706
Test_acc: 1.0
--------------------------
Epoch 20, loss: 0.03618265129625797
Test_acc: 1.0
--------------------------
Epoch 21, loss: 0.03353231865912676
Test_acc: 1.0
--------------------------
Epoch 22, loss: 0.03510467382147908
Test_acc: 1.0
--------------------------
Epoch 23, loss: 0.031799004413187504
Test_acc: 1.0
--------------------------
Epoch 24, loss: 0.034055131021887064
Test_acc: 1.0
--------------------------
Epoch 25, loss: 0.030265061650425196
Test_acc: 1.0
--------------------------
Epoch 26, loss: 0.032884009182453156
Test_acc: 1.0
--------------------------
Epoch 27, loss: 0.02898171078413725
Test_acc: 1.0
--------------------------
Epoch 28, loss: 0.03153973026201129
Test_acc: 1.0
--------------------------
Epoch 29, loss: 0.028038074262440205
Test_acc: 1.0
--------------------------
Epoch 30, loss: 0.03012115554884076
Test_acc: 1.0
--------------------------
Epoch 31, loss: 0.027460468467324972
Test_acc: 1.0
--------------------------
Epoch 32, loss: 0.028777056373655796
Test_acc: 1.0
--------------------------
Epoch 33, loss: 0.027132945135235786
Test_acc: 1.0
--------------------------
Epoch 34, loss: 0.02761527057737112
Test_acc: 1.0
--------------------------
Epoch 35, loss: 0.026862222235649824
Test_acc: 1.0
--------------------------
Epoch 36, loss: 0.02668716199696064
Test_acc: 1.0
--------------------------
Epoch 37, loss: 0.026503012515604496
Test_acc: 1.0
--------------------------
Epoch 38, loss: 0.025996222160756588
Test_acc: 1.0
--------------------------
Epoch 39, loss: 0.026023756247013807
Test_acc: 1.0
--------------------------
Epoch 40, loss: 0.02549735689535737
Test_acc: 1.0
--------------------------
Epoch 41, loss: 0.025481850374490023
Test_acc: 1.0
--------------------------
Epoch 42, loss: 0.025107113178819418
Test_acc: 1.0
--------------------------
Epoch 43, loss: 0.02495762286707759
Test_acc: 1.0
--------------------------
Epoch 44, loss: 0.02474382519721985
Test_acc: 1.0
--------------------------
Epoch 45, loss: 0.024502399377524853
Test_acc: 1.0
--------------------------
Epoch 46, loss: 0.024368189740926027
Test_acc: 1.0
--------------------------
Epoch 47, loss: 0.024121213238686323
Test_acc: 1.0
--------------------------
Epoch 48, loss: 0.023987011751160026
Test_acc: 1.0
--------------------------
Epoch 49, loss: 0.023787275422364473
Test_acc: 1.0
--------------------------
Epoch 50, loss: 0.023625574307516217
Test_acc: 1.0
--------------------------
Epoch 51, loss: 0.023471781983971596
Test_acc: 1.0
--------------------------
Epoch 52, loss: 0.023299274034798145
Test_acc: 1.0
--------------------------
Epoch 53, loss: 0.023163875099271536
Test_acc: 1.0
--------------------------
Epoch 54, loss: 0.023004963994026184
Test_acc: 1.0
--------------------------
Epoch 55, loss: 0.022868600441142917
Test_acc: 1.0
--------------------------
Epoch 56, loss: 0.022730717435479164
Test_acc: 1.0
--------------------------
Epoch 57, loss: 0.022593244444578886
Test_acc: 1.0
--------------------------
Epoch 58, loss: 0.02246865793131292
Test_acc: 1.0
--------------------------
Epoch 59, loss: 0.022338322130963206
Test_acc: 1.0
--------------------------
Epoch 60, loss: 0.022218456026166677
Test_acc: 1.0
--------------------------
Epoch 61, loss: 0.02209916291758418
Test_acc: 1.0
--------------------------
Epoch 62, loss: 0.02198244072496891
Test_acc: 1.0
--------------------------
Epoch 63, loss: 0.02187166642397642
Test_acc: 1.0
--------------------------
Epoch 64, loss: 0.021760689094662666
Test_acc: 1.0
--------------------------
Epoch 65, loss: 0.021654917625710368
Test_acc: 1.0
--------------------------
Epoch 66, loss: 0.021550980396568775
Test_acc: 1.0
--------------------------
Epoch 67, loss: 0.021449335850775242
Test_acc: 1.0
--------------------------
Epoch 68, loss: 0.021351256873458624
Test_acc: 1.0
--------------------------
Epoch 69, loss: 0.02125451061874628
Test_acc: 1.0
--------------------------
Epoch 70, loss: 0.02116080466657877
Test_acc: 1.0
--------------------------
Epoch 71, loss: 0.0210691608954221
Test_acc: 1.0
--------------------------
Epoch 72, loss: 0.020979427732527256
Test_acc: 1.0
--------------------------
Epoch 73, loss: 0.020892175612971187
Test_acc: 1.0
--------------------------
Epoch 74, loss: 0.020806559594348073
Test_acc: 1.0
--------------------------
Epoch 75, loss: 0.02072303812019527
Test_acc: 1.0
--------------------------
Epoch 76, loss: 0.020641361363232136
Test_acc: 1.0
--------------------------
Epoch 77, loss: 0.02056135074235499
Test_acc: 1.0
--------------------------
Epoch 78, loss: 0.020483192754909396
Test_acc: 1.0
--------------------------
Epoch 79, loss: 0.020406611962243915
Test_acc: 1.0
--------------------------
Epoch 80, loss: 0.020331630716100335
Test_acc: 1.0
--------------------------
Epoch 81, loss: 0.020258230855688453
Test_acc: 1.0
--------------------------
Epoch 82, loss: 0.020186285953968763
Test_acc: 1.0
--------------------------
Epoch 83, loss: 0.020115784369409084
Test_acc: 1.0
--------------------------
Epoch 84, loss: 0.02004669327288866
Test_acc: 1.0
--------------------------
Epoch 85, loss: 0.019978916039690375
Test_acc: 1.0
--------------------------
Epoch 86, loss: 0.019912483636289835
Test_acc: 1.0
--------------------------
Epoch 87, loss: 0.019847277086228132
Test_acc: 1.0
--------------------------
Epoch 88, loss: 0.01978331431746483
Test_acc: 1.0
--------------------------
Epoch 89, loss: 0.019720530603080988
Test_acc: 1.0
--------------------------
Epoch 90, loss: 0.019658871227875352
Test_acc: 1.0
--------------------------
Epoch 91, loss: 0.01959836296737194
Test_acc: 1.0
--------------------------
Epoch 92, loss: 0.019538898719474673
Test_acc: 1.0
--------------------------
Epoch 93, loss: 0.01948050269857049
Test_acc: 1.0
--------------------------
Epoch 94, loss: 0.019423116696998477
Test_acc: 1.0
--------------------------
Epoch 95, loss: 0.019366707652807236
Test_acc: 1.0
--------------------------
Epoch 96, loss: 0.019311273004859686
Test_acc: 1.0
--------------------------
Epoch 97, loss: 0.019256748724728823
Test_acc: 1.0
--------------------------
Epoch 98, loss: 0.019203145755454898
Test_acc: 1.0
--------------------------
Epoch 99, loss: 0.01915041427128017
Test_acc: 1.0
--------------------------
Epoch 100, loss: 0.019098545191809535
Test_acc: 1.0
--------------------------
Epoch 101, loss: 0.019047508016228676
Test_acc: 1.0
--------------------------
Epoch 102, loss: 0.018997267819941044
Test_acc: 1.0
--------------------------
Epoch 103, loss: 0.018947835080325603
Test_acc: 1.0
--------------------------
Epoch 104, loss: 0.018899151124060154
Test_acc: 1.0
--------------------------
Epoch 105, loss: 0.01885123853571713
Test_acc: 1.0
--------------------------
Epoch 106, loss: 0.018804020481184125
Test_acc: 1.0
--------------------------
Epoch 107, loss: 0.018757542595267296
Test_acc: 1.0
--------------------------
Epoch 108, loss: 0.01871174667030573
Test_acc: 1.0
--------------------------
Epoch 109, loss: 0.018666642485186458
Test_acc: 1.0
--------------------------
Epoch 110, loss: 0.018622159957885742
Test_acc: 1.0
--------------------------
Epoch 111, loss: 0.01857834868133068
Test_acc: 1.0
--------------------------
Epoch 112, loss: 0.01853515743277967
Test_acc: 1.0
--------------------------
Epoch 113, loss: 0.018492567585781217
Test_acc: 1.0
--------------------------
Epoch 114, loss: 0.01845061290077865
Test_acc: 1.0
--------------------------
Epoch 115, loss: 0.018409190233796835
Test_acc: 1.0
--------------------------
Epoch 116, loss: 0.018368377350270748
Test_acc: 1.0
--------------------------
Epoch 117, loss: 0.018328116508200765
Test_acc: 1.0
--------------------------
Epoch 118, loss: 0.018288369989022613
Test_acc: 1.0
--------------------------
Epoch 119, loss: 0.018249196698889136
Test_acc: 1.0
--------------------------
Epoch 120, loss: 0.018210522131994367
Test_acc: 1.0
--------------------------
Epoch 121, loss: 0.018172357231378555
Test_acc: 1.0
--------------------------
Epoch 122, loss: 0.018134709680452943
Test_acc: 1.0
--------------------------
Epoch 123, loss: 0.01809751708060503
Test_acc: 1.0
--------------------------
Epoch 124, loss: 0.018060836708173156
Test_acc: 1.0
--------------------------
Epoch 125, loss: 0.018024600110948086
Test_acc: 1.0
--------------------------
Epoch 126, loss: 0.01798883592709899
Test_acc: 1.0
--------------------------
Epoch 127, loss: 0.01795350224711001
Test_acc: 1.0
--------------------------
Epoch 128, loss: 0.0179186190944165
Test_acc: 1.0
--------------------------
Epoch 129, loss: 0.017884166911244392
Test_acc: 1.0
--------------------------
Epoch 130, loss: 0.01785012916661799
Test_acc: 1.0
--------------------------
Epoch 131, loss: 0.017816518899053335
Test_acc: 1.0
--------------------------
Epoch 132, loss: 0.01778329210355878
Test_acc: 1.0
--------------------------
Epoch 133, loss: 0.017750480910763144
Test_acc: 1.0
--------------------------
Epoch 134, loss: 0.01771805272437632
Test_acc: 1.0
--------------------------
Epoch 135, loss: 0.017685997067019343
Test_acc: 1.0
--------------------------
Epoch 136, loss: 0.017654340248554945
Test_acc: 1.0
--------------------------
Epoch 137, loss: 0.01762302708812058
Test_acc: 1.0
--------------------------
Epoch 138, loss: 0.017592082964256406
Test_acc: 1.0
--------------------------
Epoch 139, loss: 0.017561492044478655
Test_acc: 1.0
--------------------------
Epoch 140, loss: 0.01753125316463411
Test_acc: 1.0
--------------------------
Epoch 141, loss: 0.017501337686553597
Test_acc: 1.0
--------------------------
Epoch 142, loss: 0.01747178891673684
Test_acc: 1.0
--------------------------
Epoch 143, loss: 0.017442531185224652
Test_acc: 1.0
--------------------------
Epoch 144, loss: 0.0174136261921376
Test_acc: 1.0
--------------------------
Epoch 145, loss: 0.017385033192113042
Test_acc: 1.0
--------------------------
Epoch 146, loss: 0.017356734722852707
Test_acc: 1.0
--------------------------
Epoch 147, loss: 0.01732876803725958
Test_acc: 1.0
--------------------------
Epoch 148, loss: 0.017301088199019432
Test_acc: 1.0
--------------------------
Epoch 149, loss: 0.01727368892170489
Test_acc: 1.0
--------------------------
Epoch 150, loss: 0.01724661234766245
Test_acc: 1.0
--------------------------
Epoch 151, loss: 0.017219802364706993
Test_acc: 1.0
--------------------------
Epoch 152, loss: 0.017193270614370704
Test_acc: 1.0
--------------------------
Epoch 153, loss: 0.017167032696306705
Test_acc: 1.0
--------------------------
Epoch 154, loss: 0.017141057876870036
Test_acc: 1.0
--------------------------
Epoch 155, loss: 0.01711534452624619
Test_acc: 1.0
--------------------------
Epoch 156, loss: 0.017089904751628637
Test_acc: 1.0
--------------------------
Epoch 157, loss: 0.01706471759825945
Test_acc: 1.0
--------------------------
Epoch 158, loss: 0.017039785627275705
Test_acc: 1.0
--------------------------
Epoch 159, loss: 0.01701511791907251
Test_acc: 1.0
--------------------------
Epoch 160, loss: 0.01699067302979529
Test_acc: 1.0
--------------------------
Epoch 161, loss: 0.0169664837885648
Test_acc: 1.0
--------------------------
Epoch 162, loss: 0.016942543676123023
Test_acc: 1.0
--------------------------
Epoch 163, loss: 0.016918820096179843
Test_acc: 1.0
--------------------------
Epoch 164, loss: 0.016895336331799626
Test_acc: 1.0
--------------------------
Epoch 165, loss: 0.01687209028750658
Test_acc: 1.0
--------------------------
Epoch 166, loss: 0.0168490509968251
Test_acc: 1.0
--------------------------
Epoch 167, loss: 0.01682625967077911
Test_acc: 1.0
--------------------------
Epoch 168, loss: 0.016803660430014133
Test_acc: 1.0
--------------------------
Epoch 169, loss: 0.016781298676505685
Test_acc: 1.0
--------------------------
Epoch 170, loss: 0.016759122721850872
Test_acc: 1.0
--------------------------
Epoch 171, loss: 0.016737180994823575
Test_acc: 1.0
--------------------------
Epoch 172, loss: 0.01671542995609343
Test_acc: 1.0
--------------------------
Epoch 173, loss: 0.01669387868605554
Test_acc: 1.0
--------------------------
Epoch 174, loss: 0.016672548837959766
Test_acc: 1.0
--------------------------
Epoch 175, loss: 0.016651394311338663
Test_acc: 1.0
--------------------------
Epoch 176, loss: 0.016630453057587147
Test_acc: 1.0
--------------------------
Epoch 177, loss: 0.016609684331342578
Test_acc: 1.0
--------------------------
Epoch 178, loss: 0.016589115606620908
Test_acc: 1.0
--------------------------
Epoch 179, loss: 0.01656873431056738
Test_acc: 1.0
--------------------------
Epoch 180, loss: 0.016548529034480453
Test_acc: 1.0
--------------------------
Epoch 181, loss: 0.016528512351214886
Test_acc: 1.0
--------------------------
Epoch 182, loss: 0.01650867867283523
Test_acc: 1.0
--------------------------
Epoch 183, loss: 0.01648900588043034
Test_acc: 1.0
--------------------------
Epoch 184, loss: 0.016469520051032305
Test_acc: 1.0
--------------------------
Epoch 185, loss: 0.0164502018596977
Test_acc: 1.0
--------------------------
Epoch 186, loss: 0.016431050142273307
Test_acc: 1.0
--------------------------
Epoch 187, loss: 0.016412083758041263
Test_acc: 1.0
--------------------------
Epoch 188, loss: 0.01639324496500194
Test_acc: 1.0
--------------------------
Epoch 189, loss: 0.016374609898775816
Test_acc: 1.0
--------------------------
Epoch 190, loss: 0.01635610591620207
Test_acc: 1.0
--------------------------
Epoch 191, loss: 0.01633776957169175
Test_acc: 1.0
--------------------------
Epoch 192, loss: 0.016319590155035257
Test_acc: 1.0
--------------------------
Epoch 193, loss: 0.016301566967740655
Test_acc: 1.0
--------------------------
Epoch 194, loss: 0.01628370420075953
Test_acc: 1.0
--------------------------
Epoch 195, loss: 0.016265979502350092
Test_acc: 1.0
--------------------------
Epoch 196, loss: 0.016248409636318684
Test_acc: 1.0
--------------------------
Epoch 197, loss: 0.01623099227435887
Test_acc: 1.0
--------------------------
Epoch 198, loss: 0.016213689697906375
Test_acc: 1.0
--------------------------
Epoch 199, loss: 0.016196565004065633
Test_acc: 1.0
--------------------------
Epoch 200, loss: 0.016179573256522417
Test_acc: 1.0
--------------------------
Epoch 201, loss: 0.016162706771865487
Test_acc: 1.0
--------------------------
Epoch 202, loss: 0.01614598883315921
Test_acc: 1.0
--------------------------
Epoch 203, loss: 0.01612941548228264
Test_acc: 1.0
--------------------------
Epoch 204, loss: 0.016112952027469873
Test_acc: 1.0
--------------------------
Epoch 205, loss: 0.01609664922580123
Test_acc: 1.0
--------------------------
Epoch 206, loss: 0.016080467961728573
Test_acc: 1.0
--------------------------
Epoch 207, loss: 0.016064405906945467
Test_acc: 1.0
--------------------------
Epoch 208, loss: 0.016048494493588805
Test_acc: 1.0
--------------------------
Epoch 209, loss: 0.01603268599137664
Test_acc: 1.0
--------------------------
Epoch 210, loss: 0.016017016489058733
Test_acc: 1.0
--------------------------
Epoch 211, loss: 0.016001465497538447
Test_acc: 1.0
--------------------------
Epoch 212, loss: 0.01598604186438024
Test_acc: 1.0
--------------------------
Epoch 213, loss: 0.015970721375197172
Test_acc: 1.0
--------------------------
Epoch 214, loss: 0.015955543145537376
Test_acc: 1.0
--------------------------
Epoch 215, loss: 0.01594048086553812
Test_acc: 1.0
--------------------------
Epoch 216, loss: 0.015925514977425337
Test_acc: 1.0
--------------------------
Epoch 217, loss: 0.01591069041751325
Test_acc: 1.0
--------------------------
Epoch 218, loss: 0.015895966440439224
Test_acc: 1.0
--------------------------
Epoch 219, loss: 0.01588133885525167
Test_acc: 1.0
--------------------------
Epoch 220, loss: 0.015866854693740606
Test_acc: 1.0
--------------------------
Epoch 221, loss: 0.01585245318710804
Test_acc: 1.0
--------------------------
Epoch 222, loss: 0.0158381809014827
Test_acc: 1.0
--------------------------
Epoch 223, loss: 0.01582401292398572
Test_acc: 1.0
--------------------------
Epoch 224, loss: 0.015809949254617095
Test_acc: 1.0
--------------------------
Epoch 225, loss: 0.015795978251844645
Test_acc: 1.0
--------------------------
Epoch 226, loss: 0.01578212482854724
Test_acc: 1.0
--------------------------
Epoch 227, loss: 0.015768370823934674
Test_acc: 1.0
--------------------------
Epoch 228, loss: 0.015754723688587546
Test_acc: 1.0
--------------------------
Epoch 229, loss: 0.015741163166239858
Test_acc: 1.0
--------------------------
Epoch 230, loss: 0.0157277206890285
Test_acc: 1.0
--------------------------
Epoch 231, loss: 0.01571435621008277
Test_acc: 1.0
--------------------------
Epoch 232, loss: 0.015701111406087875
Test_acc: 1.0
--------------------------
Epoch 233, loss: 0.01568794925697148
Test_acc: 1.0
--------------------------
Epoch 234, loss: 0.015674879774451256
Test_acc: 1.0
--------------------------
Epoch 235, loss: 0.015661920653656125
Test_acc: 1.0
--------------------------
Epoch 236, loss: 0.015649036969989538
Test_acc: 1.0
--------------------------
Epoch 237, loss: 0.015636253403499722
Test_acc: 1.0
--------------------------
Epoch 238, loss: 0.01562356436625123
Test_acc: 1.0
--------------------------
Epoch 239, loss: 0.015610968694090843
Test_acc: 1.0
--------------------------
Epoch 240, loss: 0.015598442871123552
Test_acc: 1.0
--------------------------
Epoch 241, loss: 0.015586030436679721
Test_acc: 1.0
--------------------------
Epoch 242, loss: 0.015573688666336238
Test_acc: 1.0
--------------------------
Epoch 243, loss: 0.015561436070129275
Test_acc: 1.0
--------------------------
Epoch 244, loss: 0.015549276489764452
Test_acc: 1.0
--------------------------
Epoch 245, loss: 0.015537198167294264
Test_acc: 1.0
--------------------------
Epoch 246, loss: 0.015525204362347722
Test_acc: 1.0
--------------------------
Epoch 247, loss: 0.015513290418311954
Test_acc: 1.0
--------------------------
Epoch 248, loss: 0.015501468908041716
Test_acc: 1.0
--------------------------
Epoch 249, loss: 0.01548970746807754
Test_acc: 1.0
--------------------------
Epoch 250, loss: 0.015478059882298112
Test_acc: 1.0
--------------------------
Epoch 251, loss: 0.01546645408961922
Test_acc: 1.0
--------------------------
Epoch 252, loss: 0.015454950043931603
Test_acc: 1.0
--------------------------
Epoch 253, loss: 0.015443534590303898
Test_acc: 1.0
--------------------------
Epoch 254, loss: 0.01543217059224844
Test_acc: 1.0
--------------------------
Epoch 255, loss: 0.015420892275869846
Test_acc: 1.0
--------------------------
Epoch 256, loss: 0.015409694518893957
Test_acc: 1.0
--------------------------
Epoch 257, loss: 0.015398570918478072
Test_acc: 1.0
--------------------------
Epoch 258, loss: 0.015387511346489191
Test_acc: 1.0
--------------------------
Epoch 259, loss: 0.015376557945273817
Test_acc: 1.0
--------------------------
Epoch 260, loss: 0.01536564459092915
Test_acc: 1.0
--------------------------
Epoch 261, loss: 0.015354814822785556
Test_acc: 1.0
--------------------------
Epoch 262, loss: 0.015344054787419736
Test_acc: 1.0
--------------------------
Epoch 263, loss: 0.015333378803916276
Test_acc: 1.0
--------------------------
Epoch 264, loss: 0.015322742285206914
Test_acc: 1.0
--------------------------
Epoch 265, loss: 0.015312209143303335
Test_acc: 1.0
--------------------------
Epoch 266, loss: 0.015301719540730119
Test_acc: 1.0
--------------------------
Epoch 267, loss: 0.015291306306608021
Test_acc: 1.0
--------------------------
Epoch 268, loss: 0.015280970837920904
Test_acc: 1.0
--------------------------
Epoch 269, loss: 0.01527068973518908
Test_acc: 1.0
--------------------------
Epoch 270, loss: 0.015260480344295502
Test_acc: 1.0
--------------------------
Epoch 271, loss: 0.01525033253710717
Test_acc: 1.0
--------------------------
Epoch 272, loss: 0.015240265754982829
Test_acc: 1.0
--------------------------
Epoch 273, loss: 0.015230232733301818
Test_acc: 1.0
--------------------------
Epoch 274, loss: 0.015220295172184706
Test_acc: 1.0
--------------------------
Epoch 275, loss: 0.015210400801151991
Test_acc: 1.0
--------------------------
Epoch 276, loss: 0.015200570342130959
Test_acc: 1.0
--------------------------
Epoch 277, loss: 0.01519080065190792
Test_acc: 1.0
--------------------------
Epoch 278, loss: 0.015181119553744793
Test_acc: 1.0
--------------------------
Epoch 279, loss: 0.015171458013355732
Test_acc: 1.0
--------------------------
Epoch 280, loss: 0.015161880757659674
Test_acc: 1.0
--------------------------
Epoch 281, loss: 0.015152374980971217
Test_acc: 1.0
--------------------------
Epoch 282, loss: 0.015142898308113217
Test_acc: 1.0
--------------------------
Epoch 283, loss: 0.01513349951710552
Test_acc: 1.0
--------------------------
Epoch 284, loss: 0.015124151017516851
Test_acc: 1.0
--------------------------
Epoch 285, loss: 0.015114857698790729
Test_acc: 1.0
--------------------------
Epoch 286, loss: 0.015105637721717358
Test_acc: 1.0
--------------------------
Epoch 287, loss: 0.01509645814076066
Test_acc: 1.0
--------------------------
Epoch 288, loss: 0.015087342355400324
Test_acc: 1.0
--------------------------
Epoch 289, loss: 0.015078289317898452
Test_acc: 1.0
--------------------------
Epoch 290, loss: 0.015069274348206818
Test_acc: 1.0
--------------------------
Epoch 291, loss: 0.015060322941280901
Test_acc: 1.0
--------------------------
Epoch 292, loss: 0.015051410999149084
Test_acc: 1.0
--------------------------
Epoch 293, loss: 0.01504258718341589
Test_acc: 1.0
--------------------------
Epoch 294, loss: 0.015033786883577704
Test_acc: 1.0
--------------------------
Epoch 295, loss: 0.01502505224198103
Test_acc: 1.0
--------------------------
Epoch 296, loss: 0.015016362420283258
Test_acc: 1.0
--------------------------
Epoch 297, loss: 0.015007730107754469
Test_acc: 1.0
--------------------------
Epoch 298, loss: 0.014999139588326216
Test_acc: 1.0
--------------------------
Epoch 299, loss: 0.014990614261478186
Test_acc: 1.0
--------------------------
Epoch 300, loss: 0.014982132706791162
Test_acc: 1.0
--------------------------
Epoch 301, loss: 0.014973703306168318
Test_acc: 1.0
--------------------------
Epoch 302, loss: 0.014965309645049274
Test_acc: 1.0
--------------------------
Epoch 303, loss: 0.0149569904897362
Test_acc: 1.0
--------------------------
Epoch 304, loss: 0.01494870288297534
Test_acc: 1.0
--------------------------
Epoch 305, loss: 0.014940456254407763
Test_acc: 1.0
--------------------------
Epoch 306, loss: 0.014932268531993032
Test_acc: 1.0
--------------------------
Epoch 307, loss: 0.014924140414223075
Test_acc: 1.0
--------------------------
Epoch 308, loss: 0.014916043146513402
Test_acc: 1.0
--------------------------
Epoch 309, loss: 0.014907981734722853
Test_acc: 1.0
--------------------------
Epoch 310, loss: 0.01489999785553664
Test_acc: 1.0
--------------------------
Epoch 311, loss: 0.01489203330129385
Test_acc: 1.0
--------------------------
Epoch 312, loss: 0.014884120202623308
Test_acc: 1.0
--------------------------
Epoch 313, loss: 0.014876263216137886
Test_acc: 1.0
--------------------------
Epoch 314, loss: 0.014868438942357898
Test_acc: 1.0
--------------------------
Epoch 315, loss: 0.014860654715448618
Test_acc: 1.0
--------------------------
Epoch 316, loss: 0.014852941036224365
Test_acc: 1.0
--------------------------
Epoch 317, loss: 0.014845242723822594
Test_acc: 1.0
--------------------------
Epoch 318, loss: 0.014837592374533415
Test_acc: 1.0
--------------------------
Epoch 319, loss: 0.01482999639119953
Test_acc: 1.0
--------------------------
Epoch 320, loss: 0.014822438824921846
Test_acc: 1.0
--------------------------
Epoch 321, loss: 0.014814927708357573
Test_acc: 1.0
--------------------------
Epoch 322, loss: 0.014807443832978606
Test_acc: 1.0
--------------------------
Epoch 323, loss: 0.014800029923208058
Test_acc: 1.0
--------------------------
Epoch 324, loss: 0.014792609727010131
Test_acc: 1.0
--------------------------
Epoch 325, loss: 0.01478527463041246
Test_acc: 1.0
--------------------------
Epoch 326, loss: 0.014777959324419498
Test_acc: 1.0
--------------------------
Epoch 327, loss: 0.014770696870982647
Test_acc: 1.0
--------------------------
Epoch 328, loss: 0.014763465849682689
Test_acc: 1.0
--------------------------
Epoch 329, loss: 0.014756261254660785
Test_acc: 1.0
--------------------------
Epoch 330, loss: 0.0147491164971143
Test_acc: 1.0
--------------------------
Epoch 331, loss: 0.014742008643224835
Test_acc: 1.0
--------------------------
Epoch 332, loss: 0.014734935597516596
Test_acc: 1.0
--------------------------
Epoch 333, loss: 0.014727884205058217
Test_acc: 1.0
--------------------------
Epoch 334, loss: 0.014720908482559025
Test_acc: 1.0
--------------------------
Epoch 335, loss: 0.01471394021064043
Test_acc: 1.0
--------------------------
Epoch 336, loss: 0.014707002672366798
Test_acc: 1.0
--------------------------
Epoch 337, loss: 0.014700138824991882
Test_acc: 1.0
--------------------------
Epoch 338, loss: 0.014693273231387138
Test_acc: 1.0
--------------------------
Epoch 339, loss: 0.014686470851302147
Test_acc: 1.0
--------------------------
Epoch 340, loss: 0.014679691521450877
Test_acc: 1.0
--------------------------
Epoch 341, loss: 0.014672958292067051
Test_acc: 1.0
--------------------------
Epoch 342, loss: 0.014666252071037889
Test_acc: 1.0
--------------------------
Epoch 343, loss: 0.014659582288004458
Test_acc: 1.0
--------------------------
Epoch 344, loss: 0.014652953832410276
Test_acc: 1.0
--------------------------
Epoch 345, loss: 0.014646349591203034
Test_acc: 1.0
--------------------------
Epoch 346, loss: 0.014639788772910833
Test_acc: 1.0
--------------------------
Epoch 347, loss: 0.014633252052590251
Test_acc: 1.0
--------------------------
Epoch 348, loss: 0.014626753982156515
Test_acc: 1.0
--------------------------
Epoch 349, loss: 0.014620293397456408
Test_acc: 1.0
--------------------------
Epoch 350, loss: 0.014613865874707699
Test_acc: 1.0
--------------------------
Epoch 351, loss: 0.014607474207878113
Test_acc: 1.0
--------------------------
Epoch 352, loss: 0.014601112343370914
Test_acc: 1.0
--------------------------
Epoch 353, loss: 0.014594777021557093
Test_acc: 1.0
--------------------------
Epoch 354, loss: 0.014588481397368014
Test_acc: 1.0
--------------------------
Epoch 355, loss: 0.014582217670977116
Test_acc: 1.0
--------------------------
Epoch 356, loss: 0.014575970824807882
Test_acc: 1.0
--------------------------
Epoch 357, loss: 0.014569798018783331
Test_acc: 1.0
--------------------------
Epoch 358, loss: 0.014563607517629862
Test_acc: 1.0
--------------------------
Epoch 359, loss: 0.014557461603544652
Test_acc: 1.0
--------------------------
Epoch 360, loss: 0.014551374129951
Test_acc: 1.0
--------------------------
Epoch 361, loss: 0.014545290730893612
Test_acc: 1.0
--------------------------
Epoch 362, loss: 0.01453923317603767
Test_acc: 1.0
--------------------------
Epoch 363, loss: 0.01453323953319341
Test_acc: 1.0
--------------------------
Epoch 364, loss: 0.01452724460978061
Test_acc: 1.0
--------------------------
Epoch 365, loss: 0.014521293342113495
Test_acc: 1.0
--------------------------
Epoch 366, loss: 0.014515362679958344
Test_acc: 1.0
--------------------------
Epoch 367, loss: 0.014509474509395659
Test_acc: 1.0
--------------------------
Epoch 368, loss: 0.01450361032038927
Test_acc: 1.0
--------------------------
Epoch 369, loss: 0.014497770695015788
Test_acc: 1.0
--------------------------
Epoch 370, loss: 0.014491959009319544
Test_acc: 1.0
--------------------------
Epoch 371, loss: 0.014486184227280319
Test_acc: 1.0
--------------------------
Epoch 372, loss: 0.014480439480394125
Test_acc: 1.0
--------------------------
Epoch 373, loss: 0.01447470486164093
Test_acc: 1.0
--------------------------
Epoch 374, loss: 0.014469023328274488
Test_acc: 1.0
--------------------------
Epoch 375, loss: 0.014463347615674138
Test_acc: 1.0
--------------------------
Epoch 376, loss: 0.014457702403888106
Test_acc: 1.0
--------------------------
Epoch 377, loss: 0.014452096307650208
Test_acc: 1.0
--------------------------
Epoch 378, loss: 0.014446516055613756
Test_acc: 1.0
--------------------------
Epoch 379, loss: 0.014440959668718278
Test_acc: 1.0
--------------------------
Epoch 380, loss: 0.014435420278459787
Test_acc: 1.0
--------------------------
Epoch 381, loss: 0.014429924311116338
Test_acc: 1.0
--------------------------
Epoch 382, loss: 0.014424445922486484
Test_acc: 1.0
--------------------------
Epoch 383, loss: 0.014418993960134685
Test_acc: 1.0
--------------------------
Epoch 384, loss: 0.014413557713851333
Test_acc: 1.0
--------------------------
Epoch 385, loss: 0.014408168848603964
Test_acc: 1.0
--------------------------
Epoch 386, loss: 0.014402793603949249
Test_acc: 1.0
--------------------------
Epoch 387, loss: 0.014397447230294347
Test_acc: 1.0
--------------------------
Epoch 388, loss: 0.014392116339877248
Test_acc: 1.0
--------------------------
Epoch 389, loss: 0.014386835391633213
Test_acc: 1.0
--------------------------
Epoch 390, loss: 0.014381547924131155
Test_acc: 1.0
--------------------------
Epoch 391, loss: 0.014376316918060184
Test_acc: 1.0
--------------------------
Epoch 392, loss: 0.014371089520864189
Test_acc: 1.0
--------------------------
Epoch 393, loss: 0.014365885173901916
Test_acc: 1.0
--------------------------
Epoch 394, loss: 0.014360723085701466
Test_acc: 1.0
--------------------------
Epoch 395, loss: 0.014355575549416244
Test_acc: 1.0
--------------------------
Epoch 396, loss: 0.014350440818816423
Test_acc: 1.0
--------------------------
Epoch 397, loss: 0.014345347648486495
Test_acc: 1.0
--------------------------
Epoch 398, loss: 0.014340281370095909
Test_acc: 1.0
--------------------------
Epoch 399, loss: 0.014335206942632794
Test_acc: 1.0
--------------------------
Epoch 400, loss: 0.01433018024545163
Test_acc: 1.0
--------------------------
Epoch 401, loss: 0.01432518067304045
Test_acc: 1.0
--------------------------
Epoch 402, loss: 0.014320191810838878
Test_acc: 1.0
--------------------------
Epoch 403, loss: 0.014315234962850809
Test_acc: 1.0
--------------------------
Epoch 404, loss: 0.014310291735455394
Test_acc: 1.0
--------------------------
Epoch 405, loss: 0.014305362477898598
Test_acc: 1.0
--------------------------
Epoch 406, loss: 0.014300492941401899
Test_acc: 1.0
--------------------------
Epoch 407, loss: 0.014295608852989972
Test_acc: 1.0
--------------------------
Epoch 408, loss: 0.014290761551819742
Test_acc: 1.0
--------------------------
Epoch 409, loss: 0.01428593136370182
Test_acc: 1.0
--------------------------
Epoch 410, loss: 0.01428113307338208
Test_acc: 1.0
--------------------------
Epoch 411, loss: 0.014276345609687269
Test_acc: 1.0
--------------------------
Epoch 412, loss: 0.014271591557189822
Test_acc: 1.0
--------------------------
Epoch 413, loss: 0.014266852289438248
Test_acc: 1.0
--------------------------
Epoch 414, loss: 0.014262123266234994
Test_acc: 1.0
--------------------------
Epoch 415, loss: 0.014257429749704897
Test_acc: 1.0
--------------------------
Epoch 416, loss: 0.014252759981900454
Test_acc: 1.0
--------------------------
Epoch 417, loss: 0.014248088235035539
Test_acc: 1.0
--------------------------
Epoch 418, loss: 0.01424346084240824
Test_acc: 1.0
--------------------------
Epoch 419, loss: 0.014238850213587284
Test_acc: 1.0
--------------------------
Epoch 420, loss: 0.014234240516088903
Test_acc: 1.0
--------------------------
Epoch 421, loss: 0.014229685300961137
Test_acc: 1.0
--------------------------
Epoch 422, loss: 0.014225112274289131
Test_acc: 1.0
--------------------------
Epoch 423, loss: 0.014220598270185292
Test_acc: 1.0
--------------------------
Epoch 424, loss: 0.014216073090210557
Test_acc: 1.0
--------------------------
Epoch 425, loss: 0.014211573405191302
Test_acc: 1.0
--------------------------
Epoch 426, loss: 0.014207117143087089
Test_acc: 1.0
--------------------------
Epoch 427, loss: 0.014202660764567554
Test_acc: 1.0
--------------------------
Epoch 428, loss: 0.014198215096257627
Test_acc: 1.0
--------------------------
Epoch 429, loss: 0.014193806797266006
Test_acc: 1.0
--------------------------
Epoch 430, loss: 0.01418940513394773
Test_acc: 1.0
--------------------------
Epoch 431, loss: 0.014185025705955923
Test_acc: 1.0
--------------------------
Epoch 432, loss: 0.014180665253661573
Test_acc: 1.0
--------------------------
Epoch 433, loss: 0.014176331809721887
Test_acc: 1.0
--------------------------
Epoch 434, loss: 0.014172009890899062
Test_acc: 1.0
--------------------------
Epoch 435, loss: 0.014167708111926913
Test_acc: 1.0
--------------------------
Epoch 436, loss: 0.01416342135053128
Test_acc: 1.0
--------------------------
Epoch 437, loss: 0.014159158803522587
Test_acc: 1.0
--------------------------
Epoch 438, loss: 0.014154908247292042
Test_acc: 1.0
--------------------------
Epoch 439, loss: 0.01415068015921861
Test_acc: 1.0
--------------------------
Epoch 440, loss: 0.014146470814011991
Test_acc: 1.0
--------------------------
Epoch 441, loss: 0.014142273808829486
Test_acc: 1.0
--------------------------
Epoch 442, loss: 0.014138103928416967
Test_acc: 1.0
--------------------------
Epoch 443, loss: 0.014133934048004448
Test_acc: 1.0
--------------------------
Epoch 444, loss: 0.014129796763882041
Test_acc: 1.0
--------------------------
Epoch 445, loss: 0.014125673333182931
Test_acc: 1.0
--------------------------
Epoch 446, loss: 0.01412156573496759
Test_acc: 1.0
--------------------------
Epoch 447, loss: 0.01411747164092958
Test_acc: 1.0
--------------------------
Epoch 448, loss: 0.014113409677520394
Test_acc: 1.0
--------------------------
Epoch 449, loss: 0.01410935865715146
Test_acc: 1.0
--------------------------
Epoch 450, loss: 0.014105314621701837
Test_acc: 1.0
--------------------------
Epoch 451, loss: 0.01410129270516336
Test_acc: 1.0
--------------------------
Epoch 452, loss: 0.014097294886596501
Test_acc: 1.0
--------------------------
Epoch 453, loss: 0.01409330649767071
Test_acc: 1.0
--------------------------
Epoch 454, loss: 0.0140893270727247
Test_acc: 1.0
--------------------------
Epoch 455, loss: 0.014085383154451847
Test_acc: 1.0
--------------------------
Epoch 456, loss: 0.014081442495808005
Test_acc: 1.0
--------------------------
Epoch 457, loss: 0.014077521162107587
Test_acc: 1.0
--------------------------
Epoch 458, loss: 0.014073614031076431
Test_acc: 1.0
--------------------------
Epoch 459, loss: 0.014069720171391964
Test_acc: 1.0
--------------------------
Epoch 460, loss: 0.014065856579691172
Test_acc: 1.0
--------------------------
Epoch 461, loss: 0.014061981113627553
Test_acc: 1.0
--------------------------
Epoch 462, loss: 0.014058157452382147
Test_acc: 1.0
--------------------------
Epoch 463, loss: 0.014054326340556145
Test_acc: 1.0
--------------------------
Epoch 464, loss: 0.014050501631572843
Test_acc: 1.0
--------------------------
Epoch 465, loss: 0.01404671953059733
Test_acc: 1.0
--------------------------
Epoch 466, loss: 0.014042934169992805
Test_acc: 1.0
--------------------------
Epoch 467, loss: 0.014039159985259175
Test_acc: 1.0
--------------------------
Epoch 468, loss: 0.014035420725122094
Test_acc: 1.0
--------------------------
Epoch 469, loss: 0.014031685655936599
Test_acc: 1.0
--------------------------
Epoch 470, loss: 0.01402796688489616
Test_acc: 1.0
--------------------------
Epoch 471, loss: 0.014024262549355626
Test_acc: 1.0
--------------------------
Epoch 472, loss: 0.014020572765730321
Test_acc: 1.0
--------------------------
Epoch 473, loss: 0.014016890665516257
Test_acc: 1.0
--------------------------
Epoch 474, loss: 0.01401323382742703
Test_acc: 1.0
--------------------------
Epoch 475, loss: 0.014009584672749043
Test_acc: 1.0
--------------------------
Epoch 476, loss: 0.014005956589244306
Test_acc: 1.0
--------------------------
Epoch 477, loss: 0.014002344571053982
Test_acc: 1.0
--------------------------
Epoch 478, loss: 0.013998741749674082
Test_acc: 1.0
--------------------------
Epoch 479, loss: 0.013995146146044135
Test_acc: 1.0
--------------------------
Epoch 480, loss: 0.013991578598506749
Test_acc: 1.0
--------------------------
Epoch 481, loss: 0.013988004415296018
Test_acc: 1.0
--------------------------
Epoch 482, loss: 0.013984471908770502
Test_acc: 1.0
--------------------------
Epoch 483, loss: 0.013980937073938549
Test_acc: 1.0
--------------------------
Epoch 484, loss: 0.013977412017993629
Test_acc: 1.0
--------------------------
Epoch 485, loss: 0.013973913039080799
Test_acc: 1.0
--------------------------
Epoch 486, loss: 0.013970404514111578
Test_acc: 1.0
--------------------------
Epoch 487, loss: 0.013966937316581607
Test_acc: 1.0
--------------------------
Epoch 488, loss: 0.013963470933958888
Test_acc: 1.0
--------------------------
Epoch 489, loss: 0.013960018288344145
Test_acc: 1.0
--------------------------
Epoch 490, loss: 0.013956583105027676
Test_acc: 1.0
--------------------------
Epoch 491, loss: 0.01395316724665463
Test_acc: 1.0
--------------------------
Epoch 492, loss: 0.013949738815426826
Test_acc: 1.0
--------------------------
Epoch 493, loss: 0.013946352060884237
Test_acc: 1.0
--------------------------
Epoch 494, loss: 0.013942966354079545
Test_acc: 1.0
--------------------------
Epoch 495, loss: 0.013939589727669954
Test_acc: 1.0
--------------------------
Epoch 496, loss: 0.013936241273768246
Test_acc: 1.0
--------------------------
Epoch 497, loss: 0.01393288210965693
Test_acc: 1.0
--------------------------
Epoch 498, loss: 0.01392954622860998
Test_acc: 1.0
--------------------------
Epoch 499, loss: 0.01392624038271606
Test_acc: 1.0
--------------------------
total_time 60.744953870773315
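
For reference, the run above ends with the loss down to about 0.0139 at epoch 499, the test accuracy holding at 1.0, and a total wall-clock time of roughly 60.7 s. The script that produced this log is not shown in this excerpt, so the block below is only a minimal, self-contained sketch of an Iris training loop that prints output in the same Epoch / loss / Test_acc / total_time format; the learning rate, seeds, batch size, and names such as train_db, test_db, w1, b1 are illustrative assumptions, not taken from the original code.

import time
import numpy as np
import tensorflow as tf
from sklearn import datasets

# Load and shuffle the Iris dataset (assumption: the log above comes from a
# plain Iris classifier; the exact script is not part of this excerpt).
x_data = datasets.load_iris().data
y_data = datasets.load_iris().target
np.random.seed(116)
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)

# Split: first 120 samples for training, last 30 for testing (illustrative split).
x_train, y_train = x_data[:-30], y_data[:-30]
x_test, y_test = x_data[-30:], y_data[-30:]
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)

train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

# Single-layer classifier: 4 features -> 3 classes.
w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))
b1 = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))

lr = 0.1           # learning rate (illustrative value)
epoch_num = 500    # matches the 0..499 epochs printed above

now_time = time.time()  # start the timer; total_time is printed at the end
for epoch in range(epoch_num):
    loss_all = 0
    for step, (x_tr, y_tr) in enumerate(train_db):
        with tf.GradientTape() as tape:
            y = tf.nn.softmax(tf.matmul(x_tr, w1) + b1)
            y_ = tf.one_hot(y_tr, depth=3)
            loss = tf.reduce_mean(tf.square(y_ - y))  # MSE loss
        grads = tape.gradient(loss, [w1, b1])
        w1.assign_sub(lr * grads[0])   # SGD update: step against the gradient
        b1.assign_sub(lr * grads[1])
        loss_all += loss.numpy()
    print("Epoch {}, loss: {}".format(epoch, loss_all / 4))  # 120/32 -> 4 batches

    # Evaluate accuracy on the held-out test set after every epoch.
    total_correct, total_number = 0, 0
    for x_te, y_te in test_db:
        y = tf.nn.softmax(tf.matmul(x_te, w1) + b1)
        pred = tf.cast(tf.argmax(y, axis=1), dtype=y_te.dtype)
        total_correct += int(tf.reduce_sum(tf.cast(tf.equal(pred, y_te), tf.int32)))
        total_number += x_te.shape[0]
    print("Test_acc:", total_correct / total_number)
    print("--------------------------")

total_time = time.time() - now_time
print("total_time", total_time)  # wall-clock seconds for the whole run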
