[TensorFlow Self-Study 1] Tensor Creation + Common Functions + A Neural-Network Classification Example

0. Basic Introduction

Artificial intelligence: machines that possess human-like thinking and consciousness.

Behaviorism: building perception-action control systems based on cybernetics.

Symbolism: rational thinking based on arithmetic and logic expressions, describable by formulas and interpretable (expert systems).

Connectionism: neural networks that realize perceptual thinking (by simulating neurons).

Workflow: prepare the data (feature/label pairs), build the network, optimize the structure (obtain the best parameters), and apply the model (feed in new data and produce the output via forward propagation).

Learning rate: the step size by which parameters are updated along the gradient direction during gradient descent (too small a learning rate requires many updates; too large a learning rate makes the parameter jump back and forth around the optimum).
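As a quick worked step (the update rule below is the standard gradient-descent rule, stated here for reference): each update applies $w \leftarrow w - lr\cdot\frac{\partial L}{\partial w}$. For the loss $L=(w+1)^2$ used in the code below, $\frac{\partial L}{\partial w}=2(w+1)$, so the first update from $w=5$ with $lr=0.2$ gives $w=5-0.2\times 2\times(5+1)=2.6$, matching the first line of the output.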

import tensorflow as tf

w=tf.Variable(tf.constant(5,dtype=tf.float32))
lr=0.2
epoch=40

for epoch in range(epoch): #number of passes over the data
    with tf.GradientTape() as tape: #the with block records the ops used to differentiate loss w.r.t. w
        loss=tf.square(w+1) #loss function $L=(w+1)^2$
    grads=tape.gradient(loss,w) #tape.gradient(A,B) computes the derivative of A with respect to B

    w.assign_sub(lr*grads) #the parameter subtracts lr*gradient (moves in the negative gradient direction)
    print("After %s epoch, w is %f, loss is %f" %(epoch,w.numpy(),loss))

# Change the learning rate (e.g. 0.001 or 0.999) and observe the different behavior; see the comparison sketch after the output below

After 0 epoch, w is 2.600000, loss is 36.000000
After 1 epoch, w is 1.160000, loss is 12.959999
After 2 epoch, w is 0.296000, loss is 4.665599
After 3 epoch, w is -0.222400, loss is 1.679616
After 4 epoch, w is -0.533440, loss is 0.604662
After 5 epoch, w is -0.720064, loss is 0.217678
After 6 epoch, w is -0.832038, loss is 0.078364
After 7 epoch, w is -0.899223, loss is 0.028211
After 8 epoch, w is -0.939534, loss is 0.010156
After 9 epoch, w is -0.963720, loss is 0.003656
After 10 epoch, w is -0.978232, loss is 0.001316
After 11 epoch, w is -0.986939, loss is 0.000474
After 12 epoch, w is -0.992164, loss is 0.000171
After 13 epoch, w is -0.995298, loss is 0.000061
After 14 epoch, w is -0.997179, loss is 0.000022
After 15 epoch, w is -0.998307, loss is 0.000008
After 16 epoch, w is -0.998984, loss is 0.000003
After 17 epoch, w is -0.999391, loss is 0.000001
After 18 epoch, w is -0.999634, loss is 0.000000
After 19 epoch, w is -0.999781, loss is 0.000000
After 20 epoch, w is -0.999868, loss is 0.000000
After 21 epoch, w is -0.999921, loss is 0.000000
After 22 epoch, w is -0.999953, loss is 0.000000
After 23 epoch, w is -0.999972, loss is 0.000000
After 24 epoch, w is -0.999983, loss is 0.000000
After 25 epoch, w is -0.999990, loss is 0.000000
After 26 epoch, w is -0.999994, loss is 0.000000
After 27 epoch, w is -0.999996, loss is 0.000000
After 28 epoch, w is -0.999998, loss is 0.000000
After 29 epoch, w is -0.999999, loss is 0.000000
After 30 epoch, w is -0.999999, loss is 0.000000
After 31 epoch, w is -1.000000, loss is 0.000000
After 32 epoch, w is -1.000000, loss is 0.000000
After 33 epoch, w is -1.000000, loss is 0.000000
After 34 epoch, w is -1.000000, loss is 0.000000
After 35 epoch, w is -1.000000, loss is 0.000000
After 36 epoch, w is -1.000000, loss is 0.000000
After 37 epoch, w is -1.000000, loss is 0.000000
After 38 epoch, w is -1.000000, loss is 0.000000
After 39 epoch, w is -1.000000, loss is 0.000000


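A minimal comparison sketch (my own addition, not from the original post; the helper name run_gd is made up) that reruns the same loop with the learning rates mentioned in the comment above:

import tensorflow as tf

def run_gd(lr,epochs=40): #rerun the gradient descent above with the given learning rate
    w=tf.Variable(tf.constant(5,dtype=tf.float32))
    for _ in range(epochs):
        with tf.GradientTape() as tape:
            loss=tf.square(w+1)
        grads=tape.gradient(loss,w)
        w.assign_sub(lr*grads)
    return w.numpy()

for lr in (0.001,0.2,0.999):
    print("lr=%s -> w after 40 epochs: %f" %(lr,run_gd(lr)))
#lr=0.001 barely moves w toward -1; lr=0.2 converges smoothly; lr=0.999 overshoots and oscillates around -1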

1.1 Tensor Creation

Tensor: a multidimensional array (list). The number of dimensions of a tensor is its rank; a rank-1 tensor is a vector, a rank-2 tensor is a matrix, and so on.

Data types:
tf.int32, tf.float32, tf.float64
tf.bool – tf.constant([True, False])
tf.string – tf.constant("Hello, world!")

Create a tensor:

tf.constant(values, dtype=data_type)
import tensorflow as tf
a=tf.constant([1,5],dtype=tf.int64)
print(a) #prints all the tensor's information
print(a.dtype)
print(a.shape)
tf.Tensor([1 5], shape=(2,), dtype=int64)
<dtype: 'int64'>
(2,)

Data often comes in NumPy format; convert a NumPy array into a Tensor with:

tf.convert_to_tensor(array, dtype=data_type)
import numpy as np
a=np.arange(0,5)
b=tf.convert_to_tensor(a,dtype=tf.int64)
print(a)
print(b)
[0 1 2 3 4]
tf.Tensor([0 1 2 3 4], shape=(5,), dtype=int64)

Create an all-zeros tensor:

tf.zeros(shape)

Create an all-ones tensor:

tf.ones(shape)

Create a tensor filled with a specified value:

tf.fill(shape, value)

Shape: a list whose entries give the size of each dimension.

a=tf.zeros([2,3])
b=tf.ones(4)
c=tf.fill([2,2],9)
print(a)
print(b)
print(c)
tf.Tensor(
[[0. 0. 0.]
 [0. 0. 0.]], shape=(2, 3), dtype=float32)
tf.Tensor([1. 1. 1. 1.], shape=(4,), dtype=float32)
tf.Tensor(
[[9 9]
 [9 9]], shape=(2, 2), dtype=int32)

Generate normally distributed random numbers (default mean 0, standard deviation 1):

tf.random.normal(shape, mean=..., stddev=...)

Generate truncated normally distributed random numbers:

tf.random.truncated_normal(shape, mean=..., stddev=...)

This keeps the generated values within $(\mu-2\sigma,\mu+2\sigma)$; values outside this range are redrawn.

Generate uniformly distributed random numbers in [minval, maxval):

tf.random.uniform(shape, minval=..., maxval=...)
d=tf.random.normal([2,2],mean=0.5,stddev=1)
print(d)

e=tf.random.truncated_normal([2,2],mean=0.5,stddev=1)
print(e)

f=tf.random.uniform([2,2],minval=0,maxval=1)
print(f)
tf.Tensor(
[[1.1372004  0.34099764]
 [0.67257375 0.646933  ]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[ 0.8715459  -0.37803584]
 [-0.9432651  -0.22481954]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[0.64691687 0.2477417 ]
 [0.6576574  0.9563998 ]], shape=(2, 2), dtype=float32)

1.2 Common Functions

Cast to another data type:

tf.cast(tensor, dtype=data_type)

Compute the minimum element of a tensor:

tf.reduce_min(tensor)

Compute the maximum element of a tensor:

tf.reduce_max(tensor)
x1=tf.constant([1.,2.,3.],dtype=tf.float64)
print(x1)

x2=tf.cast(x1,tf.int32)
print(x2)

print(tf.reduce_min(x2),tf.reduce_max(x2))
tf.Tensor([1. 2. 3.], shape=(3,), dtype=float64)
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32)

axis specifies the direction along which an operation is applied:

axis=0 operates across rows (vertically, i.e. over each column)
axis=1 operates across columns (horizontally, i.e. over each row)

Compute the mean of a tensor along a given axis:

tf.reduce_mean(tensor, axis=0/1)

Sum:

tf.reduce_sum(tensor, axis=0/1)

If axis is not given, the operation is applied to all elements by default.

x=tf.constant([[1,2,3],[2,2,3]])
print(x)
print(tf.reduce_mean(x))
print(tf.reduce_sum(x,axis=1))
tf.Tensor(
[[1 2 3]
 [2 2 3]], shape=(2, 3), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor([6 7], shape=(2,), dtype=int32)

tf.Variable(initial_value) marks a variable as trainable; marked variables (the trainable parameters) have gradient information recorded for them during backpropagation.

#initialize a trainable parameter
#w=tf.Variable(tf.random.normal([2,2],mean=0,stddev=1))
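A minimal sketch (my own addition, not from the original) showing that a tf.Variable is watched by tf.GradientTape automatically, while an ordinary constant is not:

import tensorflow as tf

v=tf.Variable(tf.random.normal([2,2],mean=0,stddev=1)) #trainable parameter
c=tf.constant(1.0) #plain tensor, not trainable
with tf.GradientTape() as tape:
    y=tf.reduce_sum(tf.square(v))+tf.square(c)
grads=tape.gradient(y,[v,c])
print(grads[0]) #2*v: the Variable's gradient is recorded automatically
print(grads[1]) #None: constants are not watched unless tape.watch(c) is called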

Add, subtract, multiply, divide:

tf.add / tf.subtract / tf.multiply / tf.divide(tensor1, tensor2)

These operations are element-wise, so the two tensors must have the same shape (or shapes that can be broadcast together).

Square, n-th power, square root:

tf.square(tensor), tf.pow(tensor, n), tf.sqrt(tensor)

Matrix multiplication:

tf.matmul(matrix1, matrix2)
a=tf.ones([1,3])
b=tf.fill([1,3],3.)
print(a)
print(b)
print(tf.add(a,b))
print(tf.subtract(a,b))
print(tf.multiply(a,b))
print(tf.divide(a,b))

c=tf.fill([1,2],3.)
print(c)
print(tf.pow(c,3))
print(tf.square(c))
print(tf.sqrt(c))

d=tf.ones([3,2])
e=tf.fill([2,3],3.)
print(tf.matmul(d,e))
tf.Tensor([[1. 1. 1.]], shape=(1, 3), dtype=float32)
tf.Tensor([[3. 3. 3.]], shape=(1, 3), dtype=float32)
tf.Tensor([[4. 4. 4.]], shape=(1, 3), dtype=float32)
tf.Tensor([[-2. -2. -2.]], shape=(1, 3), dtype=float32)
tf.Tensor([[3. 3. 3.]], shape=(1, 3), dtype=float32)
tf.Tensor([[0.33333334 0.33333334 0.33333334]], shape=(1, 3), dtype=float32)
tf.Tensor([[3. 3.]], shape=(1, 2), dtype=float32)
tf.Tensor([[26.999992 26.999992]], shape=(1, 2), dtype=float32)
tf.Tensor([[9. 9.]], shape=(1, 2), dtype=float32)
tf.Tensor([[1.7320508 1.7320508]], shape=(1, 2), dtype=float32)
tf.Tensor(
[[6. 6. 6.]
 [6. 6. 6.]
 [6. 6. 6.]], shape=(3, 3), dtype=float32)

A neural network's input layer takes data as (feature, label) pairs. This function pairs each row of the feature matrix with the corresponding entry of the label vector (both NumPy and Tensor data can be read in):

data=tf.data.Dataset.from_tensor_slices((features, labels))
features=tf.constant([[12,23,10],[1,2,3],[5,6,7],[9,10,11]])
labels=tf.constant([0,1,1,0])
dataset=tf.data.Dataset.from_tensor_slices((features,labels))
print(dataset)
for element in dataset:
    print(element)
<TensorSliceDataset element_spec=(TensorSpec(shape=(3,), dtype=tf.int32, name=None), TensorSpec(shape=(), dtype=tf.int32, name=None))>
(<tf.Tensor: shape=(3,), dtype=int32, numpy=array([12, 23, 10], dtype=int32)>, <tf.Tensor: shape=(), dtype=int32, numpy=0>)
(<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 2, 3], dtype=int32)>, <tf.Tensor: shape=(), dtype=int32, numpy=1>)
(<tf.Tensor: shape=(3,), dtype=int32, numpy=array([5, 6, 7], dtype=int32)>, <tf.Tensor: shape=(), dtype=int32, numpy=1>)
(<tf.Tensor: shape=(3,), dtype=int32, numpy=array([ 9, 10, 11], dtype=int32)>, <tf.Tensor: shape=(), dtype=int32, numpy=0>)

tf.GradientTape: the with block records how a function computes on its parameters, and the gradient function returns the gradient of a tensor.

with tf.GradientTape() as tape:
    loss=xxxx
    # ...other computations (there can be several)
grad=tape.gradient(target, sources)
with tf.GradientTape() as tape:
    w=tf.Variable(tf.constant(3.0))
    loss=tf.pow(w,2)
grad=tape.gradient(loss,w)
print(grad)
tf.Tensor(5.9999995, shape=(), dtype=float32)

enumerate(list): enumerates each element of a list, tuple, or string together with its index; commonly used inside a for loop.

seq=['one','two','three']
for i,element in enumerate(seq):
    print(i,element)
0 one
1 two
2 three


tf.one_hot

One-hot encoding: in classification problems, one-hot vectors are commonly used as labels, especially for multi-class problems; when handling unstructured data such as graphs or text, one-hot vectors also often serve as the initial features of nodes or words.

tf.one_hot() converts the given data directly into one-hot form:

tf.one_hot(data, depth=num_classes)
classes=3
labels=tf.constant([1,0,2]) #input elements range from 0 (smallest) to 2 (largest)
output=tf.one_hot(labels,depth=classes)
print(output)
tf.Tensor(
[[0. 1. 0.]
 [1. 0. 0.]
 [0. 0. 1.]], shape=(3, 3), dtype=float32)

tf.nn.softmax: converts an ordinary array into one that forms a probability distribution (entries sum to 1).

For an n-class problem the output layer is an n-dimensional vector, each entry of which is produced by the network's linear or nonlinear connections; the softmax function turns these n values into the output probabilities of the n classes.
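For reference, the standard softmax formula (not written out in the original note) is $\mathrm{softmax}(y_i)=\dfrac{e^{y_i}}{\sum_{j=1}^{n}e^{y_j}}$. For the example below, $e^{1.01}/(e^{1.01}+e^{2.01}+e^{-0.66})\approx 0.256$, matching the first entry of the printed result.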

y=tf.constant([1.01,2.01,-0.66])
y_pro=tf.nn.softmax(y)
print("After softmax y_pro is: ",y_pro)
After softmax y_pro is:  tf.Tensor([0.2559817  0.69583046 0.04818781], shape=(3,), dtype=float32)

assign_sub: parameter self-update

An assignment operation that updates the parameter's value and returns it.

Before calling assign_sub, the parameter w to be updated must be defined as trainable with tf.Variable.

w.assign_sub(amount_to_subtract)

w=tf.Variable(4)
w.assign_sub(3)
print(w)
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=1>

Return the index of the maximum value:

tf.argmax(tensor,axis=0/1)
test=np.array([[1,2,3],[2,3,4],[5,4,3],[8,7,2]]) #a 4x3 matrix
print(test)
print(tf.argmax(test,axis=0)) #index of the max in each column
print(tf.argmax(test,axis=1)) #index of the max in each row
[[1 2 3]
 [2 3 4]
 [5 4 3]
 [8 7 2]]
tf.Tensor([3 3 1], shape=(3,), dtype=int64)
tf.Tensor([2 2 0 0], shape=(4,), dtype=int64)

1.3 Reading in the Dataset

Take the Iris dataset as an example:

The dataset has 150 samples; each includes 4 input features (sepal length, sepal width, petal length, petal width) and the corresponding class label (0, 1, 2).

Read the dataset directly via sklearn's datasets module:

from sklearn import datasets
x_data=datasets.load_iris().data #returns all input features of the dataset
y_data=datasets.load_iris().target #returns all labels of the Iris dataset
from sklearn import datasets
from pandas import DataFrame
import pandas as pd

x_data=datasets.load_iris().data #load the features
y_data=datasets.load_iris().target #load the labels
print("x_data from datasets: \n",x_data)
print("y_data from datasets: \n",y_data)

x_data=DataFrame(x_data,columns=['花萼长度','花萼宽度','花瓣长度','花瓣宽度']) #column names: sepal length/width, petal length/width (kept in Chinese to match the printed output)
pd.set_option('display.unicode.east_asian_width',True) #align East Asian column names when printing
print("x_data add index: \n",x_data)

x_data['类别']=y_data #add a new column labeled '类别' (class)
print("x_data add a column: \n",x_data)
x_data from datasets: 
 [[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]
 [5.4 3.9 1.7 0.4]
 [4.6 3.4 1.4 0.3]
 [5.  3.4 1.5 0.2]
 [4.4 2.9 1.4 0.2]
 [4.9 3.1 1.5 0.1]
 [5.4 3.7 1.5 0.2]
 [4.8 3.4 1.6 0.2]
 [4.8 3.  1.4 0.1]
 [4.3 3.  1.1 0.1]
 [5.8 4.  1.2 0.2]
 [5.7 4.4 1.5 0.4]
 [5.4 3.9 1.3 0.4]
 [5.1 3.5 1.4 0.3]
 [5.7 3.8 1.7 0.3]
 [5.1 3.8 1.5 0.3]
 [5.4 3.4 1.7 0.2]
 [5.1 3.7 1.5 0.4]
 [4.6 3.6 1.  0.2]
 [5.1 3.3 1.7 0.5]
 [4.8 3.4 1.9 0.2]
 [5.  3.  1.6 0.2]
 [5.  3.4 1.6 0.4]
 [5.2 3.5 1.5 0.2]
 [5.2 3.4 1.4 0.2]
 [4.7 3.2 1.6 0.2]
 [4.8 3.1 1.6 0.2]
 [5.4 3.4 1.5 0.4]
 [5.2 4.1 1.5 0.1]
 [5.5 4.2 1.4 0.2]
 [4.9 3.1 1.5 0.2]
 [5.  3.2 1.2 0.2]
 [5.5 3.5 1.3 0.2]
 [4.9 3.6 1.4 0.1]
 [4.4 3.  1.3 0.2]
 [5.1 3.4 1.5 0.2]
 [5.  3.5 1.3 0.3]
 [4.5 2.3 1.3 0.3]
 [4.4 3.2 1.3 0.2]
 [5.  3.5 1.6 0.6]
 [5.1 3.8 1.9 0.4]
 [4.8 3.  1.4 0.3]
 [5.1 3.8 1.6 0.2]
 [4.6 3.2 1.4 0.2]
 [5.3 3.7 1.5 0.2]
 [5.  3.3 1.4 0.2]
 [7.  3.2 4.7 1.4]
 [6.4 3.2 4.5 1.5]
 [6.9 3.1 4.9 1.5]
 [5.5 2.3 4.  1.3]
 [6.5 2.8 4.6 1.5]
 [5.7 2.8 4.5 1.3]
 [6.3 3.3 4.7 1.6]
 [4.9 2.4 3.3 1. ]
 [6.6 2.9 4.6 1.3]
 [5.2 2.7 3.9 1.4]
 [5.  2.  3.5 1. ]
 [5.9 3.  4.2 1.5]
 [6.  2.2 4.  1. ]
 [6.1 2.9 4.7 1.4]
 [5.6 2.9 3.6 1.3]
 [6.7 3.1 4.4 1.4]
 [5.6 3.  4.5 1.5]
 [5.8 2.7 4.1 1. ]
 [6.2 2.2 4.5 1.5]
 [5.6 2.5 3.9 1.1]
 [5.9 3.2 4.8 1.8]
 [6.1 2.8 4.  1.3]
 [6.3 2.5 4.9 1.5]
 [6.1 2.8 4.7 1.2]
 [6.4 2.9 4.3 1.3]
 [6.6 3.  4.4 1.4]
 [6.8 2.8 4.8 1.4]
 [6.7 3.  5.  1.7]
 [6.  2.9 4.5 1.5]
 [5.7 2.6 3.5 1. ]
 [5.5 2.4 3.8 1.1]
 [5.5 2.4 3.7 1. ]
 [5.8 2.7 3.9 1.2]
 [6.  2.7 5.1 1.6]
 [5.4 3.  4.5 1.5]
 [6.  3.4 4.5 1.6]
 [6.7 3.1 4.7 1.5]
 [6.3 2.3 4.4 1.3]
 [5.6 3.  4.1 1.3]
 [5.5 2.5 4.  1.3]
 [5.5 2.6 4.4 1.2]
 [6.1 3.  4.6 1.4]
 [5.8 2.6 4.  1.2]
 [5.  2.3 3.3 1. ]
 [5.6 2.7 4.2 1.3]
 [5.7 3.  4.2 1.2]
 [5.7 2.9 4.2 1.3]
 [6.2 2.9 4.3 1.3]
 [5.1 2.5 3.  1.1]
 [5.7 2.8 4.1 1.3]
 [6.3 3.3 6.  2.5]
 [5.8 2.7 5.1 1.9]
 [7.1 3.  5.9 2.1]
 [6.3 2.9 5.6 1.8]
 [6.5 3.  5.8 2.2]
 [7.6 3.  6.6 2.1]
 [4.9 2.5 4.5 1.7]
 [7.3 2.9 6.3 1.8]
 [6.7 2.5 5.8 1.8]
 [7.2 3.6 6.1 2.5]
 [6.5 3.2 5.1 2. ]
 [6.4 2.7 5.3 1.9]
 [6.8 3.  5.5 2.1]
 [5.7 2.5 5.  2. ]
 [5.8 2.8 5.1 2.4]
 [6.4 3.2 5.3 2.3]
 [6.5 3.  5.5 1.8]
 [7.7 3.8 6.7 2.2]
 [7.7 2.6 6.9 2.3]
 [6.  2.2 5.  1.5]
 [6.9 3.2 5.7 2.3]
 [5.6 2.8 4.9 2. ]
 [7.7 2.8 6.7 2. ]
 [6.3 2.7 4.9 1.8]
 [6.7 3.3 5.7 2.1]
 [7.2 3.2 6.  1.8]
 [6.2 2.8 4.8 1.8]
 [6.1 3.  4.9 1.8]
 [6.4 2.8 5.6 2.1]
 [7.2 3.  5.8 1.6]
 [7.4 2.8 6.1 1.9]
 [7.9 3.8 6.4 2. ]
 [6.4 2.8 5.6 2.2]
 [6.3 2.8 5.1 1.5]
 [6.1 2.6 5.6 1.4]
 [7.7 3.  6.1 2.3]
 [6.3 3.4 5.6 2.4]
 [6.4 3.1 5.5 1.8]
 [6.  3.  4.8 1.8]
 [6.9 3.1 5.4 2.1]
 [6.7 3.1 5.6 2.4]
 [6.9 3.1 5.1 2.3]
 [5.8 2.7 5.1 1.9]
 [6.8 3.2 5.9 2.3]
 [6.7 3.3 5.7 2.5]
 [6.7 3.  5.2 2.3]
 [6.3 2.5 5.  1.9]
 [6.5 3.  5.2 2. ]
 [6.2 3.4 5.4 2.3]
 [5.9 3.  5.1 1.8]]
y_data from datasets: 
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
x_data add index: 
      花萼长度  花萼宽度  花瓣长度  花瓣宽度
0         5.1       3.5       1.4       0.2
1         4.9       3.0       1.4       0.2
2         4.7       3.2       1.3       0.2
3         4.6       3.1       1.5       0.2
4         5.0       3.6       1.4       0.2
..        ...       ...       ...       ...
145       6.7       3.0       5.2       2.3
146       6.3       2.5       5.0       1.9
147       6.5       3.0       5.2       2.0
148       6.2       3.4       5.4       2.3
149       5.9       3.0       5.1       1.8

[150 rows x 4 columns]
x_data add a column: 
      花萼长度  花萼宽度  花瓣长度  花瓣宽度  类别
0         5.1       3.5       1.4       0.2     0
1         4.9       3.0       1.4       0.2     0
2         4.7       3.2       1.3       0.2     0
3         4.6       3.1       1.5       0.2     0
4         5.0       3.6       1.4       0.2     0
..        ...       ...       ...       ...   ...
145       6.7       3.0       5.2       2.3     2
146       6.3       2.5       5.0       1.9     2
147       6.5       3.0       5.2       2.0     2
148       6.2       3.4       5.4       2.3     2
149       5.9       3.0       5.1       1.8     2

[150 rows x 5 columns]

1.4 Iris Classification with a Neural Network

1) Prepare the data:

Read in the dataset, shuffle it, split it into training and test sets ($x_{train}/y_{train}$ and $x_{test}/y_{test}$), pair input features with labels, and read the data in small batches.

2) Build the network:

Define all trainable parameters of the neural network.

3) Optimize the parameters:

Iterate with nested loops, update the parameters inside a with (GradientTape) block, and report the current loss.

4) Evaluate:

After each pass, compute the accuracy, report the current acc, and plot the acc and loss curves for visualization.

Pseudocode:

#Read in the dataset
from sklearn import datasets
x_data=datasets.load_iris().data
y_data=datasets.load_iris().target

#Shuffle the dataset (we perceive the world in no particular order)
np.random.seed(116) #use the same seed so features and labels still match after shuffling
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)
tf.random.set_seed(116)

#Split the dataset into disjoint training and test sets
x_train=x_data[:-30] #first 120 samples
y_train=y_data[:-30]
x_test=x_data[-30:] #last 30 samples
y_test=y_data[-30:]

#Pair features with labels, then pack every 32 samples into one batch to feed the network
train_db=tf.data.Dataset.from_tensor_slices((x_train,y_train)).batch(32)
test_db=tf.data.Dataset.from_tensor_slices((x_test,y_test)).batch(32)

#Define all trainable parameters of the network
w1=tf.Variable(tf.random.truncated_normal([4,3],stddev=0.1,seed=1)) #4 input features, so the first-layer weight matrix is 4x3
b1=tf.Variable(tf.random.truncated_normal([3],stddev=0.1,seed=1)) #3-class problem, so b is a 3-dimensional vector

#Nested loops; update the parameters inside the with block; report the current loss
for epoch in range(epoch):
    for step,(x_train,y_train) in enumerate(train_db):
        with tf.GradientTape() as tape:
            #forward propagation to compute y
            #compute the total loss
        grads=tape.gradient(loss,[w1,b1])
        w1.assign_sub(lr*grads[0]) #parameter self-update
        b1.assign_sub(lr*grads[1])
    print("Epoch{}, loss: {}".format(epoch,loss_all/4))

#Compute the accuracy of forward propagation with the current parameters; report the current acc
for x_test,y_test in test_db:
    y=tf.matmul(x_test,w1)+b1 #y is the prediction
    y=tf.nn.softmax(y) #make y a probability distribution
    pred=tf.argmax(y,axis=1) #index of the largest probability in each row (the predicted class)
    pred=tf.cast(pred,dtype=y_test.dtype) #cast the prediction to the label's dtype
    correct=tf.cast(tf.equal(pred,y_test),dtype=tf.int32)
    correct=tf.reduce_sum(correct) #number of correct predictions in this batch
    total_correct+=int(correct) #accumulate correct predictions over all batches
    total_number+=x_test.shape[0] #total number of test samples (a plain number)
acc=total_correct/total_number
print("test_acc: ",acc)

# acc/loss visualization
plt.title('Acc Curve') #figure title
plt.xlabel('Epoch') #x-axis label
plt.ylabel('Acc') #y-axis label
plt.plot(test_acc,label='$Accuracy$') #plot test_acc point by point and connect the dots
plt.legend()
plt.show()
#Import the required modules
import tensorflow as tf
from sklearn import datasets
from matplotlib import pyplot as plt
import numpy as np

#Load the data
x_data=datasets.load_iris().data
y_data=datasets.load_iris().target

#Shuffle the dataset (we perceive the world in no particular order)
np.random.seed(116) #use the same seed so features and labels still match after shuffling
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)
tf.random.set_seed(116)

#Split the dataset into disjoint training and test sets
x_train=x_data[:-30] #first 120 samples
y_train=y_data[:-30]
x_test=x_data[-30:] #last 30 samples
y_test=y_data[-30:]

#Cast the data type to avoid errors later
x_train=tf.cast(x_train,tf.float32)
x_test=tf.cast(x_test,tf.float32)

#Pair features with labels, then pack every 32 samples into one batch to feed the network
train_db=tf.data.Dataset.from_tensor_slices((x_train,y_train)).batch(32)
test_db=tf.data.Dataset.from_tensor_slices((x_test,y_test)).batch(32)

#Define all trainable parameters of the network
w1=tf.Variable(tf.random.truncated_normal([4,3],stddev=0.1,seed=1)) #4 input features, so the first-layer weight matrix is 4x3
b1=tf.Variable(tf.random.truncated_normal([3],stddev=0.1,seed=1)) #3-class problem, so b is a 3-dimensional vector

#Define the initial quantities
lr=0.1 #learning rate (step size)
train_loss_results=[] #record each epoch's loss here for later visualization
test_acc=[] #record each epoch's accuracy for easy plotting later
epoch=500 #500 epochs; each epoch runs 4 batches (3 of 32 samples and 1 of 24), i.e. all 120 training samples
loss_all=0 #accumulate the loss of the 4 batches within each epoch

#Training part
for epoch in range(epoch): #nested loops; each epoch is one pass over the dataset
    for step,(x_train,y_train) in enumerate(train_db): #batch-level loop; one loss per step
        with tf.GradientTape() as tape:
            y=tf.matmul(x_train,w1)+b1 #forward propagation to compute y
            y=tf.nn.softmax(y) #make the output a probability distribution
            y_=tf.one_hot(y_train,depth=3) #convert the labels to one-hot form
            loss=tf.reduce_mean(tf.square(y_-y)) #mean squared error as the loss function
            loss_all+=loss.numpy() #accumulate each step's loss for the epoch average
        grads=tape.gradient(loss,[w1,b1]) #compute the gradients
        w1.assign_sub(lr*grads[0]) #parameter self-update
        b1.assign_sub(lr*grads[1])
    print("Epoch{}, loss: {}".format(epoch,loss_all/4))
    train_loss_results.append(loss_all/4) #store this epoch's loss
    loss_all=0 #reset loss_all before the next epoch

    #Test part (still inside the epoch loop)
    total_correct,total_number=0,0 #initialize the number of correct predictions and the total number of test samples
    for x_test,y_test in test_db:
        y=tf.matmul(x_test,w1)+b1 #y is the prediction
        y=tf.nn.softmax(y) #make y a probability distribution
        pred=tf.argmax(y,axis=1) #index of the largest probability in each row (the predicted class)
        pred=tf.cast(pred,dtype=y_test.dtype) #cast the prediction to the label's dtype
        correct=tf.cast(tf.equal(pred,y_test),dtype=tf.int32) #correct=1 for right predictions; cast bool to int
        correct=tf.reduce_sum(correct) #number of correct predictions in this batch
        total_correct+=int(correct) #accumulate correct predictions over all batches
        total_number+=x_test.shape[0] #total number of test samples
    acc=total_correct/total_number
    test_acc.append(acc)
    print("Test_acc: ",acc)
    print("------------------------------")

#Loss visualization
plt.title('Loss Function Curve') #figure title
plt.xlabel('Epoch') #x-axis label
plt.ylabel('Loss') #y-axis label
plt.plot(train_loss_results,label='$loss$') #plot the values point by point and connect the dots
plt.legend() #legend
plt.show() #show the figure

#Accuracy visualization
plt.title('Acc Curve') #figure title
plt.xlabel('Epoch') #x-axis label
plt.ylabel('Acc') #y-axis label
plt.plot(test_acc,label='$Accuracy$') #plot test_acc point by point and connect the dots
plt.legend()
plt.show()
Epoch0, loss: 0.2821311131119728
Test_acc:  0.16666666666666666
------------------------------
Epoch1, loss: 0.25459615886211395
Test_acc:  0.16666666666666666
------------------------------
Epoch2, loss: 0.22570250555872917
Test_acc:  0.16666666666666666
------------------------------
Epoch3, loss: 0.21028399467468262
Test_acc:  0.16666666666666666
------------------------------
Epoch4, loss: 0.1994226537644863
Test_acc:  0.16666666666666666
------------------------------
Epoch5, loss: 0.18873638659715652
Test_acc:  0.5
------------------------------
Epoch6, loss: 0.17851299792528152
Test_acc:  0.5333333333333333
------------------------------
Epoch7, loss: 0.16922874748706818
Test_acc:  0.5333333333333333
------------------------------
Epoch8, loss: 0.16107672825455666
Test_acc:  0.5333333333333333
------------------------------
Epoch9, loss: 0.15404684469103813
Test_acc:  0.5333333333333333
------------------------------
Epoch10, loss: 0.14802726358175278
Test_acc:  0.5333333333333333
------------------------------
Epoch11, loss: 0.14287303388118744
Test_acc:  0.5333333333333333
------------------------------
Epoch12, loss: 0.13844140991568565
Test_acc:  0.5333333333333333
------------------------------
Epoch13, loss: 0.13460607267916203
Test_acc:  0.5333333333333333
------------------------------
Epoch14, loss: 0.1312607266008854
Test_acc:  0.5333333333333333
------------------------------
Epoch15, loss: 0.12831822223961353
Test_acc:  0.5333333333333333
------------------------------
Epoch16, loss: 0.1257079541683197
Test_acc:  0.5333333333333333
------------------------------
Epoch17, loss: 0.12337299063801765
Test_acc:  0.5333333333333333
------------------------------
Epoch18, loss: 0.12126746028661728
Test_acc:  0.5333333333333333
------------------------------
Epoch19, loss: 0.11935432814061642
Test_acc:  0.5333333333333333
------------------------------
Epoch20, loss: 0.11760355159640312
Test_acc:  0.5333333333333333
------------------------------
Epoch21, loss: 0.11599067971110344
Test_acc:  0.5333333333333333
------------------------------
Epoch22, loss: 0.11449568346142769
Test_acc:  0.5333333333333333
------------------------------
Epoch23, loss: 0.11310207843780518
Test_acc:  0.5333333333333333
------------------------------
Epoch24, loss: 0.11179621145129204
Test_acc:  0.5333333333333333
------------------------------
Epoch25, loss: 0.11056671477854252
Test_acc:  0.5333333333333333
------------------------------
Epoch26, loss: 0.1094040721654892
Test_acc:  0.5333333333333333
------------------------------
Epoch27, loss: 0.10830028168857098
Test_acc:  0.5333333333333333
------------------------------
Epoch28, loss: 0.10724855214357376
Test_acc:  0.5333333333333333
------------------------------
Epoch29, loss: 0.10624312795698643
Test_acc:  0.5333333333333333
------------------------------
Epoch30, loss: 0.1052790954709053
Test_acc:  0.5333333333333333
------------------------------
Epoch31, loss: 0.10435221344232559
Test_acc:  0.5333333333333333
------------------------------
Epoch32, loss: 0.10345886088907719
Test_acc:  0.5333333333333333
------------------------------
Epoch33, loss: 0.10259587876498699
Test_acc:  0.5333333333333333
------------------------------
Epoch34, loss: 0.10176052153110504
Test_acc:  0.5333333333333333
------------------------------
Epoch35, loss: 0.10095041431486607
Test_acc:  0.5333333333333333
------------------------------
Epoch36, loss: 0.10016347654163837
Test_acc:  0.5333333333333333
------------------------------
Epoch37, loss: 0.09939784556627274
Test_acc:  0.5333333333333333
------------------------------
Epoch38, loss: 0.0986519306898117
Test_acc:  0.5333333333333333
------------------------------
Epoch39, loss: 0.09792428463697433
Test_acc:  0.5333333333333333
------------------------------
Epoch40, loss: 0.09721364825963974
Test_acc:  0.5333333333333333
------------------------------
Epoch41, loss: 0.09651889279484749
Test_acc:  0.5333333333333333
------------------------------
Epoch42, loss: 0.09583901055157185
Test_acc:  0.5333333333333333
------------------------------
Epoch43, loss: 0.09517310746014118
Test_acc:  0.5333333333333333
------------------------------
Epoch44, loss: 0.09452036023139954
Test_acc:  0.5333333333333333
------------------------------
Epoch45, loss: 0.0938800685107708
Test_acc:  0.5333333333333333
------------------------------
Epoch46, loss: 0.0932515598833561
Test_acc:  0.5333333333333333
------------------------------
Epoch47, loss: 0.09263424947857857
Test_acc:  0.5333333333333333
------------------------------
Epoch48, loss: 0.09202759899199009
Test_acc:  0.5333333333333333
------------------------------
Epoch49, loss: 0.09143111482262611
Test_acc:  0.5333333333333333
------------------------------
Epoch50, loss: 0.09084435924887657
Test_acc:  0.5666666666666667
------------------------------
Epoch51, loss: 0.09026693180203438
Test_acc:  0.5666666666666667
------------------------------
Epoch52, loss: 0.08969846554100513
Test_acc:  0.5666666666666667
------------------------------
Epoch53, loss: 0.08913860283792019
Test_acc:  0.6
------------------------------
Epoch54, loss: 0.08858704753220081
Test_acc:  0.6
------------------------------
Epoch55, loss: 0.08804350905120373
Test_acc:  0.6
------------------------------
Epoch56, loss: 0.08750772662460804
Test_acc:  0.6
------------------------------
Epoch57, loss: 0.086979441344738
Test_acc:  0.6
------------------------------
Epoch58, loss: 0.08645843155682087
Test_acc:  0.6
------------------------------
Epoch59, loss: 0.08594448305666447
Test_acc:  0.6
------------------------------
Epoch60, loss: 0.08543741330504417
Test_acc:  0.6
------------------------------
Epoch61, loss: 0.08493701927363873
Test_acc:  0.6
------------------------------
Epoch62, loss: 0.08444312773644924
Test_acc:  0.6333333333333333
------------------------------
Epoch63, loss: 0.08395559526979923
Test_acc:  0.6333333333333333
------------------------------
Epoch64, loss: 0.08347425423562527
Test_acc:  0.6333333333333333
------------------------------
Epoch65, loss: 0.08299897611141205
Test_acc:  0.6333333333333333
------------------------------
Epoch66, loss: 0.08252960629761219
Test_acc:  0.6333333333333333
------------------------------
Epoch67, loss: 0.08206603489816189
Test_acc:  0.6333333333333333
------------------------------
Epoch68, loss: 0.0816081315279007
Test_acc:  0.6333333333333333
------------------------------
Epoch69, loss: 0.08115577884018421
Test_acc:  0.6333333333333333
------------------------------
Epoch70, loss: 0.08070887252688408
Test_acc:  0.6333333333333333
------------------------------
Epoch71, loss: 0.08026730827987194
Test_acc:  0.6333333333333333
------------------------------
Epoch72, loss: 0.07983099296689034
Test_acc:  0.6666666666666666
------------------------------
Epoch73, loss: 0.07939980924129486
Test_acc:  0.6666666666666666
------------------------------
Epoch74, loss: 0.0789736956357956
Test_acc:  0.6666666666666666
------------------------------
Epoch75, loss: 0.07855254039168358
Test_acc:  0.7
------------------------------
Epoch76, loss: 0.078136270865798
Test_acc:  0.7
------------------------------
Epoch77, loss: 0.07772480137646198
Test_acc:  0.7
------------------------------
Epoch78, loss: 0.07731806300580502
Test_acc:  0.7
------------------------------
Epoch79, loss: 0.07691597007215023
Test_acc:  0.7
------------------------------
Epoch80, loss: 0.07651844806969166
Test_acc:  0.7
------------------------------
Epoch81, loss: 0.07612543739378452
Test_acc:  0.7333333333333333
------------------------------
Epoch82, loss: 0.07573685608804226
Test_acc:  0.7333333333333333
------------------------------
Epoch83, loss: 0.07535264641046524
Test_acc:  0.7333333333333333
------------------------------
Epoch84, loss: 0.0749727413058281
Test_acc:  0.7333333333333333
------------------------------
Epoch85, loss: 0.0745970755815506
Test_acc:  0.7666666666666667
------------------------------
Epoch86, loss: 0.07422558777034283
Test_acc:  0.7666666666666667
------------------------------
Epoch87, loss: 0.07385822385549545
Test_acc:  0.7666666666666667
------------------------------
Epoch88, loss: 0.0734949205070734
Test_acc:  0.7666666666666667
------------------------------
Epoch89, loss: 0.07313561625778675
Test_acc:  0.7666666666666667
------------------------------
Epoch90, loss: 0.07278025802224874
Test_acc:  0.7666666666666667
------------------------------
Epoch91, loss: 0.07242879457771778
Test_acc:  0.7666666666666667
------------------------------
Epoch92, loss: 0.07208116911351681
Test_acc:  0.7666666666666667
------------------------------
Epoch93, loss: 0.07173733692616224
Test_acc:  0.8
------------------------------
Epoch94, loss: 0.07139723561704159
Test_acc:  0.8
------------------------------
Epoch95, loss: 0.07106081303209066
Test_acc:  0.8
------------------------------
Epoch96, loss: 0.07072803284972906
Test_acc:  0.8
------------------------------
Epoch97, loss: 0.07039883639663458
Test_acc:  0.8
------------------------------
Epoch98, loss: 0.07007317431271076
Test_acc:  0.8333333333333334
------------------------------
Epoch99, loss: 0.06975101120769978
Test_acc:  0.8666666666666667
------------------------------
Epoch100, loss: 0.069432289339602
Test_acc:  0.8666666666666667
------------------------------
Epoch101, loss: 0.06911696679890156
Test_acc:  0.8666666666666667
------------------------------
Epoch102, loss: 0.06880500353872776
Test_acc:  0.8666666666666667
------------------------------
Epoch103, loss: 0.06849634647369385
Test_acc:  0.8666666666666667
------------------------------
Epoch104, loss: 0.06819095462560654
Test_acc:  0.8666666666666667
------------------------------
Epoch105, loss: 0.06788879260420799
Test_acc:  0.8666666666666667
------------------------------
Epoch106, loss: 0.06758981663733721
Test_acc:  0.8666666666666667
------------------------------
Epoch107, loss: 0.0672939820215106
Test_acc:  0.9
------------------------------
Epoch108, loss: 0.06700124312192202
Test_acc:  0.9
------------------------------
Epoch109, loss: 0.06671155989170074
Test_acc:  0.9
------------------------------
Epoch110, loss: 0.06642490439116955
Test_acc:  0.9
------------------------------
Epoch111, loss: 0.06614122912287712
Test_acc:  0.9
------------------------------
Epoch112, loss: 0.06586050242185593
Test_acc:  0.9
------------------------------
Epoch113, loss: 0.06558267772197723
Test_acc:  0.9
------------------------------
Epoch114, loss: 0.06530772242695093
Test_acc:  0.9
------------------------------
Epoch115, loss: 0.06503560207784176
Test_acc:  0.9
------------------------------
Epoch116, loss: 0.06476627476513386
Test_acc:  0.9
------------------------------
Epoch117, loss: 0.06449969857931137
Test_acc:  0.9333333333333333
------------------------------
Epoch118, loss: 0.06423585396260023
Test_acc:  0.9333333333333333
------------------------------
Epoch119, loss: 0.06397469341754913
Test_acc:  0.9333333333333333
------------------------------
Epoch120, loss: 0.06371619179844856
Test_acc:  0.9333333333333333
------------------------------
Epoch121, loss: 0.06346030440181494
Test_acc:  0.9333333333333333
------------------------------
Epoch122, loss: 0.06320701353251934
Test_acc:  0.9333333333333333
------------------------------
Epoch123, loss: 0.06295626517385244
Test_acc:  0.9333333333333333
------------------------------
Epoch124, loss: 0.06270804442465305
Test_acc:  0.9333333333333333
------------------------------
Epoch125, loss: 0.062462310306727886
Test_acc:  0.9333333333333333
------------------------------
Epoch126, loss: 0.06221903208643198
Test_acc:  0.9333333333333333
------------------------------
Epoch127, loss: 0.061978185549378395
Test_acc:  0.9333333333333333
------------------------------
Epoch128, loss: 0.0617397278547287
Test_acc:  0.9333333333333333
------------------------------
Epoch129, loss: 0.06150363851338625
Test_acc:  0.9333333333333333
------------------------------
Epoch130, loss: 0.06126988027244806
Test_acc:  0.9333333333333333
------------------------------
Epoch131, loss: 0.061038427986204624
Test_acc:  0.9333333333333333
------------------------------
Epoch132, loss: 0.06080925278365612
Test_acc:  0.9333333333333333
------------------------------
Epoch133, loss: 0.06058232951909304
Test_acc:  0.9333333333333333
------------------------------
Epoch134, loss: 0.060357616282999516
Test_acc:  0.9333333333333333
------------------------------
Epoch135, loss: 0.06013510562479496
Test_acc:  0.9333333333333333
------------------------------
Epoch136, loss: 0.05991474352777004
Test_acc:  0.9333333333333333
------------------------------
Epoch137, loss: 0.05969652719795704
Test_acc:  0.9333333333333333
------------------------------
Epoch138, loss: 0.05948041193187237
Test_acc:  0.9333333333333333
------------------------------
Epoch139, loss: 0.05926638841629028
Test_acc:  0.9333333333333333
------------------------------
Epoch140, loss: 0.05905440542846918
Test_acc:  0.9333333333333333
------------------------------
Epoch141, loss: 0.05884446017444134
Test_acc:  0.9333333333333333
------------------------------
Epoch142, loss: 0.058636520989239216
Test_acc:  0.9333333333333333
------------------------------
Epoch143, loss: 0.05843055807054043
Test_acc:  0.9333333333333333
------------------------------
Epoch144, loss: 0.058226559311151505
Test_acc:  0.9333333333333333
------------------------------
Epoch145, loss: 0.05802448000758886
Test_acc:  0.9333333333333333
------------------------------
Epoch146, loss: 0.05782430339604616
Test_acc:  0.9333333333333333
------------------------------
Epoch147, loss: 0.057626024819910526
Test_acc:  0.9333333333333333
------------------------------
Epoch148, loss: 0.057429587468504906
Test_acc:  0.9333333333333333
------------------------------
Epoch149, loss: 0.057234992273151875
Test_acc:  0.9333333333333333
------------------------------
Epoch150, loss: 0.057042213156819344
Test_acc:  0.9333333333333333
------------------------------
Epoch151, loss: 0.0568512175232172
Test_acc:  0.9333333333333333
------------------------------
Epoch152, loss: 0.05666199326515198
Test_acc:  0.9333333333333333
------------------------------
Epoch153, loss: 0.056474512442946434
Test_acc:  0.9333333333333333
------------------------------
Epoch154, loss: 0.056288762018084526
Test_acc:  0.9333333333333333
------------------------------
Epoch155, loss: 0.05610471311956644
Test_acc:  0.9333333333333333
------------------------------
Epoch156, loss: 0.05592234246432781
Test_acc:  0.9333333333333333
------------------------------
Epoch157, loss: 0.05574163142591715
Test_acc:  0.9333333333333333
------------------------------
Epoch158, loss: 0.05556256044656038
Test_acc:  0.9333333333333333
------------------------------
Epoch159, loss: 0.055385113693773746
Test_acc:  0.9333333333333333
------------------------------
Epoch160, loss: 0.0552092706784606
Test_acc:  0.9333333333333333
------------------------------
Epoch161, loss: 0.05503500625491142
Test_acc:  0.9333333333333333
------------------------------
Epoch162, loss: 0.05486230365931988
Test_acc:  0.9333333333333333
------------------------------
Epoch163, loss: 0.05469114426523447
Test_acc:  0.9333333333333333
------------------------------
Epoch164, loss: 0.05452151410281658
Test_acc:  0.9666666666666667
------------------------------
Epoch165, loss: 0.0543533768504858
Test_acc:  0.9666666666666667
------------------------------
Epoch166, loss: 0.054186733439564705
Test_acc:  0.9666666666666667
------------------------------
Epoch167, loss: 0.05402155593037605
Test_acc:  0.9666666666666667
------------------------------
Epoch168, loss: 0.05385783314704895
Test_acc:  0.9666666666666667
------------------------------
Epoch169, loss: 0.05369554925709963
Test_acc:  0.9666666666666667
------------------------------
Epoch170, loss: 0.05353467911481857
Test_acc:  0.9666666666666667
------------------------------
Epoch171, loss: 0.053375208750367165
Test_acc:  0.9666666666666667
------------------------------
Epoch172, loss: 0.05321711488068104
Test_acc:  0.9666666666666667
------------------------------
Epoch173, loss: 0.053060390055179596
Test_acc:  0.9666666666666667
------------------------------
Epoch174, loss: 0.05290502030402422
Test_acc:  0.9666666666666667
------------------------------
Epoch175, loss: 0.052750980481505394
Test_acc:  0.9666666666666667
------------------------------
Epoch176, loss: 0.052598259411752224
Test_acc:  0.9666666666666667
------------------------------
Epoch177, loss: 0.05244683846831322
Test_acc:  0.9666666666666667
------------------------------
Epoch178, loss: 0.052296703681349754
Test_acc:  0.9666666666666667
------------------------------
Epoch179, loss: 0.05214784853160381
Test_acc:  0.9666666666666667
------------------------------
Epoch180, loss: 0.05200024414807558
Test_acc:  0.9666666666666667
------------------------------
Epoch181, loss: 0.05185387749224901
Test_acc:  0.9666666666666667
------------------------------
Epoch182, loss: 0.05170874670147896
Test_acc:  0.9666666666666667
------------------------------
Epoch183, loss: 0.05156483221799135
Test_acc:  0.9666666666666667
------------------------------
Epoch184, loss: 0.051422109827399254
Test_acc:  0.9666666666666667
------------------------------
Epoch185, loss: 0.051280577667057514
Test_acc:  1.0
------------------------------
Epoch186, loss: 0.05114021338522434
Test_acc:  1.0
------------------------------
Epoch187, loss: 0.05100100580602884
Test_acc:  1.0
------------------------------
Epoch188, loss: 0.05086294375360012
Test_acc:  1.0
------------------------------
Epoch189, loss: 0.05072601046413183
Test_acc:  1.0
------------------------------
Epoch190, loss: 0.050590197555720806
Test_acc:  1.0
------------------------------
Epoch191, loss: 0.05045549012720585
Test_acc:  1.0
------------------------------
Epoch192, loss: 0.050321873277425766
Test_acc:  1.0
------------------------------
Epoch193, loss: 0.050189330242574215
Test_acc:  1.0
------------------------------
Epoch194, loss: 0.05005785822868347
Test_acc:  1.0
------------------------------
Epoch195, loss: 0.04992745257914066
Test_acc:  1.0
------------------------------
Epoch196, loss: 0.04979808162897825
Test_acc:  1.0
------------------------------
Epoch197, loss: 0.049669742584228516
Test_acc:  1.0
------------------------------
Epoch198, loss: 0.04954242706298828
Test_acc:  1.0
------------------------------
Epoch199, loss: 0.04941611550748348
Test_acc:  1.0
------------------------------
Epoch200, loss: 0.04929080605506897
Test_acc:  1.0
------------------------------
Epoch201, loss: 0.04916647542268038
Test_acc:  1.0
------------------------------
Epoch202, loss: 0.049043113365769386
Test_acc:  1.0
------------------------------
Epoch203, loss: 0.04892072267830372
Test_acc:  1.0
------------------------------
Epoch204, loss: 0.048799289390444756
Test_acc:  1.0
------------------------------
Epoch205, loss: 0.048678794875741005
Test_acc:  1.0
------------------------------
Epoch206, loss: 0.04855923820286989
Test_acc:  1.0
------------------------------
Epoch207, loss: 0.048440597020089626
Test_acc:  1.0
------------------------------
Epoch208, loss: 0.04832286760210991
Test_acc:  1.0
------------------------------
Epoch209, loss: 0.04820604529231787
Test_acc:  1.0
------------------------------
Epoch210, loss: 0.048090110532939434
Test_acc:  1.0
------------------------------
Epoch211, loss: 0.04797505587339401
Test_acc:  1.0
------------------------------
Epoch212, loss: 0.04786087665706873
Test_acc:  1.0
------------------------------
Epoch213, loss: 0.04774755518883467
Test_acc:  1.0
------------------------------
Epoch214, loss: 0.047635093331336975
Test_acc:  1.0
------------------------------
Epoch215, loss: 0.04752346687018871
Test_acc:  1.0
------------------------------
Epoch216, loss: 0.04741267580538988
Test_acc:  1.0
------------------------------
Epoch217, loss: 0.0473027229309082
Test_acc:  1.0
------------------------------
Epoch218, loss: 0.04719358030706644
Test_acc:  1.0
------------------------------
Epoch219, loss: 0.04708523955196142
Test_acc:  1.0
------------------------------
Epoch220, loss: 0.046977704390883446
Test_acc:  1.0
------------------------------
Epoch221, loss: 0.046870963647961617
Test_acc:  1.0
------------------------------
Epoch222, loss: 0.04676500055938959
Test_acc:  1.0
------------------------------
Epoch223, loss: 0.046659816056489944
Test_acc:  1.0
------------------------------
Epoch224, loss: 0.046555389650166035
Test_acc:  1.0
------------------------------
Epoch225, loss: 0.04645173065364361
Test_acc:  1.0
------------------------------
Epoch226, loss: 0.04634882137179375
Test_acc:  1.0
------------------------------
Epoch227, loss: 0.04624664783477783
Test_acc:  1.0
------------------------------
Epoch228, loss: 0.046145214699208736
Test_acc:  1.0
------------------------------
Epoch229, loss: 0.04604450147598982
Test_acc:  1.0
------------------------------
Epoch230, loss: 0.04594451654702425
Test_acc:  1.0
------------------------------
Epoch231, loss: 0.045845238491892815
Test_acc:  1.0
------------------------------
Epoch232, loss: 0.04574666079133749
Test_acc:  1.0
------------------------------
Epoch233, loss: 0.0456487825140357
Test_acc:  1.0
------------------------------
Epoch234, loss: 0.045551598072052
Test_acc:  1.0
------------------------------
Epoch235, loss: 0.0454550925642252
Test_acc:  1.0
------------------------------
Epoch236, loss: 0.04535926319658756
Test_acc:  1.0
------------------------------
Epoch237, loss: 0.045264096930623055
Test_acc:  1.0
------------------------------
Epoch238, loss: 0.04516960773617029
Test_acc:  1.0
------------------------------
Epoch239, loss: 0.04507576581090689
Test_acc:  1.0
------------------------------
Epoch240, loss: 0.04498256742954254
Test_acc:  1.0
------------------------------
Epoch241, loss: 0.04489001911133528
Test_acc:  1.0
------------------------------
Epoch242, loss: 0.04479810409247875
Test_acc:  1.0
------------------------------
Epoch243, loss: 0.04470681492239237
Test_acc:  1.0
------------------------------
Epoch244, loss: 0.04461614973843098
Test_acc:  1.0
------------------------------
Epoch245, loss: 0.04452609829604626
Test_acc:  1.0
------------------------------
Epoch246, loss: 0.04443667083978653
Test_acc:  1.0
------------------------------
Epoch247, loss: 0.04434784036129713
Test_acc:  1.0
------------------------------
Epoch248, loss: 0.044259605929255486
Test_acc:  1.0
------------------------------
Epoch249, loss: 0.044171969406306744
Test_acc:  1.0
------------------------------
Epoch250, loss: 0.04408490937203169
Test_acc:  1.0
------------------------------
Epoch251, loss: 0.04399843793362379
Test_acc:  1.0
------------------------------
Epoch252, loss: 0.043912540189921856
Test_acc:  1.0
------------------------------
Epoch253, loss: 0.04382721148431301
Test_acc:  1.0
------------------------------
Epoch254, loss: 0.04374244716018438
Test_acc:  1.0
------------------------------
Epoch255, loss: 0.04365824535489082
Test_acc:  1.0
------------------------------
Epoch256, loss: 0.04357459209859371
Test_acc:  1.0
------------------------------
Epoch257, loss: 0.0434914818033576
Test_acc:  1.0
------------------------------
Epoch258, loss: 0.04340892471373081
Test_acc:  1.0
------------------------------
Epoch259, loss: 0.04332689754664898
Test_acc:  1.0
------------------------------
Epoch260, loss: 0.043245403096079826
Test_acc:  1.0
------------------------------
Epoch261, loss: 0.043164441362023354
Test_acc:  1.0
------------------------------
Epoch262, loss: 0.04308399464935064
Test_acc:  1.0
------------------------------
Epoch263, loss: 0.04300406947731972
Test_acc:  1.0
------------------------------
Epoch264, loss: 0.04292465001344681
Test_acc:  1.0
------------------------------
Epoch265, loss: 0.04284574277698994
Test_acc:  1.0
------------------------------
Epoch266, loss: 0.04276734031736851
Test_acc:  1.0
------------------------------
Epoch267, loss: 0.04268944077193737
Test_acc:  1.0
------------------------------
Epoch268, loss: 0.042612018063664436
Test_acc:  1.0
------------------------------
Epoch269, loss: 0.04253509175032377
Test_acc:  1.0
------------------------------
Epoch270, loss: 0.04245864786207676
Test_acc:  1.0
------------------------------
Epoch271, loss: 0.04238268546760082
Test_acc:  1.0
------------------------------
Epoch272, loss: 0.042307197116315365
Test_acc:  1.0
------------------------------
Epoch273, loss: 0.04223217815160751
Test_acc:  1.0
------------------------------
Epoch274, loss: 0.04215762671083212
Test_acc:  1.0
------------------------------
Epoch275, loss: 0.04208353813737631
Test_acc:  1.0
------------------------------
Epoch276, loss: 0.04200989659875631
Test_acc:  1.0
------------------------------
Epoch277, loss: 0.04193671979010105
Test_acc:  1.0
------------------------------
Epoch278, loss: 0.04186399094760418
Test_acc:  1.0
------------------------------
Epoch279, loss: 0.0417917026206851
Test_acc:  1.0
------------------------------
Epoch280, loss: 0.04171986132860184
Test_acc:  1.0
------------------------------
Epoch281, loss: 0.0416484409943223
Test_acc:  1.0
------------------------------
Epoch282, loss: 0.04157747607678175
Test_acc:  1.0
------------------------------
Epoch283, loss: 0.041506923735141754
Test_acc:  1.0
------------------------------
Epoch284, loss: 0.041436806321144104
Test_acc:  1.0
------------------------------
Epoch285, loss: 0.041367108933627605
Test_acc:  1.0
------------------------------
Epoch286, loss: 0.04129782132804394
Test_acc:  1.0
------------------------------
Epoch287, loss: 0.041228958405554295
Test_acc:  1.0
------------------------------
Epoch288, loss: 0.04116049408912659
Test_acc:  1.0
------------------------------
Epoch289, loss: 0.04109244979918003
Test_acc:  1.0
------------------------------
Epoch290, loss: 0.04102479852735996
Test_acc:  1.0
------------------------------
Epoch291, loss: 0.0409575505182147
Test_acc:  1.0
------------------------------
Epoch292, loss: 0.04089070484042168
Test_acc:  1.0
------------------------------
Epoch293, loss: 0.040824239142239094
Test_acc:  1.0
------------------------------
Epoch294, loss: 0.04075816925615072
Test_acc:  1.0
------------------------------
Epoch295, loss: 0.04069248307496309
Test_acc:  1.0
------------------------------
Epoch296, loss: 0.04062717780470848
Test_acc:  1.0
------------------------------
Epoch297, loss: 0.04056225065141916
Test_acc:  1.0
------------------------------
Epoch298, loss: 0.04049770440906286
Test_acc:  1.0
------------------------------
Epoch299, loss: 0.04043352045118809
Test_acc:  1.0
------------------------------
Epoch300, loss: 0.04036971740424633
Test_acc:  1.0
------------------------------
Epoch301, loss: 0.040306275710463524
Test_acc:  1.0
------------------------------
Epoch302, loss: 0.04024319164454937
Test_acc:  1.0
------------------------------
Epoch303, loss: 0.04018046800047159
Test_acc:  1.0
------------------------------
Epoch304, loss: 0.040118108969181776
Test_acc:  1.0
------------------------------
Epoch305, loss: 0.04005609452724457
Test_acc:  1.0
------------------------------
Epoch306, loss: 0.03999443864449859
Test_acc:  1.0
------------------------------
Epoch307, loss: 0.03993312641978264
Test_acc:  1.0
------------------------------
Epoch308, loss: 0.03987215319648385
Test_acc:  1.0
------------------------------
Epoch309, loss: 0.039811530616134405
Test_acc:  1.0
------------------------------
Epoch310, loss: 0.03975125076249242
Test_acc:  1.0
------------------------------
Epoch311, loss: 0.039691295474767685
Test_acc:  1.0
------------------------------
Epoch312, loss: 0.039631683845072985
Test_acc:  1.0
------------------------------
Epoch313, loss: 0.03957239631563425
Test_acc:  1.0
------------------------------
Epoch314, loss: 0.03951343009248376
Test_acc:  1.0
------------------------------
Epoch315, loss: 0.039454796351492405
Test_acc:  1.0
------------------------------
Epoch316, loss: 0.03939648438245058
Test_acc:  1.0
------------------------------
Epoch317, loss: 0.039338483940809965
Test_acc:  1.0
------------------------------
Epoch318, loss: 0.03928081365302205
Test_acc:  1.0
------------------------------
Epoch319, loss: 0.0392234455794096
Test_acc:  1.0
------------------------------
Epoch320, loss: 0.03916640253737569
Test_acc:  1.0
------------------------------
Epoch321, loss: 0.0391096668317914
Test_acc:  1.0
------------------------------
Epoch322, loss: 0.03905322588980198
Test_acc:  1.0
------------------------------
Epoch323, loss: 0.038997091352939606
Test_acc:  1.0
------------------------------
Epoch324, loss: 0.038941271137446165
Test_acc:  1.0
------------------------------
Epoch325, loss: 0.03888573683798313
Test_acc:  1.0
------------------------------
Epoch326, loss: 0.038830497302114964
Test_acc:  1.0
------------------------------
Epoch327, loss: 0.03877556277438998
Test_acc:  1.0
------------------------------
Epoch328, loss: 0.03872091695666313
Test_acc:  1.0
------------------------------
Epoch329, loss: 0.03866655891761184
Test_acc:  1.0
------------------------------
Epoch330, loss: 0.038612485863268375
Test_acc:  1.0
------------------------------
Epoch331, loss: 0.038558699656277895
Test_acc:  1.0
------------------------------
Epoch332, loss: 0.03850520122796297
Test_acc:  1.0
------------------------------
Epoch333, loss: 0.03845197660848498
Test_acc:  1.0
------------------------------
Epoch334, loss: 0.0383990416303277
Test_acc:  1.0
------------------------------
Epoch335, loss: 0.03834636602550745
Test_acc:  1.0
------------------------------
Epoch336, loss: 0.038293974939733744
Test_acc:  1.0
------------------------------
Epoch337, loss: 0.03824185114353895
Test_acc:  1.0
------------------------------
Epoch338, loss: 0.03819000534713268
Test_acc:  1.0
------------------------------
Epoch339, loss: 0.03813841287046671
Test_acc:  1.0
------------------------------
Epoch340, loss: 0.03808710025623441
Test_acc:  1.0
------------------------------
Epoch341, loss: 0.038036040030419827
Test_acc:  1.0
------------------------------
Epoch342, loss: 0.037985255010426044
Test_acc:  1.0
------------------------------
Epoch343, loss: 0.03793471213430166
Test_acc:  1.0
------------------------------
Epoch344, loss: 0.03788444213569164
Test_acc:  1.0
------------------------------
Epoch345, loss: 0.0378344152122736
Test_acc:  1.0
------------------------------
Epoch346, loss: 0.03778464673087001
Test_acc:  1.0
------------------------------
Epoch347, loss: 0.037735139951109886
Test_acc:  1.0
------------------------------
Epoch348, loss: 0.037685861345380545
Test_acc:  1.0
------------------------------
Epoch349, loss: 0.037636841647326946
Test_acc:  1.0
------------------------------
Epoch350, loss: 0.037588059436529875
Test_acc:  1.0
------------------------------
Epoch351, loss: 0.037539536133408546
Test_acc:  1.0
------------------------------
Epoch352, loss: 0.037491245195269585
Test_acc:  1.0
------------------------------
Epoch353, loss: 0.037443195935338736
Test_acc:  1.0
------------------------------
Epoch354, loss: 0.03739538649097085
Test_acc:  1.0
------------------------------
Epoch355, loss: 0.0373478177934885
Test_acc:  1.0
------------------------------
Epoch356, loss: 0.037300472147762775
Test_acc:  1.0
------------------------------
Epoch357, loss: 0.037253367714583874
Test_acc:  1.0
------------------------------
Epoch358, loss: 0.037206497974693775
Test_acc:  1.0
------------------------------
Epoch359, loss: 0.03715984337031841
Test_acc:  1.0
------------------------------
Epoch360, loss: 0.0371134364977479
Test_acc:  1.0
------------------------------
Epoch361, loss: 0.03706723917275667
Test_acc:  1.0
------------------------------
Epoch362, loss: 0.03702126955613494
Test_acc:  1.0
------------------------------
Epoch363, loss: 0.03697552718222141
Test_acc:  1.0
------------------------------
Epoch364, loss: 0.0369300008751452
Test_acc:  1.0
------------------------------
Epoch365, loss: 0.03688469482585788
Test_acc:  1.0
------------------------------
Epoch366, loss: 0.036839607171714306
Test_acc:  1.0
------------------------------
Epoch367, loss: 0.03679474210366607
Test_acc:  1.0
------------------------------
Epoch368, loss: 0.036750090308487415
Test_acc:  1.0
------------------------------
Epoch369, loss: 0.03670565038919449
Test_acc:  1.0
------------------------------
Epoch370, loss: 0.03666141629219055
Test_acc:  1.0
------------------------------
Epoch371, loss: 0.03661739453673363
Test_acc:  1.0
------------------------------
Epoch372, loss: 0.036573585588485
Test_acc:  1.0
------------------------------
Epoch373, loss: 0.03652998199686408
Test_acc:  1.0
------------------------------
Epoch374, loss: 0.036486584693193436
Test_acc:  1.0
------------------------------
Epoch375, loss: 0.03644338482990861
Test_acc:  1.0
------------------------------
Epoch376, loss: 0.03640039265155792
Test_acc:  1.0
------------------------------
Epoch377, loss: 0.036357596050947905
Test_acc:  1.0
------------------------------
Epoch378, loss: 0.0363150117918849
Test_acc:  1.0
------------------------------
Epoch379, loss: 0.036272620782256126
Test_acc:  1.0
------------------------------
Epoch380, loss: 0.03623042721301317
Test_acc:  1.0
------------------------------
Epoch381, loss: 0.03618842409923673
Test_acc:  1.0
------------------------------
Epoch382, loss: 0.0361466184258461
Test_acc:  1.0
------------------------------
Epoch383, loss: 0.03610499855130911
Test_acc:  1.0
------------------------------
Epoch384, loss: 0.036063572857528925
Test_acc:  1.0
------------------------------
Epoch385, loss: 0.036022345535457134
Test_acc:  1.0
------------------------------
Epoch386, loss: 0.03598129749298096
Test_acc:  1.0
------------------------------
Epoch387, loss: 0.03594044130295515
Test_acc:  1.0
------------------------------
Epoch388, loss: 0.03589977277442813
Test_acc:  1.0
------------------------------
Epoch389, loss: 0.035859286319464445
Test_acc:  1.0
------------------------------
Epoch390, loss: 0.03581898845732212
Test_acc:  1.0
------------------------------
Epoch391, loss: 0.0357788666151464
Test_acc:  1.0
------------------------------
Epoch392, loss: 0.03573892870917916
Test_acc:  1.0
------------------------------
Epoch393, loss: 0.035699171014130116
Test_acc:  1.0
------------------------------
Epoch394, loss: 0.03565958933904767
Test_acc:  1.0
------------------------------
Epoch395, loss: 0.035620186012238264
Test_acc:  1.0
------------------------------
Epoch396, loss: 0.03558096196502447
Test_acc:  1.0
------------------------------
Epoch397, loss: 0.035541907884180546
Test_acc:  1.0
------------------------------
Epoch398, loss: 0.03550302889198065
Test_acc:  1.0
------------------------------
Epoch399, loss: 0.03546431940048933
Test_acc:  1.0
------------------------------
Epoch400, loss: 0.035425789188593626
Test_acc:  1.0
------------------------------
Epoch401, loss: 0.03538742894306779
Test_acc:  1.0
------------------------------
Epoch402, loss: 0.03534922283142805
Test_acc:  1.0
------------------------------
Epoch403, loss: 0.035311208572238684
Test_acc:  1.0
------------------------------
Epoch404, loss: 0.03527333587408066
Test_acc:  1.0
------------------------------
Epoch405, loss: 0.035235646180808544
Test_acc:  1.0
------------------------------
Epoch406, loss: 0.035198112949728966
Test_acc:  1.0
------------------------------
Epoch407, loss: 0.0351607371121645
Test_acc:  1.0
------------------------------
Epoch408, loss: 0.035123543813824654
Test_acc:  1.0
------------------------------
Epoch409, loss: 0.03508649580180645
Test_acc:  1.0
------------------------------
Epoch410, loss: 0.03504961123690009
Test_acc:  1.0
------------------------------
Epoch411, loss: 0.035012893844395876
Test_acc:  1.0
------------------------------
Epoch412, loss: 0.03497632406651974
Test_acc:  1.0
------------------------------
Epoch413, loss: 0.034939907025545835
Test_acc:  1.0
------------------------------
Epoch414, loss: 0.03490365482866764
Test_acc:  1.0
------------------------------
Epoch415, loss: 0.03486756049096584
Test_acc:  1.0
------------------------------
Epoch416, loss: 0.034831615164875984
Test_acc:  1.0
------------------------------
Epoch417, loss: 0.03479581978172064
Test_acc:  1.0
------------------------------
Epoch418, loss: 0.03476017853245139
Test_acc:  1.0
------------------------------
Epoch419, loss: 0.03472468676045537
Test_acc:  1.0
------------------------------
Epoch420, loss: 0.034689349588006735
Test_acc:  1.0
------------------------------
Epoch421, loss: 0.034654160495847464
Test_acc:  1.0
------------------------------
Epoch422, loss: 0.03461912041530013
Test_acc:  1.0
------------------------------
Epoch423, loss: 0.03458422375842929
Test_acc:  1.0
------------------------------
Epoch424, loss: 0.03454947378486395
Test_acc:  1.0
------------------------------
Epoch425, loss: 0.03451486770063639
Test_acc:  1.0
------------------------------
Epoch426, loss: 0.0344804092310369
Test_acc:  1.0
------------------------------
Epoch427, loss: 0.034446089062839746
Test_acc:  1.0
------------------------------
Epoch428, loss: 0.03441191231831908
Test_acc:  1.0
------------------------------
Epoch429, loss: 0.03437789250165224
Test_acc:  1.0
------------------------------
Epoch430, loss: 0.034343999810516834
Test_acc:  1.0
------------------------------
Epoch431, loss: 0.03431024681776762
Test_acc:  1.0
------------------------------
Epoch432, loss: 0.034276632592082024
Test_acc:  1.0
------------------------------
Epoch433, loss: 0.03424314595758915
Test_acc:  1.0
------------------------------
Epoch434, loss: 0.034209809731692076
Test_acc:  1.0
------------------------------
Epoch435, loss: 0.03417660854756832
Test_acc:  1.0
------------------------------
Epoch436, loss: 0.03414354287087917
Test_acc:  1.0
------------------------------
Epoch437, loss: 0.03411061130464077
Test_acc:  1.0
------------------------------
Epoch438, loss: 0.034077814780175686
Test_acc:  1.0
------------------------------
Epoch439, loss: 0.03404515143483877
Test_acc:  1.0
------------------------------
Epoch440, loss: 0.03401261428371072
Test_acc:  1.0
------------------------------
Epoch441, loss: 0.03398020751774311
Test_acc:  1.0
------------------------------
Epoch442, loss: 0.03394793579354882
Test_acc:  1.0
------------------------------
Epoch443, loss: 0.03391578467562795
Test_acc:  1.0
------------------------------
Epoch444, loss: 0.03388377372175455
Test_acc:  1.0
------------------------------
Epoch445, loss: 0.03385189175605774
Test_acc:  1.0
------------------------------
Epoch446, loss: 0.033820133190602064
Test_acc:  1.0
------------------------------
Epoch447, loss: 0.03378849336877465
Test_acc:  1.0
------------------------------
Epoch448, loss: 0.03375698858872056
Test_acc:  1.0
------------------------------
Epoch449, loss: 0.033725603483617306
Test_acc:  1.0
------------------------------
Epoch450, loss: 0.03369435202330351
Test_acc:  1.0
------------------------------
Epoch451, loss: 0.033663210924714804
Test_acc:  1.0
------------------------------
Epoch452, loss: 0.03363219974562526
Test_acc:  1.0
------------------------------
Epoch453, loss: 0.03360130311921239
Test_acc:  1.0
------------------------------
Epoch454, loss: 0.033570537343621254
Test_acc:  1.0
------------------------------
Epoch455, loss: 0.03353988844901323
Test_acc:  1.0
------------------------------
Epoch456, loss: 0.03350936435163021
Test_acc:  1.0
------------------------------
Epoch457, loss: 0.03347894921898842
Test_acc:  1.0
------------------------------
Epoch458, loss: 0.03344865795224905
Test_acc:  1.0
------------------------------
Epoch459, loss: 0.033418482169508934
Test_acc:  1.0
------------------------------
Epoch460, loss: 0.033388420939445496
Test_acc:  1.0
------------------------------
Epoch461, loss: 0.033358474262058735
Test_acc:  1.0
------------------------------
Epoch462, loss: 0.03332865610718727
Test_acc:  1.0
------------------------------
Epoch463, loss: 0.033298942260444164
Test_acc:  1.0
------------------------------
Epoch464, loss: 0.033269339706748724
Test_acc:  1.0
------------------------------
Epoch465, loss: 0.03323985077440739
Test_acc:  1.0
------------------------------
Epoch466, loss: 0.03321048151701689
Test_acc:  1.0
------------------------------
Epoch467, loss: 0.03318122262135148
Test_acc:  1.0
------------------------------
Epoch468, loss: 0.03315207455307245
Test_acc:  1.0
------------------------------
Epoch469, loss: 0.03312302893027663
Test_acc:  1.0
------------------------------
Epoch470, loss: 0.03309410251677036
Test_acc:  1.0
------------------------------
Epoch471, loss: 0.03306528367102146
Test_acc:  1.0
------------------------------
Epoch472, loss: 0.033036579843610525
Test_acc:  1.0
------------------------------
Epoch473, loss: 0.033007978461682796
Test_acc:  1.0
------------------------------
Epoch474, loss: 0.03297948418185115
Test_acc:  1.0
------------------------------
Epoch475, loss: 0.03295109001919627
Test_acc:  1.0
------------------------------
Epoch476, loss: 0.03292280761525035
Test_acc:  1.0
------------------------------
Epoch477, loss: 0.03289462951943278
Test_acc:  1.0
------------------------------
Epoch478, loss: 0.0328665585257113
Test_acc:  1.0
------------------------------
Epoch479, loss: 0.03283859044313431
Test_acc:  1.0
------------------------------
Epoch480, loss: 0.032810717821121216
Test_acc:  1.0
------------------------------
Epoch481, loss: 0.0327829672023654
Test_acc:  1.0
------------------------------
Epoch482, loss: 0.032755316235125065
Test_acc:  1.0
------------------------------
Epoch483, loss: 0.03272775746881962
Test_acc:  1.0
------------------------------
Epoch484, loss: 0.03270029369741678
Test_acc:  1.0
------------------------------
Epoch485, loss: 0.03267294354736805
Test_acc:  1.0
------------------------------
Epoch486, loss: 0.032645683735609055
Test_acc:  1.0
------------------------------
Epoch487, loss: 0.032618537079542875
Test_acc:  1.0
------------------------------
Epoch488, loss: 0.03259147936478257
Test_acc:  1.0
------------------------------
Epoch489, loss: 0.032564531080424786
Test_acc:  1.0
------------------------------
Epoch490, loss: 0.032537666615098715
Test_acc:  1.0
------------------------------
Epoch491, loss: 0.03251091297715902
Test_acc:  1.0
------------------------------
Epoch492, loss: 0.032484245020896196
Test_acc:  1.0
------------------------------
Epoch493, loss: 0.03245767392218113
Test_acc:  1.0
------------------------------
Epoch494, loss: 0.03243120713159442
Test_acc:  1.0
------------------------------
Epoch495, loss: 0.03240483580157161
Test_acc:  1.0
------------------------------
Epoch496, loss: 0.032378551084548235
Test_acc:  1.0
------------------------------
Epoch497, loss: 0.032352369744330645
Test_acc:  1.0
------------------------------
Epoch498, loss: 0.0323262638412416
Test_acc:  1.0
------------------------------
Epoch499, loss: 0.03230027575045824
Test_acc:  1.0
------------------------------


[Figure: the Loss Function Curve and Acc Curve produced by the plotting code above]
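As a small follow-up sketch (my own addition, not part of the original post) for the "apply the model" step mentioned in the introduction, the trained w1 and b1 can classify a new sample by forward propagation; the feature values below are just example numbers:

#Apply the trained model: forward propagation on one new sample
new_flower=tf.constant([[5.9,3.0,5.1,1.8]],dtype=tf.float32) #shape (1,4): sepal length/width, petal length/width
y=tf.nn.softmax(tf.matmul(new_flower,w1)+b1) #class probabilities
print("Predicted class:",int(tf.argmax(y,axis=1)[0])) #0, 1 or 2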
