Andrew Ng Programming Assignment L2W3 - TensorFlow

The assignment was written for TensorFlow 1.x; here I use TensorFlow 2.x instead.
Code:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tf_utils import *  # provides load_dataset() for the SIGNS dataset
print(tf.__version__)

# Load the data
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
print(X_train_orig.shape)
print(Y_train_orig.shape)

# Normalize the pixel values to [0, 1]
X_train_orig = X_train_orig / 255.0
X_test_orig = X_test_orig / 255.0
# Reshape the labels from (1, m) to (m,) so they match SparseCategoricalCrossentropy
Y_train_orig = np.squeeze(Y_train_orig.T)
Y_test_orig = np.squeeze(Y_test_orig.T)
print(X_train_orig.shape)
print(Y_train_orig.shape)
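
If you prefer one-hot labels, as in the original TensorFlow 1.x version of the assignment, a minimal sketch of the equivalent preprocessing (assuming the 6 SIGNS classes; Y_train_onehot and Y_test_onehot are illustrative names) would be the following, with CategoricalCrossentropy used as the loss later instead of SparseCategoricalCrossentropy:

# Hypothetical alternative: one-hot encode the (m,) integer labels into shape (m, 6)
Y_train_onehot = tf.one_hot(Y_train_orig, depth=6).numpy()
Y_test_onehot = tf.one_hot(Y_test_orig, depth=6).numpy()
print(Y_train_onehot.shape)  # expected (m, 6), e.g. (1080, 6) if the training set has 1080 images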

# Display a few sample images (optional)
# plt.figure(figsize=(10, 10))
# for i in range(25):
#     plt.subplot(5, 5, i + 1)
#     plt.xticks([])
#     plt.yticks([])
#     plt.grid(False)
#     plt.imshow(X_train_orig[i])
#     plt.xlabel(Y_train_orig[i])
# plt.show()

# Build the model: flatten the 64x64x3 image, one hidden layer, 6-way output (raw logits)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(64, 64, 3)))
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dense(6))

# Compile the model; the last layer has no activation, so from_logits=True is required
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer='adam',
              metrics=["accuracy"])
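
Because Dense(6) outputs raw logits, the loss is told so via from_logits=True. If you would rather have the model output probabilities directly, a sketch of the equivalent setup (model_softmax is an illustrative name, not part of the original code) looks like this:

# Alternative sketch: put the softmax inside the model and set from_logits=False
model_softmax = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(6, activation='softmax'),  # probabilities instead of logits
])
model_softmax.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                      optimizer='adam',
                      metrics=["accuracy"])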

# Train the model
model.fit(X_train_orig, Y_train_orig, epochs=100, verbose=2)
# Save the model
model.save("model.h5", overwrite=True, include_optimizer=True)
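
The fit call above trains on all of the training data. To watch for overfitting while training, a sketch that holds out part of the training set and stops early (validation_split and EarlyStopping are standard Keras features; the 0.1 split and patience of 10 are placeholders to tune) could look like:

# Sketch: hold out 10% of the training data and stop when val_loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                              restore_best_weights=True)
history = model.fit(X_train_orig, Y_train_orig, epochs=100, verbose=2,
                    validation_split=0.1, callbacks=[early_stop])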

Training output:

Epoch 1/100
34/34 - 1s - loss: 7.8560 - accuracy: 0.1898 - 808ms/epoch - 24ms/step
Epoch 2/100
34/34 - 0s - loss: 1.9918 - accuracy: 0.2815 - 126ms/epoch - 4ms/step
Epoch 3/100
34/34 - 0s - loss: 1.8489 - accuracy: 0.3185 - 141ms/epoch - 4ms/step
Epoch 4/100
34/34 - 0s - loss: 1.5677 - accuracy: 0.3880 - 109ms/epoch - 3ms/step
Epoch 5/100
34/34 - 0s - loss: 1.3832 - accuracy: 0.4806 - 108ms/epoch - 3ms/step
Epoch 6/100
34/34 - 0s - loss: 1.4084 - accuracy: 0.4907 - 97ms/epoch - 3ms/step
...
...
...
Epoch 90/100
34/34 - 0s - loss: 0.1295 - accuracy: 0.9704 - 93ms/epoch - 3ms/step
Epoch 91/100
34/34 - 0s - loss: 0.1611 - accuracy: 0.9519 - 88ms/epoch - 3ms/step
Epoch 92/100
34/34 - 0s - loss: 0.3682 - accuracy: 0.8741 - 89ms/epoch - 3ms/step
Epoch 93/100
34/34 - 0s - loss: 0.3198 - accuracy: 0.8852 - 95ms/epoch - 3ms/step
Epoch 94/100
34/34 - 0s - loss: 0.1567 - accuracy: 0.9537 - 88ms/epoch - 3ms/step
Epoch 95/100
34/34 - 0s - loss: 0.1511 - accuracy: 0.9546 - 97ms/epoch - 3ms/step
Epoch 96/100
34/34 - 0s - loss: 0.1987 - accuracy: 0.9324 - 94ms/epoch - 3ms/step
Epoch 97/100
34/34 - 0s - loss: 0.2134 - accuracy: 0.9222 - 88ms/epoch - 3ms/step
Epoch 98/100
34/34 - 0s - loss: 0.2781 - accuracy: 0.8963 - 89ms/epoch - 3ms/step
Epoch 99/100
34/34 - 0s - loss: 0.2169 - accuracy: 0.9269 - 92ms/epoch - 3ms/step
Epoch 100/100
34/34 - 0s - loss: 0.1431 - accuracy: 0.9667 - 85ms/epoch - 3ms/step

Performance on the test set:

model = tf.keras.models.load_model("model.h5")

# Evaluate the model
loss, acc = model.evaluate(X_test_orig, Y_test_orig, verbose=2)
print("Test set accuracy:", acc)
4/4 - 1s - loss: 0.3861 - accuracy: 0.8833 - 538ms/epoch - 134ms/step
Test set accuracy: 0.8833333253860474
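
Since the model outputs logits, a softmax is needed to turn a prediction into class probabilities. A minimal sketch for classifying a single test image (index 0 is just an example):

# Predict on one test image: keep the batch dimension, softmax the logits, take the argmax
logits = model.predict(X_test_orig[0:1])          # shape (1, 6)
probs = tf.nn.softmax(logits, axis=-1).numpy()
print("predicted class:", np.argmax(probs), "true class:", Y_test_orig[0])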

Training accuracy reaches about 96% while test accuracy is only 88%, so the model is clearly overfitting; you can try changing the model's parameters.
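
One common way to narrow that gap is to add regularization. A sketch of the same architecture with dropout (model_reg is an illustrative name and the rate 0.3 is just a placeholder to tune):

# Sketch: the same architecture with a Dropout layer to reduce overfitting
model_reg = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.3),   # randomly drops 30% of the hidden units during training
    tf.keras.layers.Dense(6),
])
model_reg.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  optimizer='adam',
                  metrics=["accuracy"])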
