TensorFlow 2.0 Study Notes: Convolutional Neural Networks (CNN)

1. Convolutional neural network (CNN)
a. (convolution layer + optional pooling layer) * N + fully connected layer * M
b. Classification tasks (mainly image recognition) and regression tasks; the Fashion-MNIST example below follows this pattern

2. Fully convolutional network (FCN)
a. (convolution layer + optional pooling layer) * N + transposed-convolution (deconvolution) layer * M
b. Object segmentation (the output of a fully convolutional network is the same size as the input); a minimal sketch follows this list
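A minimal sketch of an FCN-style model that uses Conv2DTranspose for the "deconvolution" layers; the filter counts, kernel sizes, and strides here are illustrative assumptions, not taken from these notes:

import tensorflow as tf
from tensorflow import keras

fcn = keras.models.Sequential([
    # encoder: two stride-2 convolutions shrink 28x28 -> 14x14 -> 7x7
    keras.layers.Conv2D(16, 3, strides=2, padding='same', activation='relu', input_shape=(28, 28, 1)),
    keras.layers.Conv2D(32, 3, strides=2, padding='same', activation='relu'),
    # decoder: two stride-2 transposed convolutions restore 7x7 -> 14x14 -> 28x28
    keras.layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu'),
    keras.layers.Conv2DTranspose(1, 3, strides=2, padding='same', activation='sigmoid'),
])
fcn.summary()  # final output shape (None, 28, 28, 1) matches the input size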

3. Convolution operation
a. Local connectivity: image data is strongly local, so nearby pixels tend to have similar values
b. Parameter sharing: image features are independent of their position
c. output size = input size - kernel size + 1 (slide the kernel left to right, top to bottom, taking a dot product at each position); a quick size check follows this list
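A quick check of the "valid" convolution size formula, using an illustrative 6x6 input and 3x3 kernel:

import tensorflow as tf

x = tf.random.normal((1, 6, 6, 1))   # one 6x6 single-channel image
conv = tf.keras.layers.Conv2D(filters=1, kernel_size=3, padding='valid')
print(conv(x).shape)                 # (1, 4, 4, 1): 6 - 3 + 1 = 4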

4. Pooling operation (pool size and stride are usually set equal; see the small example after this list)
a. Max pooling
b. Average pooling
c. Non-overlapping, no zero padding (the common setup)
d. Gives some robustness to small translations
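A small example comparing max pooling and average pooling on a 4x4 input (the values are illustrative):

import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), (1, 4, 4, 1))
max_pool = tf.keras.layers.MaxPool2D(pool_size=2)            # stride defaults to the pool size, so windows do not overlap
avg_pool = tf.keras.layers.AveragePooling2D(pool_size=2)
print(tf.squeeze(max_pool(x)).numpy())   # each 2x2 block keeps its maximum
print(tf.squeeze(avg_pool(x)).numpy())   # each 2x2 block keeps its mean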

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf

from tensorflow import keras

print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
x_valid,x_train = x_train_all[:5000],x_train_all[5000:]
y_valid,y_train = y_train_all[:5000],y_train_all[5000:]

print(x_valid.shape,y_valid.shape)
print(x_train.shape,y_train.shape)
print(x_test.shape,y_test.shape)
# Standardization: x = (x - mu) / std, so the values have zero mean and unit variance
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()

x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1,1)).reshape(-1,28,28,1)
x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1,1)).reshape(-1,28,28,1)
x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1,1)).reshape(-1,28,28,1)

print(np.max(x_train_scaled),np.min(x_train_scaled))

Convolutional network

# padding='same' zero-pads so that the output has the same size as the input
# The selu activation has a self-normalizing effect
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(filters=32,kernel_size=3,padding='same',activation='selu',input_shape=(28,28,1)))
model.add(keras.layers.Conv2D(filters=32,kernel_size=3,padding='same',activation='selu'))
# Pooling layer: pool size and stride are usually set equal, so only one argument is needed
model.add(keras.layers.MaxPool2D(pool_size=2))

model.add(keras.layers.Conv2D(filters=64,kernel_size=3,padding='same',activation='selu'))
model.add(keras.layers.Conv2D(filters=64,kernel_size=3,padding='same',activation='selu'))
model.add(keras.layers.MaxPool2D(pool_size=2))

model.add(keras.layers.Conv2D(filters=128,kernel_size=3,padding='same',activation='selu'))
model.add(keras.layers.Conv2D(filters=128,kernel_size=3,padding='same',activation='selu'))
model.add(keras.layers.MaxPool2D(pool_size=2))

# Flatten() is required before connecting to the fully connected layers
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128,activation='selu'))
model.add(keras.layers.Dense(10,activation='softmax'))

model.compile(loss="sparse_categorical_crossentropy",
             optimizer = "adam",
             metrics = ["accuracy"])
model.summary()
# callbacks: TensorBoard, EarlyStopping, ModelCheckpoint
# Set up a directory for the callback output
logdir = os.path.join('./cnn_callbacks')
if not os.path.exists(logdir):
    os.mkdir(logdir)
output_model_file = os.path.join(logdir,"fashion_mnist_model.h5")

callbacks = [
    keras.callbacks.ModelCheckpoint(output_model_file,save_best_only = True),
    keras.callbacks.EarlyStopping(monitor="val_loss",patience=5,min_delta=1e-3)
]
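The comment above also mentions TensorBoard; if desired, it could be appended to the same list, reusing logdir (shown commented out here, since it is not part of the original run):

# callbacks.append(keras.callbacks.TensorBoard(log_dir=logdir))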

history = model.fit(x_train_scaled,y_train,epochs=10,
                    validation_data=(x_valid_scaled,y_valid),
                    callbacks=callbacks)
def plot_learning_curve(history):
    pd.DataFrame(history.history).plot(figsize=(8,5))
    plt.grid(True)
    plt.gca().set_ylim(0,3)
    plt.show()
# Plot the learning curves from the history DataFrame
plot_learning_curve(history)
y = model.evaluate(x_test_scaled,y_test)