Solving overfitting in Keras with Dropout regularization

import keras 
from keras.datasets import mnist
from keras.utils import np_utils
from keras.layers import Dense
from keras.layers import Dropout
from keras.regularizers import l2  # L2 weight regularization
from keras.models import Sequential
from keras.optimizers import SGD
import numpy as np
import matplotlib.pyplot as plt

(x_train, y_train), (x_test, y_test) = mnist.load_data()  # 60000 training images of shape 28x28


# Flatten each 28x28 image to a 784-vector and scale pixels to [0, 1].
# Note: the test set must be reshaped from x_test, not x_train.
x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
x_test = x_test.reshape(x_test.shape[0], -1) / 255.0

# Convert integer labels to one-hot vectors
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)
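To see what `to_categorical` does, here is a minimal NumPy sketch of the same transformation (the helper name `to_one_hot` is ours, not part of Keras): each integer label becomes a length-10 vector with a single 1 at the label's index.

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # Equivalent in spirit to np_utils.to_categorical for integer labels:
    # row i gets a 1.0 in column labels[i], zeros elsewhere.
    labels = np.asarray(labels)
    out = np.zeros((labels.shape[0], num_classes))
    out[np.arange(labels.shape[0]), labels] = 1.0
    return out

print(to_one_hot([3, 0], 10))
# label 3 -> [0,0,0,1,0,0,0,0,0,0]; label 0 -> [1,0,0,0,0,0,0,0,0,0]
```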


model = Sequential([
    # Dense also accepts kernel_regularizer / bias_regularizer /
    # activity_regularizer to penalize weights, biases, and activations.
    Dense(units=200, input_dim=784, bias_initializer='one', activation='tanh', kernel_regularizer=l2(0.0003)),  # L2 regularization on the weights
    Dropout(0.4),  # randomly disable 40% of the units during training
    Dense(units=100, bias_initializer='one', activation='tanh'),
    Dropout(0.4),  # randomly disable 40% of the units during training
    Dense(units=10, bias_initializer='one', activation='softmax')  # softmax is normally used in the output layer
])
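The `Dropout(0.4)` layers above zero out 40% of the activations only during training; at inference Keras passes the inputs through unchanged, because the surviving activations were already scaled up during training ("inverted dropout"). A minimal NumPy sketch of that mechanism (illustrative, not Keras's actual implementation):

```python
import numpy as np

def dropout_forward(x, rate, training, rng):
    """Inverted dropout: during training, zero out a `rate` fraction of
    units and scale survivors by 1/(1-rate); at inference, do nothing."""
    if not training or rate == 0.0:
        return x
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob  # keep each unit with prob 1-rate
    return x * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones((4, 5))
y = dropout_forward(x, 0.4, training=True, rng=rng)
print(y)  # surviving entries equal 1/0.6, the rest are 0
print(dropout_forward(x, 0.4, training=False, rng=rng))  # unchanged
```

The scaling by `1/(1-rate)` keeps the expected activation the same in both modes, which is why no rescaling is needed when the trained model is used for prediction.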

sgd = SGD(lr = 0.2)

model.compile(optimizer=sgd, loss = 'categorical_crossentropy', metrics= ['accuracy'])

model.fit(x_train, y_train, batch_size=32, epochs=8)  # 32 samples per gradient update

 
