Putting off the theory catch-up for another two days; the lab really has no engineering atmosphere... So I'll just do it myself. No interest in exams, only in tuning hyperparameters. What kind of mindset is that...
import numpy as np
np.random.seed(1337)
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense,Activation,Dropout
from keras.optimizers import RMSprop
(X_train,Y_train),(X_test,Y_test) = mnist.load_data()
# normalize pixel values to [0, 1] and one-hot encode the labels
X_train = X_train.reshape(X_train.shape[0],-1)/255
X_test = X_test.reshape(X_test.shape[0],-1)/255
Y_train = np_utils.to_categorical(Y_train,10)
Y_test = np_utils.to_categorical(Y_test,10)
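A quick aside on what to_categorical is doing: each integer digit label becomes a length-10 indicator vector, so the softmax output can be compared against it with categorical crossentropy. A minimal NumPy equivalent (to_one_hot is my own helper name, not a Keras API):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Turn integer class labels into one-hot row vectors,
    mirroring what np_utils.to_categorical does above."""
    one_hot = np.zeros((len(labels), num_classes))
    one_hot[np.arange(len(labels)), labels] = 1
    return one_hot

# e.g. digit labels 3, 0, 9 become length-10 indicator vectors
print(to_one_hot(np.array([3, 0, 9]), 10))
```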
## build model
model = Sequential()
model.add(Dense(128,input_dim=784))
model.add(Activation('relu'))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
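For a sanity check on model size: each Dense layer holds inputs × units weights plus units biases, so the 784-128-32-16-10 stack above implies the following counts (this is just arithmetic, the same numbers model.summary() would report):

```python
# weights + biases per Dense layer in the 784-128-32-16-10 stack
layer_sizes = [784, 128, 32, 16, 10]
params = [n_in * n_out + n_out
          for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]
print(params)       # per-layer parameter counts -> [100480, 4128, 528, 170]
print(sum(params))  # total trainable parameters -> 105306
```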
## optimizer
rmsprop = RMSprop(lr=0.01,rho=0.9,epsilon=1e-08,decay=0.0)
## compile
model.compile(optimizer=rmsprop,  # pass the RMSprop instance configured above; the string 'rmsprop' would use default lr and silently ignore lr=0.01
              loss='categorical_crossentropy',
              metrics=['accuracy'])
## fit
model.fit(X_train,Y_train,epochs=10,batch_size=256)
loss,accuracy = model.evaluate(X_test,Y_test)
print("test loss: %s, test accuracy: %s" % (loss, accuracy))
Just a few simple layers; I casually added two more and found it made no real difference. The smaller the batch size, the slower the loss drops, but the accuracy seems slightly higher.
If there is no overfitting, there is no need for Dropout. My accuracy seems pretty low; the learning rate has to be tuned by hand, and generally the Adam optimizer works a bit better.
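For reference, the mechanism behind Dropout can be sketched in plain NumPy: during training each activation is kept with probability 1 - rate and the survivors are scaled by 1 / (1 - rate) (inverted dropout, so nothing needs rescaling at test time). This is an illustrative sketch of the idea, not Keras internals:

```python
import numpy as np

def dropout_forward(x, rate=0.5, training=True):
    """Inverted dropout: zero out activations with probability `rate`
    and rescale survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    keep_prob = 1.0 - rate
    mask = (np.random.rand(*x.shape) < keep_prob) / keep_prob
    return x * mask

np.random.seed(0)
x = np.ones((4, 8))
y = dropout_forward(x, rate=0.5)
# roughly half the entries are zeroed; survivors are scaled up to 2.0
# at test time (training=False) the input passes through unchanged
```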