Keras Deep Learning: MNIST Handwritten Digit Recognition, Tuning Experiments (reprint)

Note: this code follows along with Prof. Hung-yi Lee's (NTU) machine learning course.
 
  • Imports first

```python
import numpy as np
np.random.seed(1337)

# https://keras.io/
!pip install -q keras
import keras

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers import Convolution2D, MaxPooling2D, Flatten
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
from keras.datasets import mnist
# categorical_crossentropy
```
  • Then define load_data()

```python
def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    number = 10000
    x_train = x_train[0:number]
    y_train = y_train[0:number]

    x_train = x_train.reshape(number, 28*28)
    x_test = x_test.reshape(x_test.shape[0], 28*28)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')

    # convert class vectors to binary class matrices
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)

    # x_test = np.random.normal(x_test)

    x_train = x_train / 255
    x_test = x_test / 255

    return (x_train, y_train), (x_test, y_test)
```
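`np_utils.to_categorical` above converts the integer labels into the one-hot matrices that `categorical_crossentropy` expects. A minimal plain-NumPy sketch of the same transformation (the helper name here is my own, not Keras API):

```python
import numpy as np

def to_categorical_sketch(y, num_classes):
    """Plain-NumPy stand-in for np_utils.to_categorical:
    integer labels -> one-hot rows."""
    out = np.zeros((len(y), num_classes), dtype='float32')
    out[np.arange(len(y)), y] = 1.0   # set one column per row
    return out

print(to_categorical_sketch(np.array([3, 0, 9]), 10))
```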
  • First run

```python
(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

model.compile(loss='mse', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```

The results:

Even the training accuracy is bad: the model never trained properly in the first place.

  • Tweak 1: change the loss from mse to categorical_crossentropy

```python
# Tweak 1: change the loss from mse to categorical_crossentropy
(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

# changed here
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```

The results:

Training accuracy jumps from 11% to 85% (though test accuracy actually gets worse).
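Why the loss switch helps (a toy illustration of my own, not from the original post): with a softmax output, MSE's gradient has to pass through the softmax Jacobian and nearly vanishes when the network is confidently wrong, while cross-entropy's gradient with respect to the logits is simply `p - y` and stays large:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([8.0, 0.0, 0.0])   # logits: confidently predicts class 0
y = np.array([0.0, 1.0, 0.0])   # but the true class is 1
p = softmax(z)

# cross-entropy gradient w.r.t. the logits is simply p - y
grad_ce = p - y

# MSE gradient must pass through the softmax Jacobian J
J = np.diag(p) - np.outer(p, p)
grad_mse = J @ (2.0 / len(p) * (p - y))   # J is symmetric

print(np.abs(grad_ce).max())    # ~1.0: a strong error signal
print(np.abs(grad_mse).max())   # <0.001: softmax saturation kills it
```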

  • Tweak 2: on top of Tweak 1, raise the fit batch_size from 100 to 10000 (so the GPU can exploit its parallelism and each epoch runs very fast)

```python
(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=10000, epochs=20)  # changed here

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```

The results:

Performance is poor.

  • Tweak 3: still with cross-entropy loss, drop the fit batch_size from 10000 to 1 (now the GPU cannot exploit its parallelism at all, so training is very slow)

```python
(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=1, epochs=20)  # changed here

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```

I gave up waiting for the run to finish...
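The batch-size arithmetic behind Tweaks 2 and 3: with 10000 training samples and 20 epochs, the batch size directly sets how many gradient updates the run makes:

```python
n_samples, epochs = 10000, 20
for batch_size in (100, 10000, 1):
    updates = (n_samples // batch_size) * epochs
    print(batch_size, updates)
# batch 100   ->   2000 updates: fast and enough progress
# batch 10000 ->     20 updates: one per epoch, far too few to learn
# batch 1     -> 200000 updates: no batch parallelism, painfully slow
```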

  • Tweak 4: on top of Tweak 1, go deep: add 10 more layers

```python
# Tweak 4: go deep, adding 10 more layers
(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))
model.add(Dense(units=689, activation='sigmoid'))

# changed here
for i in range(10):
    model.add(Dense(units=689, activation='sigmoid'))

model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```

The results:

Training accuracy is still terrible.
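A likely culprit, consistent with the lecture's explanation, is the vanishing gradient: sigmoid's derivative never exceeds 0.25, so backpropagating through the ~13 sigmoid layers scales the error signal by at most 0.25^13. A quick sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid_grad(0.0))   # 0.25, the largest value the derivative can take
print(0.25 ** 13)          # upper bound on the product across 13 layers
# relu's derivative is exactly 1 on its active side, so stacking relu
# layers does not shrink the gradient the same way
```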

  • Tweak 5: on top of Tweak 4, replace every sigmoid with relu

```python
# Tweak 5: on top of Tweak 4, replace every sigmoid with relu
(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))

for i in range(10):
    model.add(Dense(units=689, activation='relu'))

model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```
The results:

Training accuracy is now excellent, but test accuracy is still very poor.

  • Tweak 6: on top of Tweak 5, comment out the two lines in load_data that divide by 255 (the normalization step)

```python
# Tweak 6: comment out the /255 normalization in load_data;
# training falls apart again
def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    number = 10000
    x_train = x_train[0:number]
    y_train = y_train[0:number]

    x_train = x_train.reshape(number, 28*28)
    x_test = x_test.reshape(x_test.shape[0], 28*28)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')

    # convert class vectors to binary class matrices
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)

    # x_test = np.random.normal(x_test)

    # x_train = x_train / 255  # changed here
    # x_test = x_test / 255

    return (x_train, y_train), (x_test, y_test)


(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))

for i in range(10):
    model.add(Dense(units=689, activation='relu'))

model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```
The results:

Without the normalization, training falls apart again.
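One way to see why the /255 normalization matters (an illustration of my own): raw pixels in [0, 255] make the first layer's pre-activations roughly 255 times larger than with normalized inputs, so the same learning rate produces wildly larger update steps:

```python
import numpy as np

rng = np.random.default_rng(0)
x_raw = rng.integers(0, 256, size=(1, 784)).astype('float32')  # one fake image
w = rng.normal(0.0, 0.05, size=(784, 1))                       # small init weights

z_raw = (x_raw @ w).item()           # pre-activation from unscaled pixels
z_norm = ((x_raw / 255) @ w).item()  # pre-activation after /255

print(abs(z_raw), abs(z_norm))       # the raw version is about 255x larger
```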

  • Tweak 7: undo Tweak 6 (restore the /255 normalization) and also comment out the 10 extra layers

```python
# Tweak 7: restore normalization and comment out the 10 extra layers
def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    number = 10000
    x_train = x_train[0:number]
    y_train = y_train[0:number]

    x_train = x_train.reshape(number, 28*28)
    x_test = x_test.reshape(x_test.shape[0], 28*28)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')

    # convert class vectors to binary class matrices
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)

    # x_test = np.random.normal(x_test)

    x_train = x_train / 255
    x_test = x_test / 255

    return (x_train, y_train), (x_test, y_test)


(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))

# for i in range(10):
#     model.add(Dense(units=689, activation='relu'))

model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```

The results:

Training accuracy is excellent again.

  • Tweak 8: change the gradient descent strategy: swap SGD for adam

```python
# Tweak 8: swap the optimizer from SGD to adam
(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))

# for i in range(10):
#     model.add(Dense(units=689, activation='relu'))

model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```

Final accuracy converges to about the same place, but it climbs much faster early on.
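Adam's early speed-up comes from its per-parameter adaptive step size. A minimal single-parameter implementation of the standard Adam update rule (Kingma & Ba; my own sketch, not the post's code), applied to a toy quadratic:

```python
import math

def adam_min(grad_fn, x0, lr=0.05, beta1=0.9, beta2=0.999,
             eps=1e-8, steps=2000):
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3)
x_star = adam_min(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(x_star)   # close to the minimizer x = 3
```

Because the gradient is rescaled by `sqrt(v_hat)`, early steps are roughly `lr` in size regardless of the raw gradient magnitude, which is what makes the first epochs move faster than plain SGD.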
  • Tweak 9: deliberately add noise to every image in the testing set

```python
# Tweak 9: add noise to every test image; performance gets even worse
def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    number = 10000
    x_train = x_train[0:number]
    y_train = y_train[0:number]

    x_train = x_train.reshape(number, 28*28)
    x_test = x_test.reshape(x_test.shape[0], 28*28)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')

    # convert class vectors to binary class matrices
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)

    x_train = x_train / 255
    x_test = x_test / 255

    x_test = np.random.normal(x_test)  # changed here: inject noise

    return (x_train, y_train), (x_test, y_test)


(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28*28, units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))
model.add(Dense(units=689, activation='relu'))

# for i in range(10):
#     model.add(Dense(units=689, activation='relu'))

model.add(Dense(units=10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=100, epochs=20)

result = model.evaluate(x_train, y_train, batch_size=10000)
print('\nTrain Acc:', result[1])

result = model.evaluate(x_test, y_test, batch_size=10000)
print('\nTest Acc:', result[1])
```
The results:

Test performance is now even worse...
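Note the NumPy subtlety the noise injection relies on: `np.random.normal(x_test)` treats `x_test` as the per-element mean (`loc`) and uses the default standard deviation of 1.0, so each normalized pixel gets unit-scale Gaussian noise added, which is comparable to the entire [0, 1] data range:

```python
import numpy as np

np.random.seed(0)
x = np.linspace(0.0, 1.0, 5)     # five pixels normalized to [0, 1]
noisy = np.random.normal(x)      # loc=x, scale=1.0 by default

print(x)
print(noisy)                     # x plus N(0, 1) noise per element
print(np.abs(noisy - x).mean())  # unit-scale perturbation
```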

Reprinted from: https://www.cnblogs.com/ciao/articles/10869662.html
