Deep learning on MNIST: a Python vs. R comparison

First, the teacher's Python code:

from keras.models import Sequential
from keras.layers import Dense, Dropout

# use only the first 10,000 training samples
x_train = x_train[1:10000]
y_train = y_train[1:10000]

model = Sequential()
model.add(Dense(input_dim=28*28, units=633, activation='relu'))
for i in range(10):
    # hidden layers; for this classification task relu gives higher accuracy than sigmoid
    model.add(Dense(units=633, activation='relu'))
    model.add(Dropout(0.5))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs=10)
result = model.evaluate(x_test, y_test)
print(result[1])
  • This uses the handwritten-digit (MNIST) data. Each image is 28*28 pixels and is flattened into a one-dimensional vector before being fed in, hence input_dim=28*28; the number of hidden units is fairly arbitrary, and relu is chosen as the activation because it gives higher accuracy here than sigmoid. Dense stands for a fully-connected layer, and the loop stacks ten such hidden layers. If the network reaches very high accuracy on the training set (99% or even 100%), adding Dropout helps: it is like making the network train with a handicap, and at test time dropout is switched off and the weights are rescaled according to the dropout rate, which tends to raise accuracy on the test set. The output is a 10-dimensional vector. When stacking up the model, the choice of loss function also matters: different losses give different accuracies.
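The snippet above assumes x_train, y_train, x_test and y_test already exist. A minimal sketch of the usual preprocessing (not the teacher's original code; it assumes the built-in loader keras.datasets.mnist and keras.utils.to_categorical):

from keras.datasets import mnist
from keras.utils import to_categorical

# load MNIST, flatten each 28x28 image into a 784-dim vector, scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28*28).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28*28).astype('float32') / 255.0

# one-hot encode the labels to match the 10-unit softmax output
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

With these arrays in place, the fit and evaluate calls above run as written.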
Next, the R-language version.

The data I had used before was in npz format, which could not be imported into R. When I tried again today, the download happened not to be blocked. Also, there is a competition on Kaggle that provides a CSV in which every image's pixels have already been flattened into a 784-element row, so it can be read in directly. Comparing it with the original data, it does need to be converted into a matrix without row/column names, otherwise an error is thrown (I have yet to work out why), and the label Y has to be converted into a binary (one-hot) matrix. Everything else is straightforward.
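Before the R version, here is a rough Python sketch of the same CSV preprocessing, just for comparison. It is only an illustration: pandas and keras.utils.to_categorical are assumed, and the file name mnisttrain.csv is simply copied from the R code below.

import pandas as pd
from keras.utils import to_categorical

# the first column holds the digit label, the remaining 784 columns hold the flattened pixels
train_da = pd.read_csv("mnisttrain.csv")
x = train_da.iloc[:, 1:].values.astype('float32') / 255.0   # plain unnamed matrix, scaled to [0, 1]
y = to_categorical(train_da.iloc[:, 0].values, 10)          # labels as a 10-column binary matrix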
- The corresponding R code:

library(keras)

train.da <- read.csv("mnisttrain.csv")
test.da <- read.csv("mnisttest.csv")
# prepare the training / test split
set.seed(123456)
sample <- sample(1:42000,40000)
x_train <- as.matrix(train.da[sample,2:785])
y_train <- train.da[sample,1]
x_test <- as.matrix(train.da[-sample,2:785])
y_test <- train.da[-sample,1]
# look at the proportion of each class (one bin per digit)
hist(y_train, breaks = seq(-0.5, 9.5, by = 1), freq = FALSE,
     col = c("deepskyblue", "deepskyblue1", "deepskyblue2", "deepskyblue3",
             "orangered1", "orangered2", "orangered3", "orangered4",
             "violetred2", "violetred3"),
     xlim = c(-0.5, 9.5), border = FALSE)
#normalize the training data
x_train <- as.matrix(x_train/255.0)
x_test <-as.matrix(x_test/255.0)
rownames(x_train)<-NULL
colnames(y_train)<-NULL
rownames(x_test)<-NULL
colnames(y_test)<-NULL
#reshape X
#x_train <- array_reshape(x_train, c(nrow(x_train), 784))
#x_test <- array_reshape(x_test, c(nrow(x_test), 784))
# re-encode the labels as one-hot (binary) matrices
n <- length(y_train)
m <- length(y_test)
train.label <- matrix(0,n,10)
test.label <- matrix(0,m,10)
for (i in 1:n)
{
  train.label[i,y_train[i]+1]<-1
}
for ( i in 1:m)
{
  test.label[i,y_test[i]+1]<-1
}
#y_train <- to_categorical(y_train, 10)
#y_test <- to_categorical(y_test,10)
#define model
model <- keras_model_sequential() 
model %>% 
  layer_dense(units = 256, activation = 'sigmoid', input_shape = c(784)) %>% 
  layer_dropout(rate = 0.4) %>% 
  layer_dense(units = 128, activation = 'sigmoid') %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = 'softmax')
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)
history <- model %>% fit(
  x_train,train.label, 
  epochs = 30, batch_size = 128, 
  validation_split = 0.2
)
# accuracy on the held-out test split
model %>% evaluate(x_test, test.label)
model %>% predict_classes(x_test)