Cat vs. Dog Recognition with a Convolutional Neural Network

Data: 5,000 images each of cats and dogs (10,000 total)
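The post does not show how the images are prepared. Below is a minimal loading sketch, not from the original: the folder layout, the load_images helper, and the [0, 1] pixel scaling are all my assumptions. Images are resized to 128x128 RGB to match the 128*128*3 input implied by the shape comments in the network code further down.

import os
import numpy as np
from PIL import Image

def load_images(folder, label, size=(128, 128)):
    # Hypothetical helper: read every image in `folder`, resize it to
    # 128x128 RGB, scale pixels to [0, 1], and pair it with `label`.
    xs, ys = [], []
    for name in os.listdir(folder):
        img = Image.open(os.path.join(folder, name)).convert('RGB').resize(size)
        xs.append(np.asarray(img, dtype=np.float32) / 255.0)
        ys.append(label)
    return np.array(xs), np.array(ys)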

Network:

import tensorflow as tf

def forward(x, train, regularizer):
    # First convolutional layer: initialize its kernel weights and biases
    conv1_w = get_weight([CONV1_SIZE, CONV1_SIZE, 3, CONV1_KERNEL_NUM], regularizer)
    conv1_b = get_bias([CONV1_KERNEL_NUM])
    conv1 = conv2d(x, conv1_w)
    # Add the bias to conv1, then apply the ReLU activation
    relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_b))
    # Max pooling
    pool1 = max_pool_2x2(relu1)
    # 64*64*32

    conv2_w = get_weight([CONV2_SIZE, CONV2_SIZE, CONV1_KERNEL_NUM, CONV1_KERNEL_NUM], regularizer)
    conv2_b = get_bias([CONV1_KERNEL_NUM])
    conv2 = conv2d(pool1, conv2_w)
    relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_b))
    pool2 = max_pool_2x2(relu2)  # output of the second conv layer
    print(pool2.shape)
    # 32*32*32

    conv3_w = get_weight([CONV2_SIZE, CONV2_SIZE, CONV1_KERNEL_NUM, CONV1_KERNEL_NUM], regularizer)
    conv3_b = get_bias([CONV1_KERNEL_NUM])
    conv3 = conv2d(pool2, conv3_w)
    relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_b))
    pool3 = max_pool_2x2(relu3)  # output of the third conv layer
    print(pool3.shape)
    # 16*16*32

    conv4_w = get_weight([CONV2_SIZE, CONV2_SIZE, CONV1_KERNEL_NUM, CONV2_KERNEL_NUM], regularizer)
    conv4_b = get_bias([CONV2_KERNEL_NUM])
    conv4 = conv2d(pool3, conv4_w)
    relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_b))
    pool4 = max_pool_2x2(relu4)  # output of the fourth conv layer
    print(pool4.shape)
    # 8*8*64

    # Get the dimensions of pool4's output tensor as a list:
    # pool_shape[0] is the batch size; [1], [2], [3] are the feature
    # map's height, width, and depth.
    pool_shape = pool4.get_shape().as_list()
    nodes = pool_shape[1] * pool_shape[2] * pool_shape[3]
    # Flatten pool4 into one row per example with `nodes` columns;
    # -1 keeps the batch dimension dynamic (pool_shape[0] may be None).
    reshaped = tf.reshape(pool4, [-1, nodes])

    # Fully connected network
    # First FC layer
    fc1_w = get_weight([nodes, FC_SIZE], regularizer)
    fc1_b = get_bias([FC_SIZE])
    fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_w) + fc1_b)
    if train:
        fc1 = tf.nn.dropout(fc1, 0.5)  # dropout only during training
    # Second FC layer
    fc2_w = get_weight([FC_SIZE, OUTPUT_NODE], regularizer)
    fc2_b = get_bias([OUTPUT_NODE])
    y = tf.matmul(fc1, fc2_w) + fc2_b
    return y
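The helpers get_weight, get_bias, conv2d, max_pool_2x2 and the constants are not shown in the post. Here is a minimal sketch of plausible TF 1.x definitions. Where possible the values are inferred from the shape comments (64*64*32 after the first pool implies a 128*128*3 input and CONV1_KERNEL_NUM = 32; 8*8*64 at the end implies CONV2_KERNEL_NUM = 64); CONV1_SIZE, CONV2_SIZE, and FC_SIZE are my guesses, not stated in the post.

import tensorflow as tf

CONV1_SIZE = 5          # assumed kernel sizes; not stated in the post
CONV2_SIZE = 5
CONV1_KERNEL_NUM = 32   # inferred from the 64*64*32 shape comment
CONV2_KERNEL_NUM = 64   # inferred from the 8*8*64 shape comment
FC_SIZE = 512           # assumed
OUTPUT_NODE = 2         # two classes: cat and dog

def get_weight(shape, regularizer):
    # Truncated-normal initialization; if a regularizer weight is given,
    # add the L2 penalty to the 'losses' collection for the total loss.
    w = tf.Variable(tf.truncated_normal(shape, stddev=0.1))
    if regularizer is not None:
        tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
    return w

def get_bias(shape):
    return tf.Variable(tf.zeros(shape))

def conv2d(x, w):
    # Stride-1, SAME-padded convolution, consistent with the spatial size
    # halving only in the pooling layers.
    return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2 halves height and width each time.
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')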

Architecture: 4 convolution + pooling layers, followed by 2 fully connected layers.

BATCH_SIZE = 100
800 training rounds
Learning rate reduced from 0.001 to 0.0001; the loss oscillated around 1.5.

Accuracy: 0.646
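The training graph is not included in the post. Here is a minimal sketch, assuming a standard TF 1.x setup with the three techniques the improvement section below removes: softmax cross-entropy plus the collected L2 losses, an exponentially decaying learning rate, and an exponential moving average over the trainable variables. The decay interval, decay rate, and EMA decay are assumptions; y is forward()'s output and y_ holds the one-hot labels.

# Hypothetical training-op sketch, not the post's exact code.
ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
loss = tf.reduce_mean(ce) + tf.add_n(tf.get_collection('losses'))  # data loss + L2 terms

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.0001,              # base rate the post settled on
    global_step,
    10000 // BATCH_SIZE, # assumed decay interval: one pass over the 10,000 images
    0.99,                # assumed decay factor
    staircase=True)

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)

# Moving average over all trainable variables, updated after each step.
ema = tf.train.ExponentialMovingAverage(0.99, global_step)
ema_op = ema.apply(tf.trainable_variables())
with tf.control_dependencies([train_step, ema_op]):
    train_op = tf.no_op(name='train')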

Improvement: the network is unchanged; regularization, the moving average, and the exponentially decaying learning rate were removed.

The loss got smaller, but the accuracy barely improved.

Accuracy: 0.686
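With those pieces removed, the training op reduces to plain cross-entropy at a fixed learning rate. The sketch below is again my reconstruction, not the post's code; it also shows the usual argmax accuracy metric that figures like 0.646 and 0.686 are typically computed with.

# Simplified training op: no L2 penalty, no decay, no moving average.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1)))
train_op = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)

# Accuracy: fraction of examples whose predicted class matches the label.
correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))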

GitHub
