Negative loss values when training an FCN network with TensorFlow (2)

Continuing from the previous post: all of the earlier training runs were done on data that had already been normalized, so this time I trained without normalizing the data to see whether the results change. The output was as follows:

epoch=0,i=14006 of 78989, loss=798.504578
epoch=0,i=14007 of 78989, loss=798.504578
epoch=0,i=14008 of 78989, loss=798.504578
epoch=0,i=14009 of 78989, loss=798.504578
epoch=0,i=14010 of 78989, loss=798.504578

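The normalization referred to above can be sketched as follows. This is a minimal NumPy sketch, assuming plain [0, 1] scaling of uint8 pixel values; the key point (which becomes relevant later in this post) is that it should be applied to the input images only:

```python
import numpy as np

def normalize_images(batch):
    # Scale uint8 pixel values to [0, 1]; apply this to the input
    # images only, never to the segmentation labels
    return batch.astype(np.float32) / 255.0

imgs = np.random.randint(0, 256, size=(2, 4, 4, 3), dtype=np.uint8)
x = normalize_images(imgs)
```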
The results are identical with and without input normalization, so normalization alone makes no difference here. All training up to this point used a batch size of 1; to check whether the batch size was the cause, I changed it to 2 and got the following:

epoch=0,i=12020 of 39494, loss=1597.008545
epoch=0,i=12021 of 39494, loss=1597.008545
epoch=0,i=12022 of 39494, loss=1597.008545
epoch=0,i=12023 of 39494, loss=1597.008545
epoch=0,i=12024 of 39494, loss=1597.008545
epoch=0,i=12025 of 39494, loss=1597.008545
epoch=0,i=12026 of 39494, loss=1597.008545
epoch=0,i=12027 of 39494, loss=1597.008545

The loss is still stuck at a constant; only the value changed, roughly doubling from 798.50 to 1597.01 along with the batch size. I also inspected the run in TensorBoard.
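The near-exact doubling (798.504578 → 1597.008545 ≈ 2 × 798.504578) is what you would see if the loss were summed over the batch rather than averaged; a quick NumPy check of the two reductions, using hypothetical per-pixel loss values:

```python
import numpy as np

# Hypothetical per-pixel cross-entropy values for one training image
per_image = np.full(1000, 0.7985)
batch1 = per_image                               # batch size 1
batch2 = np.concatenate([per_image, per_image])  # batch size 2, same image twice

sum1, sum2 = batch1.sum(), batch2.sum()      # a summed loss doubles with batch size
mean1, mean2 = batch1.mean(), batch2.mean()  # a mean loss stays the same
```

Either reduction can be used for training, but a mean (e.g. `tf.reduce_mean`) keeps the reported loss comparable across batch sizes.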

While checking the data pipeline I found that the labels had been normalized along with the images. After removing the label normalization, and switching the histogram summary to record the post-softmax values, the results looked as follows:

The loss value now changes from step to step, which is the expected behavior. That resolves the problem of the loss being stuck at a constant.
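A small NumPy sketch of why normalizing the labels breaks training: a softmax cross-entropy loss over class indices expects raw integer class IDs, and dividing them by 255 squashes every ID toward zero, so after the integer cast every pixel is treated as class 0 and the loss freezes. The logits and labels below are made-up illustration values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sparse_xent(logits, labels):
    # labels must be integer class indices into the last axis of logits
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels])

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.5, 0.3]])
labels = np.array([0, 1])                  # raw integer class IDs: correct
loss_ok = sparse_xent(logits, labels).mean()

# "Normalizing" the labels like the images collapses the class IDs:
labels_bad = (labels / 255.0).astype(np.int64)   # -> [0, 0]
loss_bad = sparse_xent(logits, labels_bad).mean()
```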

Below is a simple TensorFlow implementation of an FCN network structure:

```python
import tensorflow as tf

def fcn(images, num_classes):
    def conv(x, filters):
        return tf.layers.conv2d(x, filters=filters, kernel_size=3, strides=1,
                                padding='same', activation=tf.nn.relu)

    def pool(x):
        return tf.layers.max_pooling2d(x, pool_size=2, strides=2, padding='same')

    def upconv(x, filters):
        return tf.layers.conv2d_transpose(x, filters=filters, kernel_size=3,
                                          strides=2, padding='same',
                                          activation=tf.nn.relu)

    # Encoder (VGG-16-style): five convolution blocks, each followed by
    # a 2x2 max pooling that halves the spatial resolution
    conv1 = conv(images, 64)
    conv2 = conv(conv1, 64)      # H
    pool1 = pool(conv2)          # H/2
    conv3 = conv(pool1, 128)
    conv4 = conv(conv3, 128)     # H/2
    pool2 = pool(conv4)          # H/4
    conv5 = conv(pool2, 256)
    conv6 = conv(conv5, 256)
    conv7 = conv(conv6, 256)     # H/4
    pool3 = pool(conv7)          # H/8
    conv8 = conv(pool3, 512)
    conv9 = conv(conv8, 512)
    conv10 = conv(conv9, 512)    # H/8
    pool4 = pool(conv10)         # H/16
    conv11 = conv(pool4, 512)
    conv12 = conv(conv11, 512)
    conv13 = conv(conv12, 512)   # H/16
    pool5 = pool(conv13)         # H/32

    # Decoder: transposed convolutions upsample step by step, and each step
    # concatenates the encoder feature map of the same spatial resolution
    conv14 = conv(pool5, 512)
    conv15 = conv(conv14, 512)
    conv16 = conv(conv15, 512)
    upconv1 = upconv(conv16, 512)                   # H/16
    concat1 = tf.concat([conv13, upconv1], axis=3)
    conv17 = conv(concat1, 512)
    conv18 = conv(conv17, 512)
    conv19 = conv(conv18, 512)
    upconv2 = upconv(conv19, 256)                   # H/8
    concat2 = tf.concat([conv10, upconv2], axis=3)
    conv20 = conv(concat2, 256)
    conv21 = conv(conv20, 256)
    conv22 = conv(conv21, 256)
    upconv3 = upconv(conv22, 128)                   # H/4
    concat3 = tf.concat([conv7, upconv3], axis=3)   # conv7 is also at H/4
    conv23 = conv(concat3, 128)
    conv24 = conv(conv23, 128)
    upconv4 = upconv(conv24, 64)                    # H/2
    concat4 = tf.concat([conv4, upconv4], axis=3)   # conv4 is also at H/2
    conv25 = conv(concat4, 64)
    conv26 = conv(conv25, 64)
    upconv5 = upconv(conv26, 64)                    # H
    concat5 = tf.concat([conv2, upconv5], axis=3)
    conv27 = conv(concat5, 64)

    # Final 1x1 convolution outputs the per-pixel logits, one channel per
    # class; no activation here, softmax is applied in the loss
    output = tf.layers.conv2d(conv27, filters=num_classes, kernel_size=1,
                              strides=1, padding='same', activation=None)
    return output
```

This code defines a function named `fcn` that takes two arguments: `images`, the input images, and `num_classes`, the number of classes. It builds a standard FCN structure with an encoder and a decoder. The encoder stacks convolution and pooling layers to extract features from the input image; the decoder uses transposed convolutions with skip connections to restore the feature maps to the original size. Note that each skip connection must concatenate an encoder feature map whose spatial resolution matches the upsampled decoder output, and with five pooling stages five upsampling stages are needed to return to the input resolution. The final 1×1 convolution outputs the prediction logits, with as many channels as there are classes.
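The resolution bookkeeping in the sketch above can be checked with a few lines of arithmetic: five stride-2 poolings divide the spatial size by 32, so five stride-2 transposed convolutions are needed to return to the input resolution (assuming `'same'` padding and an input size divisible by 32, e.g. 224):

```python
# Spatial sizes through the encoder and decoder of the FCN sketch above
size = 224
encoder = [size // (2 ** i) for i in range(6)]        # after each pooling stage
decoder = [encoder[-1] * (2 ** i) for i in range(6)]  # after each upsampling stage
```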