Upgrading SpatialPyramidPooling to tensorflow-gpu-2.3, with a download link

The simplest way to support multi-scale image input is SpatialPyramidPooling. After downloading the keras-spp source code from https://github.com/yhenon/keras-spp and installing it straight away, I found plenty of problems and it would not run properly.

That is hardly surprising: since TensorFlow moved to 2.0, a lot of older code no longer runs as-is. So I went through the source line by line and rewrote it to follow the tf2.3 API, which took a whole day. The link below is the code upgraded to tf2.3; download and install it and it is ready to use.

https://download.csdn.net/download/u011616825/14037923
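
The changes mostly amount to rewriting the layers against the tf.keras API, since the original keras-spp code targets the old standalone Keras interface (keras.engine imports, the dim_ordering argument, and so on), which no longer exists in TF 2.x. The sketch below is not the uploaded code, just my own minimal illustration of what a channels-last SpatialPyramidPooling layer can look like under tf.keras 2.3 (it assumes the feature map is at least max(pool_list) pixels in each spatial dimension):

import tensorflow as tf

class SpatialPyramidPooling(tf.keras.layers.Layer):
    # Minimal SPP sketch for channels-last input (batch, H, W, C).
    # For each level n in pool_list the feature map is split into an n x n grid
    # and max-pooled per cell; the cells of all levels are concatenated into a
    # fixed-length vector of size C * sum(n * n).

    def __init__(self, pool_list, **kwargs):
        super().__init__(**kwargs)
        self.pool_list = pool_list

    def build(self, input_shape):
        self.nb_channels = input_shape[-1]
        super().build(input_shape)

    def compute_output_shape(self, input_shape):
        return (input_shape[0],
                self.nb_channels * sum(n * n for n in self.pool_list))

    def call(self, x):
        shape = tf.shape(x)
        h = tf.cast(shape[1], tf.float32)
        w = tf.cast(shape[2], tf.float32)
        outputs = []
        for n in self.pool_list:
            # Bin edges of the n x n grid over the (possibly dynamic) H x W map.
            rows = tf.cast(tf.round(tf.linspace(0.0, h, n + 1)), tf.int32)
            cols = tf.cast(tf.round(tf.linspace(0.0, w, n + 1)), tf.int32)
            for i in range(n):
                for j in range(n):
                    cell = x[:, rows[i]:rows[i + 1], cols[j]:cols[j + 1], :]
                    outputs.append(tf.reduce_max(cell, axis=[1, 2]))
        return tf.concat(outputs, axis=1)

    def get_config(self):
        config = super().get_config()
        config.update({'pool_list': self.pool_list})
        return config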

It passes a test on MNIST; the test code:

import tensorflow as tf
from spp.RoiPoolingConv import RoiPoolingConv
from spp.SpatialPyramidPooling import SpatialPyramidPooling


(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Preprocess the data: reshape adds the channel dimension (the previous post used
# tf.expand_dims(x_train, -1) for this instead) and scale pixel values to [0, 1]
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype('float32') / 255.0
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)


inputs = tf.keras.Input(shape=(None, None, 1))
h1 = tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation='relu', kernel_initializer='uniform')(inputs)
h2 = tf.keras.layers.BatchNormalization()(h1)
h3 = tf.keras.layers.Dropout(0.5)(h2)
h4 = tf.keras.layers.Dense(64, activation='relu')(h3)
h5 = SpatialPyramidPooling([1, 4, 6])(h4)
# h5 = tf.keras.layers.Flatten()(h4)  # fixed-size alternative: Flatten only works when the input size is fixed
outputs = tf.keras.layers.Dense(10, activation='softmax')(h5)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
his = model.fit(x_train, y_train, batch_size=128, epochs=5)
model.summary()
loss, accuracy = model.evaluate(x_test, y_test)
print("Accuracy:"+str(accuracy))

Test results:

 3/469 [..............................] - ETA: 8s - loss: 3.7811 - accuracy: 0.1615WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0065s vs `on_train_batch_end` time: 0.0156s). Check your callbacks.
469/469 [==============================] - 12s 26ms/step - loss: 0.3204 - accuracy: 0.9026
Epoch 2/5
469/469 [==============================] - 12s 26ms/step - loss: 0.0933 - accuracy: 0.9709
Epoch 3/5
469/469 [==============================] - 12s 26ms/step - loss: 0.0727 - accuracy: 0.9770
Epoch 4/5
469/469 [==============================] - 12s 26ms/step - loss: 0.0632 - accuracy: 0.9803
Epoch 5/5
469/469 [==============================] - 12s 26ms/step - loss: 0.0573 - accuracy: 0.9822
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, None, None, 1)]   0         
_________________________________________________________________
conv2d (Conv2D)              (None, None, None, 128)   1280      
_________________________________________________________________
batch_normalization (BatchNo (None, None, None, 128)   512       
_________________________________________________________________
dropout (Dropout)            (None, None, None, 128)   0         
_________________________________________________________________
dense (Dense)                (None, None, None, 64)    8256      
_________________________________________________________________
spatial_pyramid_pooling (Spa (None, 3392)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                33930     
=================================================================
Total params: 43,978
Trainable params: 43,722
Non-trainable params: 256
_________________________________________________________________
313/313 [==============================] - 1s 4ms/step - loss: 0.0552 - accuracy: 0.9828
Accuracy:0.9828000068664551

Process finished with exit code 0
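
Since the SPP layer always produces a fixed-length vector no matter what spatial size it receives, the trained model above can also be fed images of a different resolution. The snippet below is my own quick check, not part of the original test script; it continues from the code above and simply resizes a few test digits before predicting:

# Resize a few test digits to 56x56; the SPP layer absorbs the size change,
# so the same trained model can still run a forward pass on them.
x_big = tf.image.resize(x_test[:8], (56, 56))
preds = model.predict(x_big)
print(preds.argmax(axis=1))
print(y_test[:8].argmax(axis=1))

(Training only ever saw 28x28 inputs, so accuracy at other scales is not guaranteed; the point is only that the network accepts them.)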
 

 

As for how SPP works, the idea is fairly simple, so I'll just include the original figure from the paper:

[Figure: the spatial pyramid pooling layer from the SPP paper]

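The fixed output length in the summary above can also be checked by hand: each pyramid level n contributes n x n max-pooled cells per channel, so with pool_list = [1, 4, 6] and the 64 channels coming out of the preceding Dense layer, the SPP output is 64 * (1 + 16 + 36) = 64 * 53 = 3392, which matches the (None, 3392) shape printed by model.summary(). A quick check in plain Python:

channels = 64            # channels entering the SPP layer (output of the Dense(64))
pool_list = [1, 4, 6]    # pyramid levels used in the test code
print(channels * sum(n * n for n in pool_list))  # 3392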