net = tf_util.max_pool2d(net, [num_point, 1],
                         padding='VALID', scope='tmaxpool')
Let's look at the concrete implementation:
def max_pool2d(inputs,
               kernel_size,
               scope,
               stride=[2, 2],
               padding='VALID'):
  """ 2D max pooling.

  Args:
    inputs: 4-D tensor BxHxWxC
    kernel_size: a list of 2 ints
    stride: a list of 2 ints
  Returns:
    Variable tensor
  """
  with tf.variable_scope(scope) as sc:
    kernel_h, kernel_w = kernel_size
    stride_h, stride_w = stride
    outputs = tf.nn.max_pool(inputs,
                             ksize=[1, kernel_h, kernel_w, 1],
                             strides=[1, stride_h, stride_w, 1],
                             padding=padding,
                             name=sc.name)
    return outputs
This is really just a thin wrapper around tf.nn.max_pool. The input here is the net output of the third layer, with shape [32, 1024, 1, 1024]. The call passes kernel_size=[num_point, 1], i.e. [1024, 1], so with VALID padding the kernel covers the entire H axis and exactly one window fits; the default stride of [2, 2] therefore has no effect here. The pool simply takes the channel-wise maximum over all 1024 points at once, yielding an output of shape [32, 1, 1, 1024]. For now I don't know why this is done.
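To make the shape transformation concrete, here is a minimal NumPy sketch of what that pooling computes (NumPy stands in for tf.nn.max_pool; the batch and channel sizes are shrunk for illustration, the real shapes being B=32 and C=1024):

```python
import numpy as np

# BxHxWxC layout as in the TF call above: H = num_point, W = 1.
# Shrunk dims for illustration; the real run uses B=32, C=1024.
B, N, C = 2, 1024, 8
rng = np.random.default_rng(0)
net = rng.standard_normal((B, N, 1, C)).astype(np.float32)

# A [num_point, 1] kernel with VALID padding fits exactly one window,
# so the pool collapses the whole point axis to one max per channel.
pooled = net.max(axis=1, keepdims=True)

print(pooled.shape)  # (2, 1, 1, 8) — i.e. [B, 1, 1, C]
```

In other words, the strided-window machinery of tf.nn.max_pool degenerates here into a plain per-channel maximum over the point dimension.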