I've been using TensorLayer a lot recently, and I want to summarize some problems I've run into while using it.
1. TensorLayer's tl.iterate.minibatches only yields full batches of size batch_size; the leftover samples at the end that don't fill a complete batch are silently dropped.
Here is the source code:
def minibatches(inputs=None, targets=None, batch_size=None, shuffle=False):
    assert len(inputs) == len(targets)
    if shuffle:
        indices = np.arange(len(inputs))
        np.random.shuffle(indices)
    # Note the stop value: len(inputs) - batch_size + 1, so the final
    # partial batch is never yielded.
    for start_idx in range(0, len(inputs) - batch_size + 1, batch_size):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batch_size]
        else:
            excerpt = slice(start_idx, start_idx + batch_size)
        yield inputs[excerpt], targets[excerpt]
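If you don't want to lose the tail samples, a small variant fixes this: iterate over the full length and let the last slice be shorter. This is a minimal sketch of my own (the function name is made up, not a TensorLayer API):

```python
import numpy as np

def minibatches_with_remainder(inputs, targets, batch_size, shuffle=False):
    """Like tl.iterate.minibatches, but also yields the final partial batch."""
    assert len(inputs) == len(targets)
    indices = np.arange(len(inputs))
    if shuffle:
        np.random.shuffle(indices)
    # range stops at len(inputs), so the last excerpt may be shorter
    # than batch_size instead of being dropped.
    for start_idx in range(0, len(inputs), batch_size):
        excerpt = indices[start_idx:start_idx + batch_size]
        yield inputs[excerpt], targets[excerpt]
```

With 10 samples and batch_size=3 this yields four batches (3, 3, 3, 1) instead of three, so no data is wasted; just be aware that layers assuming a fixed batch dimension may need `None` as the batch size in their placeholders.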
2. After saving a model with saver in TensorLayer, one layer that is particularly painful to restore is DropoutLayer.
DropoutLayer creates a placeholder internally, so after restoring the model I kept being asked to provide a feed_dict value for a placeholder. At first I couldn't figure out where the extra placeholder was coming from, until I found it inside DropoutLayer:
class DropoutLayer(Layer):
    def __init__(
        self,
        layer = None,
        keep = 0.5,
        is_fix = False,
        is_train = True,
        seed = None,
        name = 'dropout_layer',
    ):
        Layer.__init__(self, name=name)
        if is_train is False:
            print("  [TL] skip DropoutLayer")
            self.outputs = layer.outputs
            self.all_layers = list(layer.all_layers)
            self.all_params = list(layer.all_params)
            self.all_drop = dict(layer.all_drop)
        else:
            self.inputs = layer.outputs
            print("  [TL] DropoutLayer %s: keep:%f is_fix:%s" % (self.name, keep, is_fix))
            # The name of placeholder for keep_prob is the same with the name
            # of the Layer.
            if is_fix:
                self.outputs = tf.nn.dropout(self.inputs, keep, seed=seed, name=name)
            else:
                set_keep[name] = tf.placeholder(tf.float32)
                self.outputs = tf.nn.dropout(self.inputs, set_keep[name], seed=seed, name=name) # 1.2
            self.all_layers = list(layer.all_layers)
            self.all_params = list(layer.all_params)
            self.all_drop = dict(layer.all_drop)
            if is_fix is False:
                self.all_drop.update( {set_keep[name]: keep} )
            self.all_layers.extend( [self.outputs] )
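As the code shows, when is_fix is False the keep probability lives in a hidden placeholder stored in set_keep[name] and registered in network.all_drop, so every sess.run must feed it. TensorLayer's own workaround is to feed 1 to every entry of all_drop at evaluation time via tl.utils.dict_to_one (or to build the layer with is_fix=True so no placeholder is created). A minimal sketch of that helper's behavior, with a hypothetical usage (the network/cost/session names are assumptions, not from the source):

```python
# Sketch of what tl.utils.dict_to_one does: map every keep_prob
# placeholder in network.all_drop to 1, i.e. disable dropout.
def dict_to_one(dp_dict):
    # Same behavior as tl.utils.dict_to_one in TensorLayer.
    return {x: 1 for x in dp_dict}

# Hypothetical evaluation of a restored network (names are assumptions):
# feed_dict = {x: X_test, y_: y_test}
# feed_dict.update(dict_to_one(network.all_drop))  # fill the hidden placeholders
# sess.run(cost, feed_dict=feed_dict)
```

During training you would instead do feed_dict.update(network.all_drop) so each placeholder receives its configured keep value.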