tf.nn.dropout()
- keep_prob = tf.placeholder(tf.float32)
- hidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])
- hidden_layer = tf.nn.relu(hidden_layer)
- hidden_layer = tf.nn.dropout(hidden_layer,keep_prob)
- logits = tf.add(tf.matmul(hidden_layer,weights[1]),biases[1])
The tf.nn.dropout() function takes two parameters:
- 1. hidden_layer: the tensor to which you want to apply dropout
- 2. keep_prob: the probability that any given unit is kept
- To compensate for the dropped units, tf.nn.dropout() multiplies every retained unit by 1/keep_prob
- At test time, set keep_prob to 1.0 so that all units are kept and the model runs at full capacity (see the sketch after this list)
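A minimal end-to-end sketch of this train/test distinction, assuming the TensorFlow 1.x API used in these notes; the layer sizes, random weights, and batch data below are made up purely for illustration:

```python
import numpy as np
import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 4])
keep_prob = tf.placeholder(tf.float32)  # probability of keeping a unit

# Illustrative two-layer network: 4 -> 3 -> 2
weights = [tf.Variable(tf.truncated_normal([4, 3])),
           tf.Variable(tf.truncated_normal([3, 2]))]
biases = [tf.Variable(tf.zeros([3])),
          tf.Variable(tf.zeros([2]))]

hidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])
hidden_layer = tf.nn.relu(hidden_layer)
hidden_layer = tf.nn.dropout(hidden_layer, keep_prob)
logits = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(8, 4).astype(np.float32)
    # Training: drop roughly half of the hidden units (each kept unit is scaled by 1/0.5).
    train_logits = sess.run(logits, feed_dict={features: batch, keep_prob: 0.5})
    # Testing: keep_prob = 1.0 keeps every unit, so no scaling is applied.
    test_logits = sess.run(logits, feed_dict={features: batch, keep_prob: 1.0})
```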
import tensorflow as tf
hidden_layer_weights = [
[0.1, 0.2, 0.4],
[0.4, 0.6, 0.6],
[0.5, 0.9,