Neural networks
- input layer
- hidden layer
- output layer
How is a network trained?
Training needs many data samples: compare the predicted answer against the true answer, then adjust the model a little each time.
Activation function
Gradient descent
This is an optimization problem.
The cost function is the error equation:
Cost = (predicted - real)^2 = (Wx - y)^2
The local optimum a neural network finds is usually already good enough.
- local optimum
- global optimum
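The training loop above can be sketched as one-dimensional gradient descent on the squared-error cost Cost = (Wx - y)^2. This is an illustrative toy (the function name, learning rate, and the single-sample setup are assumptions, not from the source):

```python
# Gradient descent on Cost = (w*x - y)^2 for a scalar weight w.
# The data satisfies y = 3*x, so training should drive w toward 3.

def train(x, y, w=0.0, lr=0.01, steps=200):
    for _ in range(steps):
        pred = w * x                # model prediction
        grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad              # change the model a little, against the gradient
    return w

w = train(x=2.0, y=6.0)  # single sample with y = 3*x
```

Because the cost is convex in this toy case, the local optimum here is also the global one; in a real network the surface has many local optima, which is the point of the list above.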
Convolutional neural networks (TensorFlow)
kernel/filter
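To make the kernel/filter idea concrete, here is a rough NumPy sketch of a single 2-D convolution (valid padding, stride 1); the function name and the averaging kernel are illustrative, not TensorFlow's implementation:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image and sum the elementwise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0            # a simple 3x3 averaging filter
feature_map = conv2d_valid(image, kernel)  # shape (2, 2)
```

The same kernel weights are shared across every position of the image, which is why the code below calls the kernel a shared weight matrix.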
Initialization
When initializing the shared weight matrix (kernel/filter), I first set stddev to 1 (stddev=1), and the trained model's accuracy stayed as low as 0.1. After checking the code and lowering stddev to 0.1 (stddev=0.1), the trained model's accuracy rose to 0.99.
import tensorflow as tf

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
tf.truncated_normal — truncated normal distribution
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
In other words, it generates values from a normal distribution with the specified mean and standard deviation, keeping only values within two standard deviations of the mean.
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
Returns:
A tensor of the specified shape filled with random truncated normal values.
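To see the "dropped and re-picked" rule in action, here is a rough NumPy simulation of the same sampling behavior (not TensorFlow's actual implementation; the function name is mine):

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, rng=None):
    """Sample a normal distribution, re-picking any value whose distance
    from the mean exceeds 2 standard deviations (like tf.truncated_normal)."""
    rng = np.random.default_rng() if rng is None else rng
    out = rng.normal(mean, stddev, size=shape)
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():                      # re-pick only the out-of-range values
        out[bad] = rng.normal(mean, stddev, size=bad.sum())
        bad = np.abs(out - mean) > 2 * stddev
    return out

w = truncated_normal((5, 5), stddev=0.1)  # every entry lies in [-0.2, 0.2]
```

With stddev=0.1, every weight lands in [-0.2, 0.2], which matches the small initial weights that fixed the accuracy problem above.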