Deep Learning: Implementing a Fully Connected Network

tf.Variable(): creates and initializes a variable
tf.Variable(initializer, name)
The initializer argument supplies the initial value; name is an optional, user-chosen name for the variable.
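
A minimal sketch of creating and reading a variable (the shape and stddev here are just illustrative values):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.Variable(tf.truncated_normal([2, 3], stddev=0.1), name="w")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be initialized before use
    print(sess.run(w))                           # a 2x3 matrix of small random values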

tf.truncated_normal(): draws values from a normal distribution restricted to a given range
tf.truncated_normal(
    shape,
    mean=0.0,
    stddev=1.0,
    dtype=tf.float32,
    seed=None,
    name=None
)
Produces truncated-normal random numbers whose values lie in [mean - 2 * stddev, mean + 2 * stddev].
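
A small sketch that checks the truncation range (the sample count 10000 is arbitrary):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

samples = tf.truncated_normal([10000], mean=0.0, stddev=1.0, seed=1)
with tf.Session() as sess:
    values = sess.run(samples)
    print(values.min(), values.max())  # every value lies within mean +/- 2*stddev, i.e. [-2, 2]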

tf.reduce_sum()
Computes the sum of a tensor along a given axis; the summed dimension can be dropped from the result. A short example follows the parameter list below.
tf.reduce_sum(
    input_tensor,
    axis=None,
    keepdims=None,
    name=None,
    reduction_indices=None,
    keep_dims=None
)
input_tensor: the tensor to be summed;
axis: the axis to reduce over; if omitted, all elements are summed;
keepdims: whether to keep the original rank; if True the result keeps the input tensor's shape, if False the summed dimension is dropped; defaults to False when not passed;
name: name of the operation;
reduction_indices: the old name for axis, deprecated;
keep_dims: the old name for keepdims, deprecated
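
A minimal sketch of summing along different axes (the 2x3 constant is just a toy input):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])
with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(x)))                         # 21.0, sum over all elements
    print(sess.run(tf.reduce_sum(x, axis=0)))                 # [5. 7. 9.], sum down each column
    print(sess.run(tf.reduce_sum(x, axis=1, keepdims=True)))  # [[6.] [15.]], keeps the 2-D shape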

tf.pow(x, y)
Computes x ** y element-wise, for corresponding elements of x and y.
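
A small sketch with toy inputs:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([2., 3.])
y = tf.constant([3., 2.])
with tf.Session() as sess:
    print(sess.run(tf.pow(x, y)))  # [8. 9.], element-wise x ** y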

tf.add_to_collection()
tf.add_to_collection('list_name', element): appends element to the collection named list_name; tf.get_collection('list_name') returns that list later.
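
A small sketch pairing it with tf.get_collection and tf.add_n, the same pattern the full code below uses for its losses (the constants are only placeholders for real loss terms):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.add_to_collection("losses", tf.constant(1.0))  # append to the collection named "losses"
tf.add_to_collection("losses", tf.constant(2.0))
total = tf.add_n(tf.get_collection("losses"))     # read the list back and sum its entries
with tf.Session() as sess:
    print(sess.run(total))  # 3.0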

tf.contrib.layers.l2_regularizer(lambda):
Computes the value of the L2 regularization term, i.e. the sum of the squared weights scaled by the coefficient.
The lambda argument is the weight of the regularization term, the λ in J(θ) + λR(w).
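
Because tf.contrib was removed in TensorFlow 2.x (see the error below), here is a sketch of the same quantity built from tf.nn.l2_loss, which computes sum(w ** 2) / 2; the weight matrix is a toy value:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.constant([[1., -2.], [3., 4.]])
reg = 0.01 * tf.nn.l2_loss(w)  # 0.01 * (1 + 4 + 9 + 16) / 2 = 0.15
with tf.Session() as sess:
    print(sess.run(reg))  # 0.15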

tf.train.AdamOptimizer(0.01).minimize(loss)
This builds an optimizer; minimize() specifies the objective to be minimized.
The 0.01 is the learning rate.
loss is the objective we want to minimize.
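
A minimal sketch of the optimizer driving a toy loss toward its minimum (the scalar variable w and the quadratic loss are made up purely for illustration):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.Variable(5.0)
loss = tf.square(w - 2.0)                                # minimum at w = 2
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)  # 0.01 is the learning rate
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train_op)  # each run applies one Adam update to w
    print(sess.run(w))      # w has moved close to 2.0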

Error: the following fails because tf.contrib was removed in TensorFlow 2.x:


regularizer = tf.contrib.layers.l2_regularizer(0.01)
regularization = regularizer(weight1)+
regularizer(weight2)+regularizer(weight3)


Fix: replace it with tf.nn.l2_loss:

regularization_rate = 0.01
regularization = (regularization_rate * tf.nn.l2_loss(weight1) +
                  regularization_rate * tf.nn.l2_loss(weight2) +
                  regularization_rate * tf.nn.l2_loss(weight3))


Error:

InvalidArgumentError: You must feed a value for 
placeholder tensor 'x-input_7' with dtype 
float and shape [?,2]
	 [[{{node x-input_7}}]]

Fix: the default graph still contained placeholders from an earlier run (note the _7 suffix on the node name), which happens when the cell is re-run. Redefine the placeholders:

x = tf.placeholder(tf.float32, shape=(None, 2), name="x-input")
y_ = tf.placeholder(tf.float32, shape=(None, 1), name="y-output")

and add this line before them to clear the old graph:

tf.reset_default_graph()

Appendix:
The complete, runnable code:

import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
import numpy as np
training_steps = 30000

# Generate a toy 2-D dataset: 200 noisy points, label 0 if (x1, x2) falls inside the unit circle, 1 otherwise
data = []
label = []
for i in range(200):
    x1 = np.random.uniform(-1,1)
    x2 = np.random.uniform(0,2)
    if x1**2+x2**2<= 1:
        data.append([np.random.normal(x1,0.1),np.random.normal(x2,0.1)])
        label.append(0)
    else:
        data.append([np.random.normal(x1,0.1),np.random.normal(x2, 0.1)])
        label.append(1)
data = np.hstack(data).reshape(-1,2)
label = np.hstack(label).reshape(-1,1)
# Forward pass: two ReLU hidden layers followed by a linear output layer
def hidden_layer(input_tensor, weight1, bias1, weight2, bias2, weight3, bias3):
    layer1=tf.nn.relu(tf.matmul(input_tensor,weight1)+bias1)
    layer2=tf.nn.relu(tf.matmul(layer1, weight2)+bias2)
    return tf.matmul(layer2,weight3)+bias3
tf.reset_default_graph()
x = tf.placeholder(tf.float32, shape=(None, 2),name="x-input")
y_= tf.placeholder(tf.float32, shape=(None, 1),name="y-output")

# Parameters of a 2-10-10-1 network, initialized with small truncated-normal weights
weight1 = tf.Variable(tf.truncated_normal([2,10],stddev=0.1))
bias1 = tf.Variable(tf.constant(0.1, shape=[10]))
weight2 =tf.Variable(tf.truncated_normal([10,10],stddev=0.1))
bias2 = tf.Variable(tf.constant(0.1, shape=[10]))
weight3 = tf.Variable(tf.truncated_normal([10,1],stddev=0.1))
bias3 = tf.Variable(tf.constant(0.1, shape=[1]))
sample_size = len(data)

y= hidden_layer(x,weight1,bias1,weight2,bias2,weight3,bias3)
# Total loss = sum of squared errors + L2 regularization, both gathered from the "losses" collection
error_loss = tf.reduce_sum(tf.pow(y_ - y,2))
tf.add_to_collection("losses", error_loss)
regularization = 0.01*tf.nn.l2_loss(weight1)+0.01*tf.nn.l2_loss(weight2)+0.01*tf.nn.l2_loss(weight3)
tf.add_to_collection("losses",regularization)
loss = tf.add_n(tf.get_collection("losses"))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

# Train with Adam and report the loss every 2000 steps
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(training_steps):
        sess.run(train_op, feed_dict={x: data, y_: label})
        if i % 2000 ==0:
            loss_value = sess.run(loss, feed_dict={x: data, y_:label})
            print("After %d steps, mse_loss: %f" % (i,loss_value))