TensorFlow learning notes -- building a neural network with a class

2019.9.22

Defining a neural network with a class

1. Key points:

a. When defining the model, remember to create both the optimizer and the optimizer's training op, and fetch that op in sess.run; otherwise no optimization happens and the loss will not decrease.

b. To get the shape of a tensor a inside the graph, use tf.shape(a); you cannot call a.shape() -- shape is an attribute, not a method, and a.shape gives only the static shape while tf.shape(a) returns the dynamic shape as a tensor.

c. When building the network from TensorFlow's predefined layers, note that dense, conv2d, etc. are classes or functions, not modules, so they must be imported with from ... import:

# wrong import: Dense is not a module, so this fails
import tensorflow.layers.Dense as Dense
# correct import
from tensorflow.layers import Dense as Dense
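Point b can be checked quickly with a minimal sketch: shape is an attribute on a tensor (calling it raises a TypeError), while tf.shape builds a tensor usable inside the graph.

```python
import tensorflow as tf

a = tf.zeros([32, 6])
print(a.shape)      # static shape: a TensorShape attribute, here (32, 6)
print(tf.shape(a))  # dynamic shape: a tensor that can be used in the graph
# a.shape() raises TypeError: 'TensorShape' object is not callable
```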

2. Experiment code

import tensorflow as tf
from tensorflow.layers import Dense as Dense
from tensorflow.losses import mean_squared_error as mse
import numpy as np

class MLP(object):

	def __init__(self, name, layers, inputs, labels):
		self.name = name
		self.layers = layers  # layer widths, e.g. [6, 12, 24, 12, 6]
		self.inputs = inputs
		self.optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
		self.loss = 0
		print('build neural network')
		self._build()
		print('build done')
		self._loss(labels)
		# The training op: this must be fetched in sess.run for the
		# weights to actually be updated (see key point a).
		self.opt_op = self.optimizer.minimize(self.loss)

	def _build(self):
		# tf.shape(a) returns the dynamic shape as a tensor (see key point b)
		input_size = tf.shape(self.inputs)[1]
		print('	layers summary:')
		with tf.compat.v1.variable_scope(self.name):
			# input layer -> first hidden layer
			h = Dense(self.layers[0], activation='relu', use_bias=True,
				kernel_initializer='glorot_uniform', bias_initializer='zeros')(self.inputs)
			print(h)
			# remaining hidden layers
			for i in range(1, len(self.layers) - 1):
				h = Dense(self.layers[i], activation='relu', use_bias=True,
					kernel_initializer='glorot_uniform', bias_initializer='zeros')(h)
				print(h)
			# output layer
			self.output = Dense(self.layers[-1], activation='relu', use_bias=True,
				kernel_initializer='glorot_uniform', bias_initializer='zeros')(h)
			print(self.output)

	def _loss(self, labels):
		self.loss += mse(labels, self.output)


if __name__ == '__main__':
	x = tf.placeholder(tf.float32, [None, 6], name='edge_attr')
	labels = tf.placeholder(tf.float32, [None, 6], name='edge_labels')
	model = MLP(name='edg_func', layers=[6, 12, 24, 12, 6], inputs=x, labels=labels)

	train_data = np.random.random([32, 6])
	train_labels = np.ones((32, 6))
	with tf.compat.v1.Session() as sess:
		sess.run(tf.global_variables_initializer())
		for epoch in range(20):
			# Fetch both the training op and the loss; opt_op itself
			# evaluates to None, hence the None in the output below.
			outputs = sess.run([model.opt_op, model.loss], feed_dict={x: train_data, labels: train_labels})
			print(outputs)

Output:

[None, 0.9066148]
[None, 0.8933821]
[None, 0.879875]
[None, 0.8664134]
[None, 0.8531418]
[None, 0.84034824]
[None, 0.8276992]
[None, 0.8152035]
[None, 0.80269957]
[None, 0.7901166]
[None, 0.7773823]
[None, 0.76448506]
[None, 0.7514027]
[None, 0.7381938]
[None, 0.72484916]
[None, 0.71132785]
[None, 0.6975357]
[None, 0.6835093]
[None, 0.6693341]
[None, 0.6549617]

If sess.run does not fetch the optimizer's op model.opt_op, the model is not optimized and the loss does not decrease:

# Replace the outputs line in the code above with the following, so the graph only computes the loss
outputs = sess.run([model.loss], feed_dict={x:train_data, labels:train_labels})

The output is then:

[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
[0.965892]
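The constant loss above can be reproduced with a much smaller graph (a minimal sketch using the tf.compat.v1 API; the disable_eager_execution call is only needed when running under TF 2.x): fetching the loss alone never touches the variable, while also fetching the train op makes the loss fall.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # required under TF 2.x

w = tf.compat.v1.get_variable('w', initializer=5.0)
loss = tf.square(w - 3.0)  # minimized at w == 3
train_op = tf.compat.v1.train.AdamOptimizer(0.1).minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(3):
        print(sess.run(loss))              # loss only: prints 4.0 every time
    for _ in range(3):
        _, l = sess.run([train_op, loss])  # train op fetched: loss decreases
        print(l)
```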
