0. name_scope and variable_scope
- name_scope adds the scope name as a prefix to all operations
- variable_scope adds the scope name as a prefix to all variables and operations
Creating variables:
- The tf.Variable() constructor prefixes the variable name with both the current name_scope and variable_scope
- The tf.get_variable() constructor ignores name_scope and prefixes the name with the current variable_scope only
Use tf.variable_scope to define shared variables. The easiest way to reuse variables is to call reuse_variables(), as shown below:
with tf.variable_scope("scope"):
    var1 = tf.get_variable("variable1", [1])
    tf.get_variable_scope().reuse_variables()
    var2 = tf.get_variable("variable1", [1])
assert var1 == var2
https://programming.vip/docs/5d4e8ed7f3666.html
1. placeholder
tf.placeholder(
    dtype,       # commonly tf.float32
    shape=None,  # None accepts a tensor of any shape; [None, 4] means 2-D with an unknown number of rows and 4 columns
    name=None
)
2. fetch and feed
2.1 fetch runs multiple ops in a single session call
input1 = tf.constant(3.0)
input2 = tf.constant(2.0)
input3 = tf.constant(5.0)
add = tf.add(input2, input3)
mul = tf.multiply(input1, add)
with tf.Session() as sess:
    result = sess.run([mul, add])
    print(result)
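The graph above computes add = 2 + 5 = 7 and then mul = 3 × 7 = 21, so sess.run([mul, add]) returns both results from one pass over the graph. The same arithmetic, checked in plain Python (a sketch independent of TensorFlow):

```python
# Mirror the graph above in plain Python: "fetching" both values at once
# corresponds to sess.run([mul, add]).
input1, input2, input3 = 3.0, 2.0, 5.0
add = input2 + input3   # 7.0
mul = input1 * add      # 21.0
result = [mul, add]
print(result)           # [21.0, 7.0]
```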
2.2 feed passes different data into the graph on each run
# Create placeholders
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)
with tf.Session() as sess:
    # feed data as a dictionary
    print(sess.run(output, feed_dict={input1: [2.0], input2: [6.0]}))
3. Math-related APIs
3.1 tf.nn.embedding_lookup
Take word embeddings in NLP as an example: tf.nn.embedding_lookup(word_embedding, input_word), where word_embedding has shape vocab_size (vocabulary size) × embedding_size (embedding dimension). Consider input_word in the 1-D and 2-D cases separately.
If input_word is 1-D with dimension word_num, the result has shape word_num × embedding_size; that is, word_num word vectors are looked up.
If input_word is 2-D with shape sentence_num (number of sentences) × word_num (words per sentence), the result has shape sentence_num × word_num × embedding_size: sentence_num sentences, each containing word_num words, each word an embedding_size-dimensional vector.
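These shape rules are exactly NumPy fancy indexing over the first axis of the embedding matrix. A minimal sketch (the sizes vocab_size=6 and embedding_size=4 are made up for illustration):

```python
import numpy as np

vocab_size, embedding_size = 6, 4  # hypothetical sizes
word_embedding = np.arange(vocab_size * embedding_size).reshape(vocab_size, embedding_size)

# 1-D input: word_num ids -> (word_num, embedding_size)
ids_1d = np.array([0, 2, 5])
out_1d = word_embedding[ids_1d]   # same row selection as tf.nn.embedding_lookup
print(out_1d.shape)               # (3, 4)

# 2-D input: (sentence_num, word_num) -> (sentence_num, word_num, embedding_size)
ids_2d = np.array([[0, 2, 5], [1, 1, 3]])
out_2d = word_embedding[ids_2d]
print(out_2d.shape)               # (2, 3, 4)
```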
import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()
embedding = tf.Variable(np.identity(6, dtype=np.int32))
input_ids = tf.placeholder(dtype=tf.int32, shape=[None, 3])
input_embedding = tf.nn.embedding_lookup(embedding, input_ids)
sess.run(tf.global_variables_initializer())
print(sess.run(embedding))
print('separate paragraph')
print(sess.run(input_embedding, feed_dict={input_ids: [[0, 0, 0], [0, 1, 0]]}))
4. Convolution
4.1 1-D convolution
import numpy as np
import tensorflow as tf

inputs = tf.placeholder(dtype=tf.float32, shape=[None, 5, 2])
data = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]], dtype=np.float32)
data = np.expand_dims(data, axis=0)
print(data.shape)
out = tf.layers.conv1d(inputs, 5, 3)  # 5 filters, each with a window (kernel size) of 3
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(out)
    print(sess.run(out, feed_dict={inputs: data}))
Output:
(1, 5, 2)
Tensor("conv1d_32/BiasAdd:0", shape=(?, 3, 5), dtype=float32)
[[[ 6.1485815 -1.2904111 8.131317 1.8840454 0.31307578]
[ 8.398123 -0.84522295 11.939692 3.5162792 0.36305082]
[10.647663 -0.4000342 15.748065 5.148513 0.41302562]]]
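The output shape (?, 3, 5) follows from the default 'valid' padding: output length = input length − window + 1 = 5 − 3 + 1 = 3, with one output channel per filter. A minimal NumPy re-implementation of a valid 1-D convolution (cross-correlation, as convolution layers actually compute it) reproduces the shape; the exact values above depend on the layer's random weight initialization:

```python
import numpy as np

def conv1d_valid(x, kernels, bias):
    """x: (length, in_ch); kernels: (window, in_ch, filters); bias: (filters,)."""
    length, _ = x.shape
    window, _, filters = kernels.shape
    out_len = length - window + 1  # 'valid' padding: 5 - 3 + 1 = 3
    out = np.zeros((out_len, filters))
    for t in range(out_len):
        # each output step is a windowed dot product against every filter
        out[t] = np.tensordot(x[t:t + window], kernels, axes=([0, 1], [0, 1])) + bias
    return out

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]], dtype=float)
kernels = np.random.randn(3, 2, 5)  # window=3, in_channels=2, filters=5
out = conv1d_valid(x, kernels, np.zeros(5))
print(out.shape)                    # (3, 5), matching shape=(?, 3, 5) above
```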