tf.nn.embedding_lookup retrieves rows of a tensor according to a list of ids — in other words, it maps each index to the corresponding entry of `params`.
Let's go straight to some examples.
Example 1
import numpy as np
import tensorflow as tf

t = np.asarray([1, 2, 3, 0])
params = tf.constant([10, 20, 30, 40])
embedded_inputs = tf.nn.embedding_lookup(params, t)
with tf.Session() as sess:
    print(sess.run(embedded_inputs))
Result
[20 30 40 10]
As you can see, the result is simply the entries of params looked up in the order given by t.
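For a 1-D lookup like this, tf.nn.embedding_lookup behaves just like NumPy fancy indexing, so the same result can be reproduced without TensorFlow (a minimal sketch):

```python
import numpy as np

params = np.array([10, 20, 30, 40])
t = np.array([1, 2, 3, 0])

# Fancy indexing selects params[i] for each index i in t,
# in the order the indices appear.
result = params[t]
print(result)  # -> [20 30 40 10]
```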
Example 2
import numpy as np
import tensorflow as tf

t = np.asarray([1, 2, 3, 0])
with tf.variable_scope('test', reuse=tf.AUTO_REUSE):
    t = tf.convert_to_tensor(t)
    embedding_table = tf.Variable([10, 20, 30, 40],
                                  name='phone_embedding',
                                  dtype=tf.float32)
    embedded_inputs = tf.nn.embedding_lookup(embedding_table, t)
init_ops = [tf.global_variables_initializer(),
            tf.local_variables_initializer()]
with tf.Session() as sess:
    sess.run(init_ops)
    print(sess.run(t))
    print(sess.run(embedding_table))
    print(sess.run(embedded_inputs))
Result
[1 2 3 0]
[10. 20. 30. 40.]
[20. 30. 40. 10.]
Example 3
The indices can also form a matrix (a 2-D array).
import numpy as np
import tensorflow as tf

# Index t is a 2D array
t = np.asarray([1, 2, 3, 0]).reshape([2, 2])
with tf.variable_scope('test', reuse=tf.AUTO_REUSE):
    t = tf.convert_to_tensor(t)
    embedding_table = tf.get_variable(
        'phone_embedding', [4, 3],
        dtype=tf.float32,
        initializer=tf.truncated_normal_initializer(stddev=0.5))
    embedded_inputs = tf.nn.embedding_lookup(embedding_table, t)
init_ops = [tf.global_variables_initializer(),
            tf.local_variables_initializer()]
with tf.Session() as sess:
    sess.run(init_ops)
    print('t = ', sess.run(t))
    print('embedding_table = ', sess.run(embedding_table))
    print('embedded_inputs = ', sess.run(embedded_inputs))
Result
t = [[1 2]
[3 0]]
embedding_table = [[ 0.19044022 -0.9089187 -0.71793836]
[-0.01651155 0.24362208 -0.02951453]
[ 0.7932888 0.68590254 0.14450389]
[ 0.4447717 -0.28524947 -0.79486984]]
embedded_inputs = [[[-0.01651155 0.24362208 -0.02951453]
[ 0.7932888 0.68590254 0.14450389]]
[[ 0.4447717 -0.28524947 -0.79486984]
[ 0.19044022 -0.9089187 -0.71793836]]]
Each index in t is looked up along dimension 0 of embedding_table:
[1, 2] yields [ embedding_table[1] , embedding_table[2] ]
[3, 0] yields [ embedding_table[3] , embedding_table[0] ]
So the output shape is t.shape + embedding_table.shape[1:], here (2, 2) + (3,) = (2, 2, 3).
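This shape rule can be checked with NumPy fancy indexing, which follows the same convention (a sketch using a fixed table with recognizable rows instead of the random values above):

```python
import numpy as np

# A 4x3 "embedding table" whose rows are easy to identify:
# row 0 is [0, 1, 2], row 1 is [3, 4, 5], and so on.
embedding_table = np.arange(12, dtype=np.float32).reshape(4, 3)
t = np.array([[1, 2], [3, 0]])

# Indexing with a 2-D array gathers whole rows along axis 0;
# the result has shape t.shape + embedding_table.shape[1:].
embedded = embedding_table[t]
print(embedded.shape)  # -> (2, 2, 3)
print(embedded[0, 0])  # -> [3. 4. 5.], i.e. embedding_table[1]
```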