The Embedding object: embedding(input) looks up the word vector for each index in input, i.e. it reads row input of the weight matrix.

Question: at first it was unclear what embedding(input) means, since the shapes of weight and input cannot be matrix-multiplied. The answer is that an embedding performs no multiplication at all: each integer in input is used as a row index into weight.
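To make the lookup explicit, here is a minimal sketch (the weight values are illustrative, set by hand): embedding(idx) returns exactly the rows of the weight matrix selected by idx, no matrix product involved.

```python
import torch
from torch import nn

# 4-word vocabulary, 5-dimensional vectors
embedding = nn.Embedding(4, 5)
with torch.no_grad():
    embedding.weight.copy_(torch.arange(20, dtype=torch.float).view(4, 5))

idx = torch.tensor([3, 2, 1])   # three word indices
out = embedding(idx)            # shape (3, 5): one weight row per index

# embedding(idx) is exactly row selection on the weight matrix
assert torch.equal(out, embedding.weight[idx])
print(out)
```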

from torch.autograd import Variable as V
import torch as t
from torch import nn



t.manual_seed(1000)
input=V(t.randn(2,3,4))

# An LSTMCell is always a single layer; it processes one time step per call
lstm=nn.LSTMCell(4,3)
hx=V(t.randn(3,3))
cx=V(t.randn(3,3))
out=[]
print(input)
for i_ in input:
    hx,cx=lstm(i_,(hx,cx))
    print("i_\n",i_)
    out.append(hx)
print(t.stack(out))

# Word vectors are widely used in NLP; PyTorch provides an Embedding layer for them

# A vocabulary of 4 words, each represented by a 5-dimensional vector
embedding=nn.Embedding(4,5)
print("embedding=\n",embedding)
print(embedding.weight)
# The embedding can be initialized with pretrained word vectors
embedding.weight.data=t.arange(0,20).view(4,5)
print("embedding.weight\n",embedding.weight)

print(embedding.weight.data)
with t.no_grad():
    input=V(t.arange(3,0,-1)).long()
    print(input)
    print(embedding.weight.data)
    output = embedding(input)
    print(output)

# print("input=",input[:])


print(output)
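The per-time-step loop over LSTMCell above can also be written with nn.LSTM, which runs the loop internally and, unlike LSTMCell, accepts a num_layers argument. A rough sketch with the same sizes as the LSTMCell example (input size 4, hidden size 3, sequence length 2, batch 3):

```python
import torch
from torch import nn

torch.manual_seed(1000)
inp = torch.randn(2, 3, 4)   # (seq_len, batch, input_size)

# nn.LSTM iterates over the sequence itself; num_layers may be > 1
lstm = nn.LSTM(input_size=4, hidden_size=3, num_layers=1)
h0 = torch.randn(1, 3, 3)    # (num_layers, batch, hidden_size)
c0 = torch.randn(1, 3, 3)
out, (hn, cn) = lstm(inp, (h0, c0))
print(out.shape)             # torch.Size([2, 3, 3])
```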

 

/home/wangbin/anaconda3/envs/deep_learning/bin/python3.7 /home/wangbin/anaconda3/envs/deep_learning/project/main.py
tensor([[[-0.5306, -1.1300, -0.6734, -0.7669],
         [-0.7029,  0.9896, -0.4482,  0.8927],
         [-0.6043,  1.0726,  1.0481,  1.0527]],

        [[-0.6424, -1.2234, -1.0794, -0.6037],
         [-0.7926, -0.1414, -1.0225, -0.0482],
         [ 0.6610, -0.8908,  1.4793, -0.3934]]])
i_
 tensor([[-0.5306, -1.1300, -0.6734, -0.7669],
        [-0.7029,  0.9896, -0.4482,  0.8927],
        [-0.6043,  1.0726,  1.0481,  1.0527]])
i_
 tensor([[-0.6424, -1.2234, -1.0794, -0.6037],
        [-0.7926, -0.1414, -1.0225, -0.0482],
        [ 0.6610, -0.8908,  1.4793, -0.3934]])
tensor([[[-0.3610, -0.1643,  0.1631],
         [-0.0613, -0.4937, -0.1642],
         [ 0.5080, -0.4175,  0.2502]],

        [[-0.0703, -0.0393, -0.0429],
         [ 0.2085, -0.3005, -0.2686],
         [ 0.1482, -0.4728,  0.1425]]], grad_fn=<StackBackward>)
embedding=
 Embedding(4, 5)
Parameter containing:
tensor([[-0.2297, -0.7947,  0.1204,  0.6523, -0.2653],
        [-0.7661, -0.2536,  0.2986,  0.2507, -0.8575],
        [-0.2504, -0.7584,  1.5276, -0.2307,  2.2975],
        [-1.5437,  1.2492, -0.5840, -0.8269,  0.0587]], requires_grad=True)
embedding.weight
 Parameter containing:
tensor([[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14],
        [15, 16, 17, 18, 19]], requires_grad=True)
tensor([[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14],
        [15, 16, 17, 18, 19]])
tensor([3, 2, 1])
tensor([[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14],
        [15, 16, 17, 18, 19]])
tensor([[15, 16, 17, 18, 19],
        [10, 11, 12, 13, 14],
        [ 5,  6,  7,  8,  9]])
tensor([[15, 16, 17, 18, 19],
        [10, 11, 12, 13, 14],
        [ 5,  6,  7,  8,  9]])

Process finished with exit code 0
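On initializing from pretrained vectors: assigning to embedding.weight.data works, as shown above, but nn.Embedding.from_pretrained builds the layer from an existing matrix in one step. Note its default freeze=True keeps the vectors fixed during training; a small sketch with illustrative values:

```python
import torch
from torch import nn

# a stand-in for a real pretrained word-vector matrix (4 words, dim 5)
pretrained = torch.arange(20, dtype=torch.float).view(4, 5)

# freeze=False keeps the vectors trainable; the default freeze=True fixes them
emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
out = emb(torch.tensor([3, 2, 1]))
print(out)
```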

 
