PyTorch Study Notes (Week 6, Practice 2)

11. Recurrent Neural Networks (RNN & LSTM)

11.1 Representing Time Series

[seq_len, feature_len]: [sequence length, feature length (the dimension of each step's representation)]
Text data:
1. One-hot encoding:
the position assigned to a given word is set to 1, all other positions to 0
Drawback: sparse and high-dimensional
2. [words, word_vec]: each word is mapped to a dense embedding vector (see the sketch below)
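A minimal sketch of one-hot encoding with torch.nn.functional.one_hot (the toy vocabulary size and word indices below are made up for illustration):

import torch
import torch.nn.functional as F

vocab_size = 10                  # assumed toy vocabulary size
words = torch.tensor([0, 3, 7])  # a 3-word sentence as word indices

one_hot = F.one_hot(words, num_classes=vocab_size).float()
print(one_hot.shape)             # torch.Size([3, 10]) -> [seq_len, feature_len]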

Batch layouts:
[word num, b, word vec] (sequence-first)
[b, word num, word vec] (batch-first, as with batch_first=True)
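A minimal sketch of the two batch layouts using nn.Embedding (the vocabulary size and embedding dimension here are arbitrary choices for illustration):

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=1000, embedding_dim=100)  # assumed toy vocab/dim
batch = torch.randint(0, 1000, (4, 5))  # [b=4, word num=5] word indices

x = emb(batch)
print(x.shape)                   # torch.Size([4, 5, 100]) -> [b, word num, word vec]
print(x.permute(1, 0, 2).shape)  # torch.Size([5, 4, 100]) -> [word num, b, word vec]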

Encoding methods:
word2vec vs GloVe

from torchnlp.word_to_vector import GloVe  # the torchnlp package is provided by pytorch-nlp
vectors = GloVe()

vectors['hello']
-1.7494
0.6242
...
-0.6202
20.928
[torch.FloatTensor of size 100]

11.2 RNN

Key properties:
1. Weight sharing (the same weights are applied at every time step)
2. A persistent memory unit (carries context across time steps)
PyTorch implementation:
nn.RNN:
__init__
(input_size, hidden_size, num_layers)
input_size: dimension of each input word vector
hidden_size: size of the hidden state (the memory)
num_layers: number of stacked layers, defaults to 1
forward
out, ht = forward(x, h0)
x: [seq len, b, word vec]
h0/ht: [num layers, b, h dim]
out: [seq len, b, h dim]
Single-layer RNN
Note: the output feature size does not change across time steps; ht is the state of the memory at the last time step.
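A minimal shape check for a single-layer RNN (the dimensions here are arbitrary):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=100, hidden_size=20, num_layers=1)
x = torch.randn(10, 3, 100)  # [seq len=10, b=3, word vec=100]
out, ht = rnn(x)             # h0 defaults to zeros when omitted

print(out.shape)  # torch.Size([10, 3, 20]) -> [seq len, b, h dim]
print(ht.shape)   # torch.Size([1, 3, 20])  -> [num layers, b, h dim]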
2-layer RNN: shape check
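The same check for a 2-layer RNN; out keeps its shape (it exposes only the top layer), while ht gains one entry per layer:

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=100, hidden_size=20, num_layers=2)
x = torch.randn(10, 3, 100)  # [seq len, b, word vec]
out, ht = rnn(x)

print(out.shape)  # torch.Size([10, 3, 20]) -> outputs of the top layer only
print(ht.shape)   # torch.Size([2, 3, 20])  -> final hidden state of each layer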
nn.RNNCell:
__init__: takes input_size and hidden_size like nn.RNN, but a cell is a single layer, so there is no num_layers argument
forward
ht = rnncell(xt, ht_1)
xt: [b, word vec]
ht_1/ht: [b, h dim]
out = torch.stack([h1, h2, …, ht])
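A minimal sketch of unrolling nn.RNNCell by hand over a sequence (dimensions arbitrary):

import torch
import torch.nn as nn

cell = nn.RNNCell(input_size=100, hidden_size=20)
x = torch.randn(10, 3, 100)  # [seq len, b, word vec]
ht = torch.zeros(3, 20)      # initial hidden state, [b, h dim]

hs = []
for xt in x:                 # xt: [b, word vec] at one time step
    ht = cell(xt, ht)
    hs.append(ht)

out = torch.stack(hs)        # [seq len, b, h dim]
print(out.shape)             # torch.Size([10, 3, 20])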

11.3 Time Series Prediction in Practice

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from matplotlib import pyplot as plt

num_time_steps = 50
input_size = 1
hidden_size = 16
output_size = 1
lr = 0.01


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        self.rnn = nn.RNN(
            input_size=input_size,
            hidden_size=hidden_size,
            num_layers=1,
            batch_first=True,  # [b, seq, feature]
        )
        for p in self.rnn.parameters():
            nn.init.normal_(p, mean=0.0, std=0.001)

        self.linear = nn.Linear(hidden_size, output_size)

    def forward(self, x, hidden_prev):  # (self, x, h0)

        out, hidden_prev = self.rnn(x, hidden_prev)
        # [1, seq, h] => [seq, h]
        out = out.view(-1, hidden_size)
        out = self.linear(out)  # [seq,h] => [seq, 1]
        out = out.unsqueeze(dim=0)  # => [1, seq, 1]
        return out, hidden_prev


# Train
model = Net()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr)

hidden_prev = torch.zeros(1, 1, hidden_size)  # h0

for iter in range(6000):
    start = np.random.randint(3, size=1)[0]  # random integer in [0, 3)
    time_steps = np.linspace(start, start + 10, num_time_steps)
    data = np.sin(time_steps)
    data = data.reshape(num_time_steps, 1)
    x = torch.tensor(data[:-1]).float().view(1, num_time_steps - 1, 1)
    y = torch.tensor(data[1:]).float().view(1, num_time_steps - 1, 1)

    output, hidden_prev = model(x, hidden_prev)
    hidden_prev = hidden_prev.detach()

    loss = criterion(output, y)
    model.zero_grad()
    loss.backward()
    # for p in model.parameters():
    #     print(p.grad.norm())
    # torch.nn.utils.clip_grad_norm_(model.parameters(), 10)
    optimizer.step()

    if iter % 100 == 0:
        print("Iteration: {} loss {} ".format(iter, loss.item()))

start = np.random.randint(3, size=1)[0]
time_steps = np.linspace(start, start + 10, num_time_steps)
data = np.sin(time_steps)
data = data.reshape(num_time_steps, 1)
x = torch.tensor(data[:-1]).float().view(1, num_time_steps - 1, 1)
y = torch.tensor(data[1:]).float().view(1, num_time_steps - 1, 1)

predictions = []
inp = x[:, 0, :]  # seed with the first point, then feed predictions back in
for _ in range(x.shape[1]):
    inp = inp.view(1, 1, 1)
    (pred, hidden_prev) = model(inp, hidden_prev)
    inp = pred
    predictions.append(pred.detach().numpy().ravel()[0])

x = x.detach().numpy().ravel()
y = y.detach().numpy()
plt.scatter(time_steps[:-1], x, s=90)
plt.plot(time_steps[:-1], x)

plt.scatter(time_steps[1:], predictions)
plt.show()

Output:
Iteration: 0 loss 0.5240068435668945
Iteration: 100 loss 0.004781486000865698
Iteration: 200 loss 0.0025698889512568712
Iteration: 300 loss 0.0021712062880396843
Iteration: 400 loss 0.003106305142864585
Iteration: 500 loss 0.006951724644750357
Iteration: 600 loss 0.00876646488904953
Iteration: 700 loss 0.0003261358942836523
Iteration: 800 loss 0.001015920890495181
Iteration: 900 loss 0.003062265692278743
Iteration: 1000 loss 0.0043131341226398945
Iteration: 1100 loss 0.00014511161134578288
Iteration: 1200 loss 0.0009089858504012227
Iteration: 1300 loss 0.0009695018525235355
Iteration: 1400 loss 0.001020518015138805
Iteration: 1500 loss 0.0009882590966299176
Iteration: 1600 loss 0.0004311317461542785
Iteration: 1700 loss 0.0012930548982694745
Iteration: 1800 loss 0.0005156291299499571
Iteration: 1900 loss 0.001561652636155486
Iteration: 2000 loss 0.0007380764000117779
Iteration: 2100 loss 0.0012094884878024459
Iteration: 2200 loss 0.00036121331504546106
Iteration: 2300 loss 0.000719703733921051
Iteration: 2400 loss 9.609026892576367e-05
Iteration: 2500 loss 0.0009065204649232328
Iteration: 2600 loss 0.001319637056440115
Iteration: 2700 loss 0.0006897666025906801
Iteration: 2800 loss 0.00015256847837008536
Iteration: 2900 loss 0.00026130853802897036
Iteration: 3000 loss 8.58261773828417e-05
Iteration: 3100 loss 0.0008577909902669489
Iteration: 3200 loss 0.00030213454738259315
Iteration: 3300 loss 0.00019907366367988288
Iteration: 3400 loss 0.0004966565757058561
Iteration: 3500 loss 0.0009238572674803436
Iteration: 3600 loss 0.00027086795307695866
Iteration: 3700 loss 0.0005339682684279978
Iteration: 3800 loss 0.00024678310728631914
Iteration: 3900 loss 0.0007551009184680879
Iteration: 4000 loss 0.00019022168999072164
Iteration: 4100 loss 0.00015904009342193604
Iteration: 4200 loss 0.00032481923699378967
Iteration: 4300 loss 2.64111440628767e-05
Iteration: 44… (log truncated)
