NNDL Assignment 9: RNN - SRN

Contents

1. Implementing an SRN

(1) Using NumPy

(2) Adding the tanh activation function on top of (1)

(3) Implementation with nn.RNNCell

(4) Implementation with nn.RNN

2. Implementing "sequence to sequence"

3. A simple "encoder-decoder" implementation

4. A brief summary of nn.RNNCell and nn.RNN

5. Thoughts on "sequence" and "sequence to sequence"

6. Summary of this week's lectures and homework, with reflections


1. Implementing an SRN

(1) Using NumPy

state_t stores the hidden-layer output of each step; it is fed back at the next step, combined with the external input, and passed into the hidden layer.

Code:

import numpy as np

inputs = np.array([[1., 1.],
                   [1., 1.],
                   [2., 2.]])
print('inputs is ', inputs)

state_t = np.zeros(2, )  # hidden state, carried over to the next time step
print('state_t is ', state_t)

w1, w2, w3, w4, w5, w6, w7, w8 = 1., 1., 1., 1., 1., 1., 1., 1.  # input->hidden (w1-w4) and hidden->output (w5-w8) weights
U1, U2, U3, U4 = 1., 1., 1., 1.  # recurrent hidden->hidden weights
print('--------------------------------------')
for input_t in inputs:
    print('inputs is ', input_t)
    print('state_t is ', state_t)
    in_h1 = np.dot([w1, w3], input_t) + np.dot([U2, U4], state_t)
    in_h2 = np.dot([w2, w4], input_t) + np.dot([U1, U3], state_t)
    state_t = in_h1, in_h2  # store the hidden outputs for the next step
    output_y1 = np.dot([w5, w7], [in_h1, in_h2])
    output_y2 = np.dot([w6, w8], [in_h1, in_h2])
    print('output_y is ', output_y1, output_y2)
    print('---------------')

Result:

 

(2) Adding the tanh activation function on top of (1)

Apply an activation function to the hidden-layer output at each step, which changes both the y values and the stored h values.

Code:

 

import numpy as np

inputs = np.array([[1., 1.],
                   [1., 1.],
                   [2., 2.]])
print('inputs is ', inputs)

state_t = np.zeros(2, )
print('state_t is ', state_t)

w1, w2, w3, w4, w5, w6, w7, w8 = 1., 1., 1., 1., 1., 1., 1., 1.
U1, U2, U3, U4 = 1., 1., 1., 1.
print('--------------------------------------')
for input_t in inputs:
    print('inputs is ', input_t)
    print('state_t is ', state_t)
    in_h1 = np.tanh(np.dot([w1, w3], input_t) + np.dot([U2, U4], state_t))
    in_h2 = np.tanh(np.dot([w2, w4], input_t) + np.dot([U1, U3], state_t))
    state_t = in_h1, in_h2
    output_y1 = np.dot([w5, w7], [in_h1, in_h2])
    output_y2 = np.dot([w6, w8], [in_h1, in_h2])
    print('output_y is ', output_y1, output_y2)
    print('---------------')

Result:

 

(3) Implementation with nn.RNNCell

Code:

import torch

batch_size = 1
seq_len = 3  # sequence length
input_size = 2  # input feature dimension
hidden_size = 2  # hidden-layer dimension
output_size = 2  # output-layer dimension

# RNNCell
cell = torch.nn.RNNCell(input_size=input_size, hidden_size=hidden_size)
# initialize the parameters, see https://zhuanlan.zhihu.com/p/342012463
for name, param in cell.named_parameters():
    if name.startswith("weight"):
        torch.nn.init.ones_(param)
    else:
        torch.nn.init.zeros_(param)
# linear output layer
liner = torch.nn.Linear(hidden_size, output_size)
liner.weight.data = torch.Tensor([[1, 1], [1, 1]])
liner.bias.data = torch.Tensor([0.0, 0.0])  # one bias value per output unit

seq = torch.Tensor([[[1, 1]],
                    [[1, 1]],
                    [[2, 2]]])
hidden = torch.zeros(batch_size, hidden_size)
output = torch.zeros(batch_size, output_size)

for idx, input in enumerate(seq):
    print('=' * 20, idx, '=' * 20)

    print('Input :', input)
    print('hidden :', hidden)

    hidden = cell(input, hidden)
    output = liner(hidden)
    print('output :', output)

Result:

(4) Implementation with nn.RNN

Code:

import torch

batch_size = 1
seq_len = 3
input_size = 2
hidden_size = 2
num_layers = 1
output_size = 2

cell = torch.nn.RNN(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers)
for name, param in cell.named_parameters():  # initialize the parameters
    if name.startswith("weight"):
        torch.nn.init.ones_(param)
    else:
        torch.nn.init.zeros_(param)

# linear output layer
liner = torch.nn.Linear(hidden_size, output_size)
liner.weight.data = torch.Tensor([[1, 1], [1, 1]])
liner.bias.data = torch.Tensor([0.0, 0.0])  # one bias value per output unit

inputs = torch.Tensor([[[1, 1]],
                       [[1, 1]],
                       [[2, 2]]])
hidden = torch.zeros(num_layers, batch_size, hidden_size)
out, hidden = cell(inputs, hidden)

print('Input :', inputs[0])
print('hidden:', 0, 0)  # the initial hidden state is all zeros
print('Output:', liner(out[0]))
print('--------------------------------------')
print('Input :', inputs[1])
print('hidden:', out[0])
print('Output:', liner(out[1]))
print('--------------------------------------')
print('Input :', inputs[2])
print('hidden:', out[1])
print('Output:', liner(out[2]))

 

2. Implementing "sequence to sequence"

(1) Shapes of nn.RNN outputs

Code:

import torch

batch_size=1
seq_len=3
input_size=4
hidden_size=2
num_layers=1

cell=torch.nn.RNN(input_size=input_size,hidden_size=hidden_size,num_layers=num_layers)

inputs=torch.randn(seq_len,batch_size,input_size)
hidden=torch.zeros(num_layers,batch_size,hidden_size)

output,hidden=cell(inputs,hidden)

print("output size:",output.shape)
print("output",output)
print("hidden size:",hidden.shape)
print("hidden:",hidden)

Result:

 

 

(2) hello -> ohlol

1. RNNCell: the loop over the sequence must be written manually

Code:

import torch

input_size = 4
hidden_size = 4
batch_size = 1

index2char = ['e', 'h', 'l', 'o']
x_data = [1, 0, 2, 2, 3]
y_data = [3, 1, 2, 3, 2]
# one-hot lookup table
one_hot_lookup = [[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]
x_one_hot = [one_hot_lookup[x] for x in x_data]  # convert to one-hot vectors, shape (seq, batch, input)
print(x_one_hot)
dataset = torch.Tensor(x_one_hot).view(-1, batch_size,
                                       input_size)  # -1 lets seq_len be inferred automatically
labels = torch.LongTensor(y_data).view(-1, 1)  # one class label per step, shape (seqLen, 1); CrossEntropyLoss does not need one-hot labels, it handles that internally

class Model(torch.nn.Module):
    def __init__(self, input_size, hidden_size, batch_size):
        super(Model, self).__init__()
        self.batch_size = batch_size
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.rnncell = torch.nn.RNNCell(input_size=self.input_size, hidden_size=hidden_size)

    def forward(self, input, hidden):
        hidden = self.rnncell(input, hidden)
        return hidden

    def init_hidden(self):  # batch_size is only needed here, to initialize the hidden state
        return torch.zeros(self.batch_size, self.hidden_size)


net = Model(input_size, hidden_size, batch_size)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.1)


for epoch in range(15):
    loss = 0
    optimizer.zero_grad()
    hidden = net.init_hidden()  # initialize h0
    print('Predicted string:', end='')
    for input, label in zip(dataset, labels):
        hidden = net(input, hidden)
        loss += criterion(hidden, label)  # per-step loss; accumulate over the whole sequence before backprop
        _, idx = hidden.max(dim=1)  # index of the largest activation, i.e. the predicted character
        print(index2char[idx.item()], end='')
    loss.backward()
    optimizer.step()
    print(',Epoch [%d / 15] loss:%.4f' % (epoch + 1, loss.item()))

Result:

 

2. RNN: no manual loop over the sequence is needed

Code:

import torch

seq_len = 5
input_size = 4
hidden_size = 4
batch_size = 1

index2char = ['e', 'h', 'l', 'o']
x_data = [1, 0, 2, 2, 3]
y_data = [3, 1, 2, 3, 2]
one_hot_lookup = [[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]
x_one_hot = [one_hot_lookup[x] for x in x_data]
print(x_one_hot)
inputs = torch.Tensor(x_one_hot).view(-1, batch_size, input_size)
labels = torch.LongTensor(y_data)


class Model(torch.nn.Module):
    def __init__(self, input_size, hidden_size, batch_size, num_layers=1):
        super(Model, self).__init__()
        self.batch_size = batch_size
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers  # number of stacked RNN layers
        self.rnn = torch.nn.RNN(input_size=self.input_size, hidden_size=self.hidden_size, num_layers=self.num_layers)

    def forward(self, input):
        hidden = torch.zeros(self.num_layers,  # initial hidden state covers every layer
                             self.batch_size,
                             self.hidden_size)
        out, _ = self.rnn(input, hidden)
        return out.view(-1, self.hidden_size)  # (seq_len * batch_size, hidden_size)


net = Model(input_size, hidden_size, batch_size)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.5)

for epoch in range(15):
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    _, idx = outputs.max(dim=1)
    idx = idx.data.numpy()
    print('Predicted string: ', ''.join([index2char[x] for x in idx]), end='')
    print(',Epoch [%d / 15] loss:%.4f' % (epoch + 1, loss.item()))

Result:

 

 

3. A simple "encoder-decoder" implementation

import torch
import numpy as np
import torch.nn as nn
import torch.utils.data as Data

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# S: Symbol that marks the start of the decoder input
# E: Symbol that marks the end of the decoder output
# ?: Symbol used to pad a sequence when the current word is shorter than n_step

letter = [c for c in 'SE?abcdefghijklmnopqrstuvwxyz']
letter2idx = {n: i for i, n in enumerate(letter)}

seq_data = [['man', 'women'], ['black', 'white'], ['king', 'queen'], ['girl', 'boy'], ['up', 'down'], ['high', 'low']]

# Seq2Seq Parameter
n_step = max([max(len(i), len(j)) for i, j in seq_data])  # max_len(=5)
n_hidden = 128
n_class = len(letter2idx)  # number of classes for the classification problem
batch_size = 3


def make_data(seq_data):
    enc_input_all, dec_input_all, dec_output_all = [], [], []

    for seq in seq_data:
        for i in range(2):
            seq[i] = seq[i] + '?' * (n_step - len(seq[i]))  # 'man??', 'women'

        enc_input = [letter2idx[n] for n in (seq[0] + 'E')]  # ['m', 'a', 'n', '?', '?', 'E']
        dec_input = [letter2idx[n] for n in ('S' + seq[1])]  # ['S', 'w', 'o', 'm', 'e', 'n']
        dec_output = [letter2idx[n] for n in (seq[1] + 'E')]  # ['w', 'o', 'm', 'e', 'n', 'E']

        enc_input_all.append(np.eye(n_class)[enc_input])
        dec_input_all.append(np.eye(n_class)[dec_input])
        dec_output_all.append(dec_output)  # not one-hot

    # make tensor
    return torch.Tensor(enc_input_all), torch.Tensor(dec_input_all), torch.LongTensor(dec_output_all)


'''
enc_input_all: [6, n_step+1 (because of 'E'), n_class]
dec_input_all: [6, n_step+1 (because of 'S'), n_class]
dec_output_all: [6, n_step+1 (because of 'E')]
'''
enc_input_all, dec_input_all, dec_output_all = make_data(seq_data)


class TranslateDataSet(Data.Dataset):
    def __init__(self, enc_input_all, dec_input_all, dec_output_all):
        self.enc_input_all = enc_input_all
        self.dec_input_all = dec_input_all
        self.dec_output_all = dec_output_all

    def __len__(self):  # return dataset size
        return len(self.enc_input_all)

    def __getitem__(self, idx):
        return self.enc_input_all[idx], self.dec_input_all[idx], self.dec_output_all[idx]


loader = Data.DataLoader(TranslateDataSet(enc_input_all, dec_input_all, dec_output_all), batch_size, True)


# Model
class Seq2Seq(nn.Module):
    def __init__(self):
        super(Seq2Seq, self).__init__()
        self.encoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)  # encoder; note dropout has no effect with a single layer
        self.decoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)  # decoder
        self.fc = nn.Linear(n_hidden, n_class)

    def forward(self, enc_input, enc_hidden, dec_input):
        # enc_input(=input_batch): [batch_size, n_step+1, n_class]
        # dec_input(=output_batch): [batch_size, n_step+1, n_class]
        enc_input = enc_input.transpose(0, 1)  # enc_input: [n_step+1, batch_size, n_class]
        dec_input = dec_input.transpose(0, 1)  # dec_input: [n_step+1, batch_size, n_class]

        # h_t : [num_layers(=1) * num_directions(=1), batch_size, n_hidden]
        _, h_t = self.encoder(enc_input, enc_hidden)
        # outputs : [n_step+1, batch_size, num_directions(=1) * n_hidden(=128)]
        outputs, _ = self.decoder(dec_input, h_t)

        model = self.fc(outputs)  # model : [n_step+1, batch_size, n_class]
        return model


model = Seq2Seq().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5000):
    for enc_input_batch, dec_input_batch, dec_output_batch in loader:
        # make hidden shape [num_layers * num_directions, batch_size, n_hidden]
        h_0 = torch.zeros(1, batch_size, n_hidden).to(device)

        (enc_input_batch, dec_input_batch, dec_output_batch) = (
            enc_input_batch.to(device), dec_input_batch.to(device), dec_output_batch.to(device))
        # enc_input_batch : [batch_size, n_step+1, n_class]
        # dec_input_batch : [batch_size, n_step+1, n_class]
        # dec_output_batch : [batch_size, n_step+1], not one-hot
        pred = model(enc_input_batch, h_0, dec_input_batch)
        # pred : [n_step+1, batch_size, n_class]
        pred = pred.transpose(0, 1)  # [batch_size, n_step+1(=6), n_class]
        loss = 0
        for i in range(len(dec_output_batch)):
            # pred[i] : [n_step+1, n_class]
            # dec_output_batch[i] : [n_step+1]
            loss += criterion(pred[i], dec_output_batch[i])
        if (epoch + 1) % 1000 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


# Test
def translate(word):
    enc_input, dec_input, _ = make_data([[word, '?' * n_step]])
    enc_input, dec_input = enc_input.to(device), dec_input.to(device)
    # make hidden shape [num_layers * num_directions, batch_size, n_hidden]
    hidden = torch.zeros(1, 1, n_hidden).to(device)
    output = model(enc_input, hidden, dec_input)
    # output : [n_step+1, batch_size, n_class]

    predict = output.data.max(2, keepdim=True)[1]  # select n_class dimension
    decoded = [letter[i] for i in predict]
    translated = ''.join(decoded[:decoded.index('E')])

    return translated.replace('?', '')


print('test')
print('man ->', translate('man'))
print('mans ->', translate('mans'))
print('king ->', translate('king'))
print('black ->', translate('black'))
print('up ->', translate('up'))

Result:

 

4. A brief summary of nn.RNNCell and nn.RNN

1. nn.RNNCell processes the sequence one step at a time, splitting it into separate time steps. Compared with nn.RNN this makes data handling more flexible, but the computation is more tedious. RNNCell is a single computation unit and has no notion of layers.

RNNCell() only accepts the input of a single time step and must be passed the hidden state explicitly; a minimal sketch follows the parameter list below.

Parameters

  • input_size – the number of expected features in the input x
  • hidden_size – the number of features in the hidden state h
  • bias – if False, the layer does not use the bias weights b_ih and b_hh. Default: True
  • nonlinearity – the non-linearity to use. Can be 'tanh' or 'relu'. Default: 'tanh'
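
A minimal sketch, using toy dimensions chosen only for illustration (not the ones from the assignment): nn.RNNCell sees one time step per call, so the loop over the sequence has to be written by hand.

import torch

cell = torch.nn.RNNCell(input_size=4, hidden_size=3)  # toy sizes, illustrative only

x = torch.randn(5, 1, 4)   # (seq_len, batch, input_size)
h = torch.zeros(1, 3)      # (batch, hidden_size), initial hidden state

for x_t in x:              # we iterate over the time steps ourselves
    h = cell(x_t, h)       # one call = one time step
print(h.shape)             # torch.Size([1, 3]) -- hidden state after the last step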

 

2. nn.RNN

nn.RNN is the recurrent layer itself. It has the notion of layers, so you can set the number of layers, the hidden-state dimension, the activation function and other parameters to build RNN models of different sizes and capabilities.

When constructing nn.RNN you pass the feature dimension and the hidden dimension; how many time steps there are (seq_len) and how many samples are fed per call (batch) can be decided dynamically at run time. RNN() accepts an entire sequence as input; the hidden state defaults to all zeros, or you can declare and pass your own. A minimal sketch follows the parameter list below.

Parameters

  • input_size – the number of expected features in the input x
  • hidden_size – the number of features in the hidden state h
  • num_layers – number of recurrent layers. E.g. setting num_layers=2 stacks two RNNs to form a stacked RNN, where the second RNN takes the outputs of the first and computes the final result. Default: 1
  • nonlinearity – the non-linearity to use. Can be 'tanh' or 'relu'. Default: 'tanh'
  • bias – if False, the layer does not use the bias weights b_ih and b_hh. Default: True
  • batch_first – if True, the input and output tensors are provided as (batch, seq, feature). Default: False
  • dropout – if non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last, with dropout probability equal to dropout. Default: 0
  • bidirectional – if True, becomes a bidirectional RNN. Default: False

(nn.RNNCell() requires you to loop over the sequence yourself to process the data, while nn.RNN consumes the whole sequence at once and lets you set the number of layers directly.)
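
A minimal sketch with the same toy dimensions as the RNNCell sketch above (illustrative only): nn.RNN consumes the whole sequence in one call and returns both the per-step outputs and the final hidden state of every layer.

import torch

rnn = torch.nn.RNN(input_size=4, hidden_size=3, num_layers=2)  # toy sizes, illustrative only

x = torch.randn(5, 1, 4)        # (seq_len, batch, input_size)
h0 = torch.zeros(2, 1, 3)       # (num_layers, batch, hidden_size); may be omitted (defaults to zeros)

out, hn = rnn(x, h0)            # one call processes the whole sequence
print(out.shape)                # torch.Size([5, 1, 3]) -- last layer's hidden state at every time step
print(hn.shape)                 # torch.Size([2, 1, 3]) -- final hidden state of every layer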

 

5. Thoughts on "sequence" and "sequence to sequence"

Sequence:

A sequence is an abstraction over a series of data points that are related in order. Most sequence data have a temporal structure, and the meaning of each data point often depends on the data before or after it, for example natural language or time series.

Sequence to sequence:

Seq2Seq for short, it is a model that converts one sequence into another desired sequence, and it consists of an encoder and a decoder. The encoder compresses the input into a form that is easy to pass along, a fixed-length vector, and the decoder then expands that vector back into the corresponding output sequence; the decoder generally has the same structure as the encoder, i.e. it is also an RNN-type network. Sequence-to-sequence models are widely used in machine translation, text summarization, dialogue systems and other tasks.
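
A minimal sketch of that hand-off, with toy dimensions chosen only for illustration (the full training example is in section 3): the encoder's final hidden state is the fixed-length summary that initializes the decoder.

import torch
import torch.nn as nn

n_class, n_hidden = 8, 16                  # toy sizes, illustrative only

encoder = nn.RNN(input_size=n_class, hidden_size=n_hidden)
decoder = nn.RNN(input_size=n_class, hidden_size=n_hidden)
fc = nn.Linear(n_hidden, n_class)

enc_input = torch.randn(5, 1, n_class)     # (src_len, batch, n_class)
dec_input = torch.randn(6, 1, n_class)     # (tgt_len, batch, n_class)

_, h_t = encoder(enc_input)                # h_t is the fixed-length summary of the source sequence
outputs, _ = decoder(dec_input, h_t)       # the summary initializes the decoder's hidden state
logits = fc(outputs)                       # (tgt_len, batch, n_class) scores over the output alphabet
print(logits.shape)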

6. Summary of this week's lectures and homework, with reflections

I learned the basic recurrent neural network model. Its output layer is a fully connected layer in which every node is connected to every node of the hidden layer, and the hidden layer is the recurrent layer. In a plain fully connected network the hidden-layer value depends only on the input x, whereas in a recurrent network it depends not only on the current input x but also on the previous hidden-layer value h, so the hidden-layer output must be stored at every step.
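
In formula form (matching the tanh version of the NumPy example in part 1, which omits the bias terms; W, U and V denote the input-to-hidden, hidden-to-hidden and hidden-to-output weight matrices):

h_t = tanh(W · x_t + U · h_{t-1})
y_t = V · h_t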

nn.RNNCell and nn.RNN are both PyTorch modules for building recurrent neural networks. The main difference is that nn.RNNCell requires looping over the sequence one step at a time, while calling nn.RNN processes the whole sequence at once and lets you set the number of layers directly; an nn.RNN can be seen as a multi-layer RNN built out of multiple RNNCells.

Finally, through the sequence-to-sequence and encoder-decoder examples, I picked up some basics of natural language processing: recurrent network models can be used to encode and decode, and thereby recognize, different kinds of sequence data.
