[Learning the framework from official examples: TensorFlow/Keras] Digit addition with seq2seq

Keras official example link
TensorFlow official example link
Paddle official example link
PyTorch official example link

Note: This series is only meant to help you quickly understand and learn the frameworks so that you can use them independently for deep learning work; please study the theory on your own. Each framework's official classic examples are very well written and well worth studying; once you fully understand an official example, modifying it is enough to solve most common related tasks.

Summary: [Learning the framework from official examples: Keras] Digit addition with seq2seq, implementing digit addition with an LSTM.


I Setup

Import the required libraries and define the hyperparameters.

from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

# Parameters for the model and dataset.
TRAINING_SIZE = 50000
DIGITS = 3
REVERSE = True

# Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of
# int is DIGITS.
MAXLEN = DIGITS + 1 + DIGITS

TRAINING_SIZE sets the amount of training data to 50,000 examples.
DIGITS limits each operand to at most 3 digits.
REVERSE reverses each input sequence (a trick discussed below).
MAXLEN, the maximum sequence length, is 3 + 1 + 3 = 7: when both operands are 3-digit numbers, len('abc+abc') = 7; shorter inputs are padded to length 7 with spaces.
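A minimal sketch (not part of the official code) of how a query is padded to MAXLEN:

DIGITS = 3
MAXLEN = DIGITS + 1 + DIGITS  # 7

q = "13+25"
query = q + " " * (MAXLEN - len(q))  # pad with trailing spaces up to length 7
print(repr(query))  # '13+25  '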

II Generate the data

Generate the training data.

class CharacterTable:
    """Given a set of characters:
    + Encode them to a one-hot integer representation
    + Decode the one-hot or integer representation to their character output
    + Decode a vector of probabilities to their character output
    """

    def __init__(self, chars):
        """Initialize character table.
        # Arguments
            chars: Characters that can appear in the input.
        """
        self.chars = sorted(set(chars))
        self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
        self.indices_char = dict((i, c) for i, c in enumerate(self.chars))

    def encode(self, C, num_rows):
        """One-hot encode given string C.
        # Arguments
            C: string, to be encoded.
            num_rows: Number of rows in the returned one-hot encoding. This is
                used to keep the # of rows for each data the same.
        """
        x = np.zeros((num_rows, len(self.chars)))
        for i, c in enumerate(C):
            x[i, self.char_indices[c]] = 1
        return x

    def decode(self, x, calc_argmax=True):
        """Decode the given vector or 2D array to their character output.
        # Arguments
            x: A vector or a 2D array of probabilities or one-hot representations;
                or a vector of character indices (used with `calc_argmax=False`).
            calc_argmax: Whether to find the character index with maximum
                probability, defaults to `True`.
        """
        if calc_argmax:
            # Take the index of the most probable character from each
            # distribution over the vocabulary.
            x = x.argmax(axis=-1)
        return "".join(self.indices_char[i] for i in x)

This defines the encoding (encode) and decoding (decode) of characters.

The encoding is one-hot. For example: our vocabulary is the 12 characters of '0123456789+ ' (don't forget the plus sign and the space used to pad to the maximum length). Since CharacterTable sorts the characters, the column order is actually space, '+', then the digits 0-9.
  An input such as '13+25' is encoded as follows:

Vocabulary: ' ' + 0 1 2 3 4 5 6 7 8 9

    0 0 0 1 0 0 0 0 0 0 0 0   ('1')
    0 0 0 0 0 1 0 0 0 0 0 0   ('3')
    0 1 0 0 0 0 0 0 0 0 0 0   ('+')
    0 0 0 0 1 0 0 0 0 0 0 0   ('2')
    0 0 0 0 0 0 0 1 0 0 0 0   ('5')

Decoding uses np.argmax to pick, from each probability distribution over the vocabulary, the index of the most probable character.

Note on np.argmax(axis=-1): it returns the index of the maximum along the last axis. For a 3 x 4 x 5 array, np.argmax(axis=-1) yields a 3 x 4 result: for each position along axes 0 and 1, the index of the maximum along axis 2.
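A small demonstration (illustrative, not from the original example) of argmax over the last axis and of round-tripping a string through CharacterTable:

import numpy as np

# argmax over the last axis of a 3 x 4 x 5 array gives a 3 x 4 result
probs = np.random.rand(3, 4, 5)
print(probs.argmax(axis=-1).shape)  # (3, 4)

# Round-trip a string through the CharacterTable defined above
ctable = CharacterTable("0123456789+ ")
x = ctable.encode("13+25", num_rows=7)  # shape (7, 12); the 2 padding rows stay all-zero
print(ctable.decode(x))                 # '13+25  ' (all-zero rows decode to the space character)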

# All the numbers, plus sign and space for padding.
chars = "0123456789+ "
ctable = CharacterTable(chars)

questions = []
expected = []
seen = set()
print("Generating data...")
while len(questions) < TRAINING_SIZE:
    # f() randomly generates a number with 1 to DIGITS digits; int() turns a
    # string like "001" into 1.
    f = lambda: int(
        "".join(
            np.random.choice(list("0123456789"))
            # np.random.randint(1, DIGITS + 1) picks a length of 1, 2, or 3
            for i in range(np.random.randint(1, DIGITS + 1))
        )
    )
    # Randomly generate the two 1-to-3-digit operands.
    a, b = f(), f()
    # Skip any addition questions we've already seen
    # Also skip any such that x+Y == Y+x (hence the sorting).
    key = tuple(sorted((a, b)))
    if key in seen:
        continue
    seen.add(key)
    # Pad the data with spaces such that it is always MAXLEN.
    q = "{}+{}".format(a, b)
    query = q + " " * (MAXLEN - len(q))
    ans = str(a + b)
    # Answers can be of maximum size DIGITS + 1.
    ans += " " * (DIGITS + 1 - len(ans))
    if REVERSE:
        # Reverse the query, e.g., '12+345  ' becomes '  543+21'. (Note the
        # space used for padding.)
        query = query[::-1]
    questions.append(query)
    expected.append(ans)
print("Total questions:", len(questions))

The generated training data looks like this (note: for visibility, _ stands for a padding space and | separates examples):
questions: '_543+21' | '____1+1'
expected:  '357_' | '2___'

Note the trick here: REVERSE reverses the query in each question but leaves ans unchanged; ans is generated and appended to the expected list before the REVERSE branch runs. That is, the answer for '_543+21' is '357_'. At prediction time you only need to reverse the input again, input[::-1], to print the human-readable '12+345_', while the model's input remains '_543+21'.
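A quick check of this trick (an illustrative snippet, assuming MAXLEN and DIGITS from above):

q = "12+345"
query = (q + " " * (MAXLEN - len(q)))[::-1]  # model input: ' 543+21'
ans = str(12 + 345).ljust(DIGITS + 1)        # target is still '357 '
print(repr(query), repr(ans))
print(repr(query[::-1]))  # reversing again restores the readable '12+345 '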

Theoretically, sequence order inversion introduces shorter-term dependencies between source and target for this problem.

See the paper: Sequence to Sequence Learning with Neural Networks (Sutskever et al., 2014).

III Vectorize the data

print("Vectorization...")
x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=bool)
y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=bool)
for i, sentence in enumerate(questions):
    x[i] = ctable.encode(sentence, MAXLEN)
for i, sentence in enumerate(expected):
    y[i] = ctable.encode(sentence, DIGITS + 1)

# Shuffle (x, y) in unison as the later parts of x will almost all be larger
# digits.
indices = np.arange(len(y))
np.random.shuffle(indices)
x = x[indices]
y = y[indices]

# Explicitly set apart 10% for validation data that we never train over.
split_at = len(x) - len(x) // 10
(x_train, x_val) = x[:split_at], x[split_at:]
(y_train, y_val) = y[:split_at], y[split_at:]

print("Training Data:")
print(x_train.shape)
print(y_train.shape)

print("Validation Data:")
print(x_val.shape)
print(y_val.shape)

The digit encoding was explained above, so it is not repeated here.
Note that the official Keras example does not represent the one-hot vectors with 0/1 integers but with True/False booleans; a brief illustration follows.
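A tiny sketch of why boolean one-hot arrays work here: True/False cast to 1.0/0.0 in arithmetic, so categorical_crossentropy treats them as ordinary one-hot labels:

import numpy as np

onehot = np.zeros((2, 3), dtype=bool)
onehot[0, 1] = True
onehot[1, 2] = True
print(onehot.astype(np.float32))
# [[0. 1. 0.]
#  [0. 0. 1.]]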

IV Build the model

print("Build model...")
num_layers = 1  # Try to add more LSTM layers!

model = keras.Sequential()
# "Encode" the input sequence using a LSTM, producing an output of size 128.
# Note: In a situation where your input sequences have a variable length,
# use input_shape=(None, num_feature).
model.add(layers.LSTM(128, input_shape=(MAXLEN, len(chars))))
# As the decoder RNN's input, repeatedly provide the last output of the
# encoder for each time step. Repeat 'DIGITS + 1' times, as that's the
# maximum length of the output, e.g., when DIGITS=3, the max output is
# 999+999=1998.
model.add(layers.RepeatVector(DIGITS + 1))
# The decoder RNN could be multiple layers stacked or a single layer.
for _ in range(num_layers):
    # By setting return_sequences to True, return not only the last output but
    # all the outputs so far in the form of (num_samples, timesteps,
    # output_dim). This is necessary as TimeDistributed in the below expects
    # the first dimension to be the timesteps.
    model.add(layers.LSTM(128, return_sequences=True))

# Apply a dense layer to every temporal slice of the input. For each step
# of the output sequence, decide which character should be chosen.
model.add(layers.Dense(len(chars), activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()

Setting return_sequences=True in LSTM(128, return_sequences=True) makes the layer return the output at every timestep rather than only the final one. This is what allows LSTM layers to be stacked, and it is required here because the model must emit one character per output timestep. For example, for ans = '531', the model outputs '5' at the first timestep (the argmax of the predicted probability distribution over the vocabulary), '3' at the second, and '1' at the third.
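A quick shape check (a standalone sketch, not from the original example) showing the difference return_sequences makes:

import numpy as np
from tensorflow.keras import layers

inp = np.zeros((1, 4, 12), dtype="float32")  # (batch, timesteps, features)
print(layers.LSTM(8)(inp).shape)                         # (1, 8): last output only
print(layers.LSTM(8, return_sequences=True)(inp).shape)  # (1, 4, 8): one output per timestep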

V Train the model

epochs = 30
batch_size = 64


# Train the model each generation and show predictions against the validation
# dataset.
for epoch in range(1, epochs):
    print()
    print("Iteration", epoch)
    model.fit(
        x_train,
        y_train,
        batch_size=batch_size,
        epochs=1,
        validation_data=(x_val, y_val)
    )
    # Select 10 samples from the validation set at random so we can visualize errors.
    for i in range(10):
        ind = np.random.randint(0, len(x_val))
        # Randomly pick the ind-th example from the validation set
        rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
        preds = np.argmax(model.predict(rowx), axis=-1)
        q = ctable.decode(rowx[0])
        correct = ctable.decode(rowy[0])
        guess = ctable.decode(preds[0], calc_argmax=False)
        print("Q", q[::-1] if REVERSE else q, end=" ")
        print("T", correct, end=" ")
        if correct == guess:
            print("☑ " + guess)
        else:
            print("☒ " + guess)

The model reaches roughly 99%+ accuracy on the validation set.
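A hedged sketch of querying the trained model on a new sum; it assumes model, ctable, chars, MAXLEN, DIGITS, and REVERSE from the code above are in scope:

def predict_sum(a, b):
    q = "{}+{}".format(a, b)
    query = q + " " * (MAXLEN - len(q))
    if REVERSE:
        query = query[::-1]
    x = np.zeros((1, MAXLEN, len(chars)), dtype=bool)
    x[0] = ctable.encode(query, MAXLEN)
    preds = np.argmax(model.predict(x), axis=-1)
    return ctable.decode(preds[0], calc_argmax=False).strip()

print(predict_sum(123, 456))  # '579' once the model has converged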
