Understanding the Transformer for Sentence Translation

Preface:

        RNNs are hard to parallelize and struggle with long-range dependencies. The Transformer is a new, simple network architecture based entirely on attention mechanisms, dispensing with recurrence and convolution.

Model Structure

        Unlike an LSTM, the Transformer has no built-in mechanism for handling sequence order; it treats every word in a sequence as independent of the others. Positional encodings are therefore used to preserve information about word order in the sentence.

        This improves the model's awareness of position and compensates for the missing positional information in self-attention (self-attention itself carries no notion of position, yet the tokens have a definite order, so positional encodings are needed).

Positional encoding:

PE_{(pos,2i)}=\sin\left(\frac{pos}{10000^{\frac{2i}{d_{model}}}}\right)

PE_{(pos,2i+1)}=\cos\left(\frac{pos}{10000^{\frac{2i}{d_{model}}}}\right)
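
The table defined by these two formulas can be built with a few lines of NumPy. This is only a minimal sketch (with a made-up d_model of 8 and 6 positions), separate from the full get_position_encoding_table used in the code further below:

import numpy as np

def positional_encoding(n_position, d_model):
    """Sinusoidal positional encoding: even dimensions use sin, odd dimensions use cos."""
    pe = np.zeros((n_position, d_model))
    for pos in range(n_position):
        for i in range(d_model):
            angle = pos / np.power(10000, 2 * (i // 2) / d_model)
            pe[pos, i] = np.sin(angle) if i % 2 == 0 else np.cos(angle)
    return pe

pe = positional_encoding(n_position=6, d_model=8)  # toy sizes, chosen for illustration
print(pe.shape)  # (6, 8)
print(pe[0])     # position 0: 0.0 on even dimensions (sin 0), 1.0 on odd dimensions (cos 0)

Each row of the table is fixed rather than learned and is simply added to the embedding of the token sitting at that position.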

Multi-Head Attention & Scaled Dot-Product Attention

        The encoder's input first flows through a self-attention layer, which lets the encoder look at the other words of the input sequence while it encodes each word.
        The output of self-attention goes into a position-wise feed-forward network; the feed-forward network applied at each input position is independent of the others.
        The decoder has the same sublayers, but inserts an additional attention layer between them that lets the decoder attend to the relevant parts of the input sentence, much like the attention in a seq2seq model.
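
As a self-contained sketch of the scaled dot-product attention these layers are built on, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, using toy tensor shapes that match the comments in the code below (this is an illustrative re-implementation, not the exact function used later):

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: [..., len_q, d_k]   K: [..., len_k, d_k]   V: [..., len_k, d_v]
    d_k = Q.size(-1)
    scores = torch.matmul(Q, K.transpose(-2, -1)) / d_k ** 0.5  # [..., len_q, len_k]
    if mask is not None:
        scores = scores.masked_fill(mask, -1e9)  # masked positions get ~0 attention weight
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, V), attn  # context: [..., len_q, d_v]

Q = torch.randn(1, 8, 5, 64)  # [batch, heads, len_q, d_k]
K = torch.randn(1, 8, 6, 64)  # [batch, heads, len_k, d_k]
V = torch.randn(1, 8, 6, 64)  # [batch, heads, len_k, d_v]
context, attn = scaled_dot_product_attention(Q, K, V)
print(context.shape, attn.shape)  # torch.Size([1, 8, 5, 64]) torch.Size([1, 8, 5, 6])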

Code:


import numpy as np
import torch, time, itertools, os, sys 
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt

# S: symbol marking the start of the decoder input.
# E: symbol marking the end of the decoder output.
# P: padding symbol used when a batch is shorter than the number of time steps.

'''1. Data preprocessing'''
def pre_process(sentences):
    # put 'P' first so that the padding token gets index 0
    src_sequence = ['P']
    src_sequence.extend(sentences[0].split())
    src_list = []

    for word in src_sequence:
        if word not in src_list:
            src_list.append(word)
    src_dict = {w:i for i,w in enumerate(src_list)}
    src_dict_size = len(src_dict)
    src_len = len(sentences[0].split()) # length of source

    # put 'P' first so that the padding token gets index 0
    tgt_sequence = ['P']
    tgt_sequence.extend(sentences[1].split()+sentences[2].split())
    tgt_list = []
    
    for word in tgt_sequence:
        if word not in tgt_list:
            tgt_list.append(word)
    tgt_dict = {w:i for i,w in enumerate(tgt_list)}
    number_dict = {i:w for i,w in enumerate(tgt_dict)}
    tgt_dict_size = len(tgt_dict)
    tgt_len = len(sentences[1].split()) # length of target

    return src_dict,src_dict_size,tgt_dict,number_dict,tgt_dict_size,src_len,tgt_len

'''Build token-id input tensors from the sentence data'''
def make_batch(sentences):
    input_batch = [[src_dict[n] for n in sentences[0].split()]]
    output_batch = [[tgt_dict[n] for n in sentences[1].split()]]
    target_batch = [[tgt_dict[n] for n in sentences[2].split()]]
    input_batch = torch.LongTensor(np.array(input_batch)).to(device)
    output_batch = torch.LongTensor(np.array(output_batch)).to(device)
    target_batch = torch.LongTensor(np.array(target_batch)).to(device)
    # print(input_batch, output_batch, target_batch) # e.g. tensor([[1, 2, 3, 4, 5, 0]]) tensor([[1, 2, 3, 4, 5]]) tensor([[2, 3, 4, 5, 6]])
    return input_batch, output_batch,target_batch

def get_position_encoding_table(n_position, d_model): 
    # inputs: (src_len+1, d_model) or (tgt_len+1, d_model)
    pos_table = np.zeros((n_position, d_model))
    for pos in range(n_position):
        for i in range(d_model):
            tmp = pos / np.power(10000, 2*(i//2) / d_model)
            if i % 2 == 0: 
                # even dimensions use sine
                pos_table[pos][i] = np.sin(tmp) # (7 or 6, 512)
            else:
                # odd dimensions use cosine
                pos_table[pos][i] = np.cos(tmp) # (7 or 6, 512)

    return torch.FloatTensor(pos_table).to(device)

def get_attn_pad_mask(seq_q, seq_k, dict): 
    '''The mask has shape (len_q, len_k) so that it lines up with
    torch.matmul(Q, K^T) inside scaled dot-product attention.'''
    # (seq_q, seq_k): (dec_inputs, enc_inputs) 
    # dec_inputs:[batch_size, tgt_len] # [1,5]
    # enc_inputs:[batch_size, src_len] # [1,6]
    batch_size, len_q = seq_q.size() # 1,5
    batch_size, len_k = seq_k.size() # 1,6
    
    # eq(zero) is PAD token
    pad_attn_mask = seq_k.data.eq(dict['P']).unsqueeze(1) # add a dimension: enc: [1,6] -> [1,1,6]
    # expand the matrix: enc: pad_attn_mask: [1,1,6] -> [1,5,6]
    return pad_attn_mask.expand(batch_size, len_q, len_k) # batch_size, len_q, len_k

'''Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V'''
def Scaled_Dot_Product_Attention(Q, K, V, attn_mask): 
    # Q_s: [batch_size, n_heads, len_q, d_k] # [1,8,5,64]
    # K_s: [batch_size, n_heads, len_k, d_k] # [1,8,6,64]
    # attn_mask: [batch_size, n_heads, len_q, len_k] # [1,8,5,6]

    # [1,8,5,64] * [1,8,64,6] -> [1,8,5,6]
    # scores : [batch_size, n_heads, len_q, len_k]
    scores = torch.matmul(Q, K.transpose(2,3)) / np.sqrt(d_k) # divided by scale

    scores.masked_fill_(attn_mask, -1e9)

    # positions holding 'P' have a score of -1e9, so their softmax weight is ~0
    softmax = nn.Softmax(dim=-1) # softmax over the last (key) dimension
    attn = softmax(scores) # [1,8,6,6]
    # [1,8,6,6] * [1,8,6,64] -> [1,8,6,64]
    context = torch.matmul(attn, V) # [1,8,6,64]
    return context, attn

class MultiHeadAttention(nn.Module):
    # dec_enc_attn(dec_outputs, enc_outputs, enc_outputs, dec_enc_attn_mask)
    def __init__(self):
        super().__init__()
        self.W_Q = nn.Linear(d_model, d_k*n_heads) # (512, 64*8); d_q must equal d_k
        self.W_K = nn.Linear(d_model, d_k*n_heads) # (512, 64*8); overall dimension stays the same
        self.W_V = nn.Linear(d_model, d_v*n_heads) # (512, 64*8)
        self.linear = nn.Linear(n_heads*d_v, d_model)
        self.layer_norm = nn.LayerNorm(d_model)

    def forward(self, Q, K, V, attn_mask):
        # dec_outputs: [batch_size, tgt_len, d_model] # [1,5,512]
        # enc_outputs: [batch_size, src_len, d_model] # [1,6,512]
        # dec_enc_attn_mask: [batch_size, tgt_len, src_len] # [1,5,6]
        # q/k/v: [batch_size, len_q/k/v, d_model]
        residual, batch_size = Q, len(Q)
        
        # Q_s: [batch_size, len_q, n_heads, d_q] # [1,5,8,64]
        # new_Q_s: [batch_size, n_heads, len_q, d_q] # [1,8,5,64]
        Q_s = self.W_Q(Q).view(batch_size, -1, n_heads, d_k).transpose(1,2)  
        # K_s: [batch_size, n_heads, len_k, d_k] # [1,8,6,64]
        K_s = self.W_K(K).view(batch_size, -1, n_heads, d_k).transpose(1,2)  
        # V_s: [batch_size, n_heads, len_k, d_v] # [1,8,6,64]
        V_s = self.W_V(V).view(batch_size, -1, n_heads, d_v).transpose(1,2)  

        # attn_mask : [1,5,6] -> [1,1,5,6] -> [1,8,5,6]
        # attn_mask : [batch_size, n_heads, len_q, len_k]
        attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1) 

        # context: [batch_size, n_heads, len_q, d_v]
        # attn: [batch_size, n_heads, len_q(=len_k), len_k(=len_q)]
        # context: [1,8,5,64] attn: [1,8,5,6]
        context, attn = Scaled_Dot_Product_Attention(Q_s, K_s, V_s, attn_mask)
        
        # context: [1,8,5,64] -> [1,5,512]
        context = context.transpose(1, 2).contiguous().view(batch_size, -1, n_heads * d_v)
        # context: [1,5,512] -> [1,5,512]
        output = self.linear(context)
        
        output = self.layer_norm(output + residual)
        return output, attn 

class Position_wise_Feed_Forward_Networks(nn.Module):
    def __init__(self):
        super().__init__()
        
        # 512 -> 2048
        self.conv1 = nn.Conv1d(in_channels=d_model, out_channels=d_ff, kernel_size=1)
        # 2048 -> 512
        self.conv2 = nn.Conv1d(in_channels=d_ff, out_channels=d_model, kernel_size=1)
        self.layer_norm = nn.LayerNorm(d_model)

    def forward(self, inputs):
        # enc_outputs: [batch_size, source_len, d_model] # [1,6,512]
        residual = inputs 
        relu = nn.ReLU()
        # output: 512 -> 2048 [1,2048,6]
        output = relu(self.conv1(inputs.transpose(1, 2)))
        # output: 2048 -> 512 [1,6,512]
        output = self.conv2(output).transpose(1, 2)
        return self.layer_norm(output + residual)

class EncoderLayer(nn.Module):
    def __init__(self):
        super(EncoderLayer, self).__init__()
        self.enc_attn = MultiHeadAttention()
        self.pos_ffn = Position_wise_Feed_Forward_Networks()

    def forward(self, enc_outputs, enc_attn_mask):
        # enc_attn_mask: [1,6,6]
        # enc_outputs to same Q,K,V
        # enc_outputs: [batch_size, source_len, d_model] # [1, 6, 512]
        enc_outputs, attn = self.enc_attn(enc_outputs, \
            enc_outputs, enc_outputs, enc_attn_mask) 
        # enc_outputs: [batch_size , len_q , d_model]
        enc_outputs = self.pos_ffn(enc_outputs) 
        return enc_outputs, attn

class DecoderLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.dec_attn = MultiHeadAttention()
        self.dec_enc_attn = MultiHeadAttention()
        self.pos_ffn = Position_wise_Feed_Forward_Networks()

    def forward(self, dec_outputs, enc_outputs, dec_attn_mask, dec_enc_attn_mask):
        dec_outputs, dec_attn = \
            self.dec_attn(dec_outputs, dec_outputs, dec_outputs, dec_attn_mask)
        # dec_outputs: [1, 5, 512]   dec_enc_attn: [1, 8, 5, 6]
        dec_outputs, dec_enc_attn = \
            self.dec_enc_attn(dec_outputs, enc_outputs, enc_outputs, dec_enc_attn_mask)
        # equivalent to two fully connected layers: 512 -> 2048 -> 512
        dec_outputs = self.pos_ffn(dec_outputs)
        return dec_outputs, dec_attn, dec_enc_attn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # embedding matrix for the input tokens
        self.src_emb = nn.Embedding(src_dict_size, d_model).to(device)
        # n_layers (6) stacked encoder layers
        self.layers = nn.ModuleList([EncoderLayer().to(device) for _ in range(n_layers)])

    def forward(self, enc_inputs): # enc_inputs: [batch_size, source_len] # [1, 6]
        # token ids of the source sentence, e.g. [1,2,3,4,5,0]
        input = [src_dict[i] for i in sentences[0].split()]

        # learnable token-embedding matrix + fixed (non-learnable) positional matrix
        # (note: the positional table is indexed here by token id rather than by position)
        # enc_outputs: [1, 6, 512]
        enc_outputs = self.src_emb(enc_inputs) + position_encoding[input]
        # mask 'P'; returns a boolean matrix of shape [batch_size,src_len,src_len]=[1,6,6]
        enc_attn_mask = get_attn_pad_mask(enc_inputs, enc_inputs, src_dict)

        enc_attns = []
        for layer in self.layers:
            # enc_outputs is both this layer's input and its output
            enc_outputs, enc_attn = layer(enc_outputs, enc_attn_mask)
            enc_attns.append(enc_attn)
        return enc_outputs, enc_attns

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.tgt_emb = nn.Embedding(tgt_dict_size, d_model)
        self.layers = nn.ModuleList([DecoderLayer() for _ in range(n_layers)])

    def forward(self, dec_inputs, enc_inputs, enc_outputs): 
        # dec_inputs : [batch_size, target_len] [1,5]
        input = [tgt_dict[i] for i in sentences[1].split()]
        dec_outputs = self.tgt_emb(dec_inputs)

        # add positional information for every word of the decoder input (no dedup)
        # dec_outputs: [1, 5, 512]
        dec_outputs = dec_outputs + position_encoding[input] 
        
        # mask the pad token; returns a boolean matrix of shape [batch_size,tgt_len,tgt_len]=[1,5,5]
        dec_attn_mask = get_attn_pad_mask(dec_inputs, dec_inputs, tgt_dict)
        
        # mask the pad token and all future time steps (subsequent/causal mask)
        for i in range(0, len(dec_inputs[0])):
            for j in range(i+1, len(dec_inputs[0])):
                dec_attn_mask[0][i][j] = True # forces the softmax weight to 0
        
        # second attention block: queries from the decoder, keys/values from the encoder; mask the encoder's pad tokens
        dec_enc_attn_mask = get_attn_pad_mask(dec_inputs, enc_inputs, src_dict) # [1,5,6]

        dec_attns, dec_enc_attns = [], []
        for layer in self.layers:
            dec_outputs, dec_attn, dec_enc_attn = \
                layer(dec_outputs, enc_outputs, dec_attn_mask, dec_enc_attn_mask)
            dec_attns.append(dec_attn)
            dec_enc_attns.append(dec_enc_attn)
        return dec_outputs, dec_attns, dec_enc_attns

'''2. Build the Transformer model'''
class Transformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()
        self.projection = nn.Linear(d_model, tgt_dict_size, bias=False)
    def forward(self, dec_inputs, enc_inputs):
        enc_outputs, enc_attns = self.encoder(enc_inputs)
        dec_outputs, dec_attns, dec_enc_attns \
            = self.decoder(dec_inputs, enc_inputs, enc_outputs)
        # model_outputs: [batch_size, tgt_len, tgt_dict_size]
        model_outputs = self.projection(dec_outputs) # [1,5,7]
        model_outputs = model_outputs.squeeze(0)

        return model_outputs, enc_attns, dec_attns, dec_enc_attns

'''Greedy search'''
def Greedy_Search(model, enc_inputs, start_symbol): # start_symbol = tgt_dict['S']
    
    padding_inputs = 'S' + ' P' * (tgt_len-1)
    # [1,5]
    dec_inputs = torch.Tensor([[tgt_dict[i] for i in padding_inputs.split()]]).to(device).type_as(enc_inputs.data)
    next_symbol = start_symbol
    i = 0

    while True:
        
        dec_inputs[0][i] = next_symbol
        outputs, _, _, _ = model(dec_inputs, enc_inputs)    
        predict = outputs.squeeze(0).max(dim=-1, keepdim=False)[1]
        next_symbol = predict[i].item()
        i = i + 1
        # stop once the end symbol is produced or the maximum length is exceeded
        if next_symbol == tgt_dict['E'] or i > max_len:
            break
    outputs, _, _, _ = model(dec_inputs, enc_inputs)    
    predict = outputs.squeeze(0).max(dim=-1, keepdim=False)[1]
    return predict

# Build a consecutively repeated tensor of candidates for beam search
def repeat(inputs,k=2):
    
    tmp = torch.zeros(inputs.size(0)*k, inputs.size(1), inputs.size(2)).type_as(inputs.data)
    pos = 0
    while pos < len(inputs):
        tmp[pos*k] = inputs[pos]
        for i in range(1,k):
            tmp[pos*k+i] = inputs[pos]
        pos = pos + 1
    return tmp

'''Beam search'''
def Beam_Search(model, enc_inputs, start_symbol, k=2):
    padding_inputs = 'S' + (tgt_len-1)*' P'
    # dec_inputs:[1,5]
    dec_inputs = torch.Tensor([[tgt_dict[i] for i in padding_inputs.split()]]).to(device).type_as(enc_inputs.data)
    # all_dec_inputs stores every candidate dec_inputs
    # all_dec_inputs: [1,1,5]
    all_dec_inputs = dec_inputs.unsqueeze(0)
    # Because the first token is fixed to 'S', the number of candidates actually generated per round is
    # 2, 4, 8, 16, 16 (the last round copies to 32, but the loop ends before new values are assigned)
    for i in range(tgt_len):
        # for each word, repeat all_dec_inputs k times to hold the k**i newly generated candidates
        # note: repeat() duplicates consecutively, unlike torch.Tensor.repeat, which tiles the whole tensor
        all_dec_inputs = repeat(all_dec_inputs,k) 
        
        # for the i-th word, walk through all_dec_inputs (all candidates from the previous round)
        # with stride k and feed each one to the model
        for j in range(0,len(all_dec_inputs),k):
            # print(j)
            dec_inputs = all_dec_inputs[j] 
            outputs, _, _, _ = model(dec_inputs, enc_inputs)
            # sort the scores to get the indices
            indices = outputs.sort(descending=True).indices # [5,7]
            # take the pos-th most likely token for word i and assign it to candidate j+pos
            # (because of the consecutive repetition, the outputs generated from candidate j
            #  occupy candidates j .. j+k-1)
            for pos in range(k):
                if i < tgt_len - 1:
                    # i+1: reserve the slot for the next round, the same idea as in greedy search
                    all_dec_inputs[j+pos][0][i+1] = indices[i][pos]
    print(all_dec_inputs)
    '''Score every candidate dec_inputs and pick the one with the highest total output score'''
    result = []
    for dec_inputs in all_dec_inputs:
        sum = 0
        outputs, _, _, _ = model(dec_inputs, enc_inputs) 
        indexs = outputs.squeeze(0).max(dim=-1, keepdim=False)[1]
        for i in range(tgt_len):
            # accumulate the (unnormalized) output score of each predicted word
            sum = sum + outputs.data[i][indexs[i]]
        result.append(sum.item())
    '''An overfit model may still rank a wrong candidate highest here'''
    max_index = result.index(max(result))
    predict = all_dec_inputs[max_index]
    return predict

if __name__ == '__main__':
    chars = 30 * '*'
    # sentences = ['ich mochte ein bier P', 'S i want a beer', 'i want a beer E']
    sentences = ['我 要 喝 啤 酒 P', 'S i want a beer', 'i want a beer E']
    device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
    d_model = 512  # embedding size
    d_ff = 2048  # feed-forward dimension
    d_k = d_v = 64  # dimension of K(=Q) and V
    n_layers = 6  # number of encoder / decoder layers
    n_heads = 8  # number of heads in multi-head attention
    max_len = 5

    '''1. Data preprocessing'''
    src_dict,src_dict_size,tgt_dict,number_dict,tgt_dict_size,src_len,tgt_len = pre_process(sentences)
    position_encoding = get_position_encoding_table(max(src_dict_size, tgt_dict_size), d_model)
    enc_inputs, dec_inputs, target_batch = make_batch(sentences)

    '''2. Build the model'''
    model = Transformer()
    model.to(device)

    criterion = nn.CrossEntropyLoss()
    # Adam converges poorly on this toy example, so SGD is used instead
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    if os.path.exists('model_param.pt'):
        # load the saved parameters into the model
        model.load_state_dict(torch.load('model_param.pt', map_location=device))

    '''3. Training'''
    print('{}\nTrain\n{}'.format('*'*30, '*'*30))
    loss_record = []
    for epoch in range(1000):
        optimizer.zero_grad()
        outputs, enc_attns, dec_attns, dec_enc_attns \
            = model(dec_inputs, enc_inputs)
        outputs = outputs.to(device)
        loss = criterion(outputs, target_batch.contiguous().view(-1))
        loss.backward()
        optimizer.step()

        if loss >= 0.01: # stop early once the loss stays below 0.01 for 30 consecutive epochs
            loss_record = []
        else:
            loss_record.append(loss.item())
            if len(loss_record) == 30:
                torch.save(model.state_dict(), 'model_param.pt')
                break    

        if (epoch + 1) % 100 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'Loss = {:.6f}'.format(loss))
            torch.save(model.state_dict(), 'model_param.pt')

    '''4. Testing'''
    print('{}\nTest\n{}'.format('*'*30, '*'*30))
    # greedy search
    predict = Greedy_Search(model, enc_inputs, start_symbol=tgt_dict["S"])
    # beam search (its result overwrites the greedy prediction)
    predict = Beam_Search(model, enc_inputs, start_symbol=tgt_dict["S"])
    
    for word in sentences[0].split():
        if word not in ['S', 'P', 'E']:
            print(word, end='')
    print(' ->',end=' ')
    
    for i in predict.data.squeeze():
        if number_dict[int(i)] not in ['S', 'P', 'E'] :
            print(number_dict[int(i)], end=' ')

Output:

 

  1. Greedy search vs. beam search. Both operate over multiple time steps and compare probabilities at every step. A task that predicts or translates only a single word involves no multiple time steps, so neither term applies; likewise, calling predict = model(inputs) once does not count as greedy or beam search, because probabilities are only compared at the very end. For Seq2Seq, and for a Transformer driven as a sequence model, both greedy and beam search should overwrite the 'P' tokens of the padded 'S P P P P' decoder input with the predicted words.

    1. For a task that translates several words, loop over each generated word:
      1. Greedy search: at every time step, produce a probability distribution over the next word, take the argmax, and feed that word into the next time step until all words are generated.
      2. Beam search (k=3): at every time step, keep the k most probable words and feed each of them separately into the next time step. This performs on the order of $k^T$ predictions and stores $k^T$ outputs, and the sequence with the highest total probability is chosen at the end (a toy sketch of this expansion follows the list below).
    2. For a task that generates several new words, likewise loop over each generated word:
      1. Greedy search: after the input time steps, each output time step produces a probability distribution over the next word; take the argmax and feed it into the next time step until all words are generated.
      2. Beam search (k=3): after the input time steps, each output time step keeps the k most probable words, each fed separately into the next time step; again this costs about $k^T$ predictions and $k^T$ stored outputs, and the highest-scoring sequence is chosen at the end.
  2. Why does calling predict = model(enc_inputs, dec_inputs='S P P P P P') directly work poorly for Seq2Seq and for a Transformer used as a sequence model, especially badly for the Transformer, while other models do fine? A single call like this is neither greedy search nor beam search, because probabilities are compared only at the very end rather than at every step.

    1. RNN/LSTM models do not need the 'S P P P P' padding, so they are not disturbed by the blank tokens and can generate directly, whereas Seq2Seq/Transformer models are affected by the padding, so the single-call result is poor.
    2. A sequence model behaves like an RNN/LSTM: one time step per word, each decoder step initially fed 'P', and the output of one time step can influence the next, so the result is mediocre.
    3. The Transformer, however, consumes and produces the whole sentence at once, so one step's output cannot influence the next within a single call; the single-call result is therefore especially bad, and the words must be generated one by one in a loop.
  3. Why does beam search sometimes give a different sentence a higher overall score? The model is too complex for the tiny training set, so it overfits.
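
To make the $k^T$ expansion described above concrete, the following toy sketch scores made-up, model-free per-step probabilities for a hypothetical 3-step decoder with a 4-word vocabulary; like the Beam_Search function in the code, it keeps all k**T candidates instead of pruning to the best k at each step as standard beam search would:

import numpy as np

# made-up per-step log-probabilities over a 4-word vocabulary (step_logprobs[step][word]);
# in a real decoder these would depend on the prefix generated so far
step_logprobs = np.log(np.array([
    [0.10, 0.60, 0.20, 0.10],
    [0.30, 0.30, 0.30, 0.10],
    [0.05, 0.05, 0.10, 0.80],
]))

k, T = 2, 3
candidates = [([], 0.0)]  # (word-id sequence, accumulated log-score)
for t in range(T):
    expanded = []
    for seq, score in candidates:
        top_k = np.argsort(step_logprobs[t])[::-1][:k]  # the k most probable next words
        for w in top_k:
            expanded.append((seq + [int(w)], score + step_logprobs[t][w]))
    candidates = expanded  # without pruning this grows to k**T sequences

best_seq, best_score = max(candidates, key=lambda c: c[1])
print(len(candidates), best_seq, round(best_score, 3))  # 8 candidates for k=2, T=3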

Source code reference: Cheng_0829

Reference:

The Illustrated Transformer – Jay Alammar – Visualizing machine learning one concept at a time. (jalammar.github.io)
