jp_cn_translation

Japanese-Chinese Machine Translation Model with Transformer & PyTorch

A tutorial using Jupyter Notebook, PyTorch, Torchtext, and SentencePiece

Import required packages

First, let's make sure the packages below are installed on our system; if any of them are missing, install them before continuing.

import math
import torchtext
import torch
import torch.nn as nn
from torch import Tensor
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
from collections import Counter
from torchtext.vocab import Vocab
from torch.nn import TransformerEncoder, TransformerDecoder, TransformerEncoderLayer, TransformerDecoderLayer
import io
import time
import pandas as pd
import numpy as np
import pickle
import tqdm
import sentencepiece as spm
torch.manual_seed(0)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# print(torch.cuda.get_device_name(0))  ## if you have a GPU, try running this code on your own machine
device
device(type='cuda')

Get the parallel dataset

In this tutorial, we will use the parallel dataset downloaded from JParaCrawl [http://www.kecl.ntt.co.jp/icl/lirg/jparacrawl], which is described as the “largest publicly available English-Japanese parallel corpus created by NTT. It was created by largely crawling the web and automatically aligning parallel sentences.” You can also see the paper here. Note that the file loaded below (zh-ja.bicleaner05.txt) contains Chinese-Japanese sentence pairs, so although variable names such as trainen and en_vocab keep the "en" suffix from the original English-Japanese tutorial, they actually hold Chinese text here.

df = pd.read_csv('./zh-ja/zh-ja.bicleaner05.txt', sep='\\t', engine='python', header=None)  # read the tab-separated file with the python parsing engine
trainen = df[2].values.tolist()  # [:10000]  -- column 2: the Chinese side (variable name kept from the original tutorial)
trainja = df[3].values.tolist()  # [:10000]  -- column 3: the Japanese side
# trainen.pop(5972)
# trainja.pop(5972)

After importing all the Japanese sentences and their counterparts, I deleted the last entry in the dataset because it had a missing value. In total, the number of sentences in both trainen and trainja is 5,973,071. However, for learning purposes it is often recommended to sample the data and make sure everything works as intended before using all of it at once, to save time.
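For a quick dry run before committing to the full corpus, a minimal sketch is to slice the two lists; the 10,000 figure below is arbitrary and only for illustration.

SAMPLE_SIZE = 10_000  # illustrative subset size; adjust to your machine
trainen_small = trainen[:SAMPLE_SIZE]
trainja_small = trainja[:SAMPLE_SIZE]
print(len(trainen_small), len(trainja_small))  # 10000 10000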

Here is an example of a sentence pair contained in the dataset.

print(trainen[500])  # print a sample pair from the dataset
print(trainja[500])
Chinese HS Code Harmonized Code System < HS编码 2905 无环醇及其卤化、磺化、硝化或亚硝化衍生物 HS Code List (Harmonized System Code) for US, UK, EU, China, India, France, Japan, Russia, Germany, Korea, Canada ...
Japanese HS Code Harmonized Code System < HSコード 2905 非環式アルコール並びにそのハロゲン化誘導体、スルホン化誘導体、ニトロ化誘導体及びニトロソ化誘導体 HS Code List (Harmonized System Code) for US, UK, EU, China, India, France, Japan, Russia, Germany, Korea, Canada ...

We can also use a different parallel dataset to follow along with this article; just make sure we can process the data into the two lists of strings shown above, containing the sentences of the two languages.

Prepare the tokenizers

Unlike English and other alphabetical languages, a Japanese sentence does not contain whitespace to separate words. We can use the tokenizers provided by JParaCrawl, which were created with SentencePiece for both Japanese and English; you can visit the JParaCrawl website to download them, or click here.

en_tokenizer = spm.SentencePieceProcessor(model_file='enja_spm_models/spm.en.nopretok.model')  # load the downloaded SentencePiece models
ja_tokenizer = spm.SentencePieceProcessor(model_file='enja_spm_models/spm.ja.nopretok.model')

After the tokenizers are loaded, you can test them, for example, by executing the code below.

en_tokenizer.encode("All residents aged 20 to 59 years who live in Japan must enroll in public pension system.", out_type=str)  # test the English tokenizer
ja_tokenizer.encode("年金 日本に住んでいる20歳~60歳の全ての人は、公的年金制度に加入しなければなりません。", out_type=str)  # test the Japanese tokenizer

Note that out_type must be the Python type str (or int), not the string literal 'str'; passing 'str' makes SentencePiece raise RuntimeError: unknown out_type=str, which is exactly the traceback the original notebook produced. With out_type=str, each call returns the sentence split into subword pieces.
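Assuming the tokenizers loaded above, a quick round trip also shows the id form (out_type=int) and how decode() maps ids back to text; the exact pieces depend on the downloaded SentencePiece models.

sample = "日本に住んでいる人は、公的年金制度に加入しなければなりません。"
pieces = ja_tokenizer.encode(sample, out_type=str)  # subword pieces as strings
ids = ja_tokenizer.encode(sample, out_type=int)     # the same pieces as integer ids
print(pieces[:10])
print(ids[:10])
print(ja_tokenizer.decode(ids))  # reconstructs the original sentence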

Build the TorchText Vocab objects and convert the sentences into Torch tensors

Using the tokenizers and the raw sentences, we then build the Vocab object imported from TorchText. This process can take a few seconds or minutes depending on the size of our dataset and computing power. A different tokenizer can also affect the time needed to build the vocab; I tried several other tokenizers for Japanese, but SentencePiece seems to work well and be fast enough for me. (This relies on the older torchtext Vocab interface that accepts a Counter; newer torchtext releases changed this API.)

def build_vocab(sentences, tokenizer):
  counter = Counter()  # count the frequency of every subword piece
  for sentence in sentences:
    counter.update(tokenizer.encode(sentence, out_type=str))  # tokenize the sentence and update the counts
  return Vocab(counter, specials=['<unk>', '<pad>', '<bos>', '<eos>'])  # build a Vocab with the special tokens
ja_vocab = build_vocab(trainja, ja_tokenizer)  # Japanese vocabulary
en_vocab = build_vocab(trainen, en_tokenizer)  # "English" (here: Chinese) vocabulary
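A quick sanity check on the resulting Vocab objects (a sketch using the legacy torchtext API relied on above, where stoi and itos map between tokens and indices):

print(len(ja_vocab), len(en_vocab))  # vocabulary sizes
# indices of the special tokens declared in build_vocab
print(ja_vocab['<unk>'], ja_vocab['<pad>'], ja_vocab['<bos>'], ja_vocab['<eos>'])
print(en_vocab.itos[:10])  # the first few entries of the index-to-token list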

After we have the vocabulary objects, we can then use the vocab and the tokenizer objects to build the tensors for our training data.

def data_process(ja, en):
  data = []  # list of (japanese_tensor, other_language_tensor) pairs
  for (raw_ja, raw_en) in zip(ja, en):
    ja_tensor_ = torch.tensor([ja_vocab[token] for token in ja_tokenizer.encode(raw_ja.rstrip("\n"), out_type=str)],
                            dtype=torch.long)  # tokenize the Japanese sentence and map each piece to its vocab index
    en_tensor_ = torch.tensor([en_vocab[token] for token in en_tokenizer.encode(raw_en.rstrip("\n"), out_type=str)],
                            dtype=torch.long)  # same processing for the other language
    data.append((ja_tensor_, en_tensor_))  # store the pair of tensors
  return data
train_data = data_process(trainja, trainen)

Create the DataLoader object to be iterated during training

Here, I set BATCH_SIZE to 8 to prevent “cuda out of memory”, but this depends on various things such as your machine's memory capacity, the size of the data, and so on, so feel free to change the batch size according to your needs (note: the PyTorch tutorial sets the batch size to 128, using the Multi30k German-English dataset).

BATCH_SIZE = 8  # batch size
PAD_IDX = ja_vocab['<pad>']  # index of the padding token
BOS_IDX = ja_vocab['<bos>']  # index of the beginning-of-sentence token
EOS_IDX = ja_vocab['<eos>']  # index of the end-of-sentence token
def generate_batch(data_batch):
  ja_batch, en_batch = [], []
  for (ja_item, en_item) in data_batch:
    # add BOS and EOS markers around every sentence in both languages
    ja_batch.append(torch.cat([torch.tensor([BOS_IDX]), ja_item, torch.tensor([EOS_IDX])], dim=0))
    en_batch.append(torch.cat([torch.tensor([BOS_IDX]), en_item, torch.tensor([EOS_IDX])], dim=0))
  # pad all sentences in the batch to the same length with PAD_IDX
  ja_batch = pad_sequence(ja_batch, padding_value=PAD_IDX)
  en_batch = pad_sequence(en_batch, padding_value=PAD_IDX)
  return ja_batch, en_batch
train_iter = DataLoader(train_data, batch_size=BATCH_SIZE,
                        shuffle=True, collate_fn=generate_batch)
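To confirm that batching and padding behave as expected, we can pull a single batch out of the DataLoader; shapes are (sequence_length, batch_size) because pad_sequence defaults to batch_first=False. A minimal check:

ja_batch, en_batch = next(iter(train_iter))
print(ja_batch.shape, en_batch.shape)  # (longest_ja_len + 2, BATCH_SIZE) and (longest_en_len + 2, BATCH_SIZE)
print(ja_batch.dtype)                  # torch.int64 token ids, padded with PAD_IDX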

Sequence-to-sequence Transformer

The next few code blocks and text explanations (written in italic) are taken from the original PyTorch tutorial [https://pytorch.org/tutorials/beginner/translation_transformer.html]. I did not make any change except for BATCH_SIZE and the name de_vocab, which is changed to ja_vocab.

Transformer is a Seq2Seq model introduced in the “Attention is all you need” paper for solving the machine translation task. The Transformer model consists of an encoder and a decoder block, each containing a fixed number of layers.

The encoder processes the input sequence by propagating it through a series of Multi-head Attention and Feed-forward network layers. The output from the encoder, referred to as memory, is fed to the decoder along with the target tensors. The encoder and decoder are trained in an end-to-end fashion using the teacher forcing technique.

from torch.nn import (TransformerEncoder, TransformerDecoder,
                      TransformerEncoderLayer, TransformerDecoderLayer)


class Seq2SeqTransformer(nn.Module):
    def __init__(self, num_encoder_layers: int, num_decoder_layers: int,
                 emb_size: int, src_vocab_size: int, tgt_vocab_size: int,
                 dim_feedforward:int = 512, dropout:float = 0.1):
        super(Seq2SeqTransformer, self).__init__()
        # encoder: a stack of TransformerEncoderLayer modules
        # (NHEAD is the global defined in the hyperparameter cell below)
        encoder_layer = TransformerEncoderLayer(d_model=emb_size, nhead=NHEAD,
                                                dim_feedforward=dim_feedforward)
        self.transformer_encoder = TransformerEncoder(encoder_layer, num_layers=num_encoder_layers)
        # decoder: a stack of TransformerDecoderLayer modules
        decoder_layer = TransformerDecoderLayer(d_model=emb_size, nhead=NHEAD,
                                                dim_feedforward=dim_feedforward)
        self.transformer_decoder = TransformerDecoder(decoder_layer, num_layers=num_decoder_layers)
        # generator: a linear layer mapping decoder outputs to the target vocabulary
        self.generator = nn.Linear(emb_size, tgt_vocab_size)
        # token embeddings for the source and target languages
        self.src_tok_emb = TokenEmbedding(src_vocab_size, emb_size)
        self.tgt_tok_emb = TokenEmbedding(tgt_vocab_size, emb_size)
        # positional encoding adds position information to the embeddings
        self.positional_encoding = PositionalEncoding(emb_size, dropout=dropout)

    def forward(self, src: Tensor, trg: Tensor, src_mask: Tensor,
                tgt_mask: Tensor, src_padding_mask: Tensor,
                tgt_padding_mask: Tensor, memory_key_padding_mask: Tensor):
        src_emb = self.positional_encoding(self.src_tok_emb(src))
        tgt_emb = self.positional_encoding(self.tgt_tok_emb(trg))
        memory = self.transformer_encoder(src_emb, src_mask, src_padding_mask)
        outs = self.transformer_decoder(tgt_emb, memory, tgt_mask, None,
                                        tgt_padding_mask, memory_key_padding_mask)
        return self.generator(outs)

    def encode(self, src: Tensor, src_mask: Tensor):
        # encode the source sequence into memory
        return self.transformer_encoder(self.positional_encoding(
                            self.src_tok_emb(src)), src_mask)

    def decode(self, tgt: Tensor, memory: Tensor, tgt_mask: Tensor):
        # decode the target prefix against the encoder memory
        return self.transformer_decoder(self.positional_encoding(
                          self.tgt_tok_emb(tgt)), memory,
                          tgt_mask)

Text tokens are represented by using token embeddings. Positional encoding is added to the token embedding to introduce a notion of word order.
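For reference, the sinusoidal encoding implemented below follows the standard formulation from “Attention is all you need”, where pos is the position and i indexes the embedding dimension:

PE_{(pos,\,2i)} = \sin\left(pos / 10000^{2i/d_{\text{model}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\left(pos / 10000^{2i/d_{\text{model}}}\right)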

class PositionalEncoding(nn.Module):
    def __init__(self, emb_size: int, dropout, maxlen: int = 5000):
        super(PositionalEncoding, self).__init__()
        # per-dimension decay factors for the sinusoid frequencies
        den = torch.exp(- torch.arange(0, emb_size, 2) * math.log(10000) / emb_size)
        # positions 0 .. maxlen-1 as a column vector
        pos = torch.arange(0, maxlen).reshape(maxlen, 1)
        # fill a (maxlen, emb_size) matrix: sin on even columns, cos on odd columns
        pos_embedding = torch.zeros((maxlen, emb_size))
        pos_embedding[:, 0::2] = torch.sin(pos * den)
        pos_embedding[:, 1::2] = torch.cos(pos * den)
        pos_embedding = pos_embedding.unsqueeze(-2)  # add a batch dimension
        # dropout to prevent overfitting
        self.dropout = nn.Dropout(dropout)
        # register as a buffer: saved with the model but not trained as a parameter
        self.register_buffer('pos_embedding', pos_embedding)

    def forward(self, token_embedding: Tensor):
        # add the positional encoding for the first seq_len positions, then apply dropout
        return self.dropout(token_embedding +
                            self.pos_embedding[:token_embedding.size(0),:])

class TokenEmbedding(nn.Module):
    def __init__(self, vocab_size: int, emb_size):
        super(TokenEmbedding, self).__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)  # maps vocab indices to emb_size-dimensional vectors
        self.emb_size = emb_size
    def forward(self, tokens: Tensor):
        # embed the tokens and scale by sqrt(emb_size)
        return self.embedding(tokens.long()) * math.sqrt(self.emb_size)

We create a subsequent-word mask to stop a target word from attending to its subsequent words. We also create masks for hiding the source and target padding tokens.

def generate_square_subsequent_mask(sz):
    # lower-triangular boolean matrix: position i may attend to positions <= i
    mask = (torch.triu(torch.ones((sz, sz), device=device)) == 1).transpose(0, 1)
    # convert to float: allowed positions become 0.0, blocked positions become -inf
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask

def create_mask(src, tgt):
  # lengths of the source and target sequences
  src_seq_len = src.shape[0]
  tgt_seq_len = tgt.shape[0]
  # subsequent-word mask for the target; the source mask is all zeros (no masking)
  tgt_mask = generate_square_subsequent_mask(tgt_seq_len)
  src_mask = torch.zeros((src_seq_len, src_seq_len), device=device).type(torch.bool)
  # padding masks: True where the token is PAD_IDX
  src_padding_mask = (src == PAD_IDX).transpose(0, 1)
  tgt_padding_mask = (tgt == PAD_IDX).transpose(0, 1)
  return src_mask, tgt_mask, src_padding_mask, tgt_padding_mask
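A quick illustration (a sketch) of the subsequent-word mask for a length-4 sequence, where 0.0 marks positions a token may attend to and -inf blocks attention to future positions:

print(generate_square_subsequent_mask(4))
# roughly:
# tensor([[0., -inf, -inf, -inf],
#         [0.,   0., -inf, -inf],
#         [0.,   0.,   0., -inf],
#         [0.,   0.,   0.,   0.]])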

Define model parameters and instantiate the model. Our server's computing power is quite limited: the model can be trained with the configuration below, but the results will not be good. If you want to see proper training results, please run this code on your own machine with a GPU.

When you use your own GPU, set NUM_ENCODER_LAYERS and NUM_DECODER_LAYERS to 3 or higher, NHEAD to 8, and EMB_SIZE to 512.

SRC_VOCAB_SIZE = len(ja_vocab)  # source-language vocabulary size
TGT_VOCAB_SIZE = len(en_vocab)  # target-language vocabulary size
EMB_SIZE = 512  # embedding dimension
NHEAD = 8  # number of attention heads in multi-head attention
FFN_HID_DIM = 512  # hidden dimension of the feed-forward layers
BATCH_SIZE = 16
NUM_ENCODER_LAYERS = 3  # number of encoder layers
NUM_DECODER_LAYERS = 3  # number of decoder layers
NUM_EPOCHS = 16  # number of training epochs
# instantiate the Seq2SeqTransformer model
transformer = Seq2SeqTransformer(NUM_ENCODER_LAYERS, NUM_DECODER_LAYERS,
                                 EMB_SIZE, SRC_VOCAB_SIZE, TGT_VOCAB_SIZE,
                                 FFN_HID_DIM)
# Xavier initialization for the model parameters
for p in transformer.parameters():
    if p.dim() > 1:
        nn.init.xavier_uniform_(p)

transformer = transformer.to(device)

loss_fn = torch.nn.CrossEntropyLoss(ignore_index=PAD_IDX)  # cross-entropy loss, ignoring positions equal to PAD_IDX

optimizer = torch.optim.Adam(
    transformer.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9
)  # Adam optimizer settings
def train_epoch(model, train_iter, optimizer):
  model.train()  # set the model to training mode
  losses = 0
  for idx, (src, tgt) in enumerate(train_iter):
      src = src.to(device)
      tgt = tgt.to(device)

      tgt_input = tgt[:-1, :]  # decoder input: target shifted right (drop the last token)

      src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input)

      logits = model(src, tgt_input, src_mask, tgt_mask,
                                src_padding_mask, tgt_padding_mask, src_padding_mask)

      optimizer.zero_grad()  # clear the gradients

      tgt_out = tgt[1:,:]  # expected output: target shifted left (drop the first token)
      loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1))
      loss.backward()

      optimizer.step()  # update the parameters
      losses += loss.item()
  return losses / len(train_iter)  # return the average loss


def evaluate(model, val_iter):
  model.eval()  # set the model to evaluation mode
  losses = 0
  for idx, (src, tgt) in enumerate(val_iter):
    src = src.to(device)
    tgt = tgt.to(device)

    tgt_input = tgt[:-1, :]

    src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input)

    logits = model(src, tgt_input, src_mask, tgt_mask,
                              src_padding_mask, tgt_padding_mask, src_padding_mask)
    tgt_out = tgt[1:,:]
    loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1))
    losses += loss.item()
  return losses / len(val_iter)  # return the average loss
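The tutorial never constructs a validation split, so evaluate is not called in the training loop below; a minimal sketch of how it could be wired up (the held-out slice here is purely illustrative and would ideally be excluded from train_data) is:

val_data = train_data[-1000:]  # hypothetical held-out slice, for illustration only
valid_iter = DataLoader(val_data, batch_size=BATCH_SIZE,
                        shuffle=False, collate_fn=generate_batch)
with torch.no_grad():  # no gradients are needed for evaluation
    val_loss = evaluate(transformer, valid_iter)
print(f"Validation loss: {val_loss:.3f}")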

Start training

Finally, after preparing the necessary classes and functions, we are ready to train our model. It goes without saying, but the time needed to finish training can vary greatly depending on many things, such as computing power, parameters, and the size of the dataset.

When I trained the model using the complete list of sentences from JParaCrawl which has around 5.9 million sentences for each language, it took around 5 hours per epoch using a single NVIDIA GeForce RTX 3070 GPU.

Here is the code:

for epoch in tqdm.tqdm(range(1, NUM_EPOCHS+1)):
  start_time = time.time()
  train_loss = train_epoch(transformer, train_iter, optimizer)  # run one epoch of training and get the training loss
  end_time = time.time()
  # print the epoch number, training loss, and time taken for this epoch
  print((f"Epoch: {epoch}, Train loss: {train_loss:.3f}, "
          f"Epoch time = {(end_time - start_time):.3f}s"))

Try translating a Japanese sentence using the trained model

First, we create the functions needed to translate a new sentence: get the Japanese sentence, tokenize it, convert it to tensors, run inference, and then decode the result back into a sentence in the target language.

def greedy_decode(model, src, src_mask, max_len, start_symbol):
    # move the input and mask to the device (e.g. the GPU)
    src = src.to(device)
    src_mask = src_mask.to(device)
    memory = model.encode(src, src_mask)  # encode the source sentence into memory
    # initialize the target sequence with the start symbol
    ys = torch.ones(1, 1).fill_(start_symbol).type(torch.long).to(device)
    for i in range(max_len-1):
        memory = memory.to(device)
        # memory mask of zeros with shape (current target length, memory length)
        memory_mask = torch.zeros(ys.shape[0], memory.shape[0]).to(device).type(torch.bool)
        # subsequent-word mask for the target generated so far
        tgt_mask = (generate_square_subsequent_mask(ys.size(0))
                                    .type(torch.bool)).to(device)
        # decode the current target prefix against the encoder memory
        out = model.decode(ys, memory, tgt_mask)
        out = out.transpose(0, 1)  # transpose so the batch dimension comes first
        prob = model.generator(out[:, -1])  # logits for the next token
        _, next_word = torch.max(prob, dim = 1)  # pick the most probable next token
        next_word = next_word.item()
        # append the chosen token to the target sequence
        ys = torch.cat([ys,
                        torch.ones(1, 1).type_as(src.data).fill_(next_word)], dim=0)
        if next_word == EOS_IDX:
          break
    return ys

def translate(model, src, src_vocab, tgt_vocab, src_tokenizer):
    model.eval()
    # tokenize the source sentence, map to indices, and add BOS/EOS markers
    tokens = [BOS_IDX] + [src_vocab.stoi[tok] for tok in src_tokenizer.encode(src, out_type=str)] + [EOS_IDX]
    num_tokens = len(tokens)
    # reshape the tokens into the (num_tokens, 1) tensor expected by the model
    src = (torch.LongTensor(tokens).reshape(num_tokens, 1))
    src_mask = (torch.zeros(num_tokens, num_tokens)).type(torch.bool)  # source mask of all False (no masking)
    # decode greedily, then map the indices back to tokens and strip BOS/EOS
    tgt_tokens = greedy_decode(model, src, src_mask, max_len=num_tokens + 5, start_symbol=BOS_IDX).flatten()
    return " ".join([tgt_vocab.itos[tok] for tok in tgt_tokens]).replace("<bos>", "").replace("<eos>", "")

Then, we can just call the translate function and pass the required parameters.

translate(transformer, "HSコード 8515 はんだ付け用、ろう付け用又は溶接用の機器(電気式(電気加熱ガス式を含む。)", ja_vocab, en_vocab, ja_tokenizer)

' ▁H S ▁ 代 码 ▁85 15 ▁ 焊 接 设 备 ( 包 括 电 气 加 热 ) 。 '
trainen.pop(5)
'Chinese HS Code Harmonized Code System < HS编码 8515 : 电气(包括电热气体)、激光、其他光、光子束、超声波、电子束、磁脉冲或等离子弧焊接机器及装置,不论是否 HS Code List (Harmonized System Code) for US, UK, EU, China, India, France, Japan, Russia, Germany, Korea, Canada ...'
trainja.pop(5)
'Japanese HS Code Harmonized Code System < HSコード 8515 はんだ付け用、ろう付け用又は溶接用の機器(電気式(電気加熱ガス式を含む。)、レーザーその他の光子ビーム式、超音波式、電子ビーム式、 HS Code List (Harmonized System Code) for US, UK, EU, China, India, France, Japan, Russia, Germany, Korea, Canada ...'

Save the Vocab objects and trained model

Finally, after the training has finished, we will save the Vocab objects (en_vocab and ja_vocab) first, using Pickle.

import pickle
# open a file where you want to store the data
file = open('en_vocab.pkl', 'wb')
# dump the vocabulary object into that file
pickle.dump(en_vocab, file)
file.close()
# repeat for the Japanese vocabulary
file = open('ja_vocab.pkl', 'wb')
pickle.dump(ja_vocab, file)
file.close()

Lastly, we can also save the model for later use with PyTorch's save and load functions. Generally, there are two ways to save the model, depending on what we want to use it for later. The first one is for inference only: we can load the model later and use it to translate new Japanese sentences.

# save model for inference
torch.save(transformer.state_dict(), 'inference_model')
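To use the saved weights later for inference only, a minimal loading sketch (assuming the same class definitions and hyperparameters from above are available, so that SRC_VOCAB_SIZE and TGT_VOCAB_SIZE match the saved model) could be:

model = Seq2SeqTransformer(NUM_ENCODER_LAYERS, NUM_DECODER_LAYERS,
                           EMB_SIZE, SRC_VOCAB_SIZE, TGT_VOCAB_SIZE,
                           FFN_HID_DIM).to(device)
model.load_state_dict(torch.load('inference_model', map_location=device))
model.eval()  # the model can now be passed to the translate() function defined above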

The second one is for inference too, but also for when we want to load the model later and resume training.

# save model + checkpoint to resume training later
torch.save({
  'epoch': NUM_EPOCHS,
  'model_state_dict': transformer.state_dict(),
  'optimizer_state_dict': optimizer.state_dict(),
  'loss': train_loss,
  }, 'model_checkpoint.tar')  # write the checkpoint file
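A corresponding sketch for resuming training from this checkpoint, assuming the transformer and optimizer have been re-created exactly as above:

checkpoint = torch.load('model_checkpoint.tar', map_location=device)
transformer.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
start_epoch = checkpoint['epoch'] + 1  # continue counting from the saved epoch
last_loss = checkpoint['loss']
transformer.train()  # switch back to training mode before continuing the loop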

Conclusion

That’s it!
