[Andrew Ng Deep Learning Programming Assignment] 5.3 Sequence Models and Attention Mechanisms: Machine Translation and Trigger Word Detection

Reference blog: Machine Translation and Trigger Word Detection

This is the last programming assignment. A lot of it went over my head; since this is not my area of expertise, I will come back and work out the unclear parts when I get the chance.

Machine Translation

main.py

from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.models import Model
from Deep_Learning.test5_3.nmt_utils import *


# 1.1.1 Dataset: load the dataset and print a few examples
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
print(dataset[:10])

"""
    dataset: a list of tuples (human-readable date, machine-readable date).
    human_vocab: a Python dictionary mapping every character used in the human-readable dates to an integer index.
    machine_vocab: a Python dictionary mapping every character used in the machine-readable dates to an integer index. These indices do not necessarily match those of human_vocab.
    inv_machine_vocab: the inverse of machine_vocab, mapping indices back to characters.
"""

Tx = 30     # assumed maximum length of a human-readable date; longer inputs are truncated
Ty = 10     # "YYYY-MM-DD" is 10 characters long
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)      # X.shape: (10000, 30)
print("Y.shape:", Y.shape)      # Y.shape: (10000, 10)
print("Xoh.shape:", Xoh.shape)  # Xoh.shape: (10000, 30, 37)
print("Yoh.shape:", Yoh.shape)  # Yoh.shape: (10000, 10, 11)

# Look at an example of a preprocessed training sample
index = 0
print("Source date:", dataset[index][0])    # Source date: 9 may 1998
print("Target date:", dataset[index][1])    # Target date: 1998-05-09
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])

# 1.2 Neural machine translation with attention
# Define the shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation="tanh")
densor2 = Dense(1, activation="relu")
activator = Activation(softmax, name='attention_weights')   # uses the custom softmax(axis=1) from nmt_utils
dotor = Dot(axes=1)
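# Note on the custom softmax (assumed to be provided by nmt_utils): it normalizes over axis=1, i.e. over
# the Tx input positions, so that for each decoding step the attention weights alphas sum to 1 across the
# 30 input characters. Keras' default softmax would normalize over the last axis, which has size 1 for
# the energies computed below, and would make every weight equal to 1.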

# Graded function
def one_step_attention(a, s_prev):
    """
    Performs one step of attention: returns the context vector computed as a dot product of the
    attention weights "alphas" and the Bi-LSTM hidden states "a".
    :param a:       - hidden states output by the Bi-LSTM, numpy array of shape (m, Tx, 2*n_a)
    :param s_prev:  - previous hidden state of the (post-attention) LSTM, numpy array of shape (m, n_s)
    :return: context - context vector, input of the next (post-attention) LSTM cell
    """

    # Use repeator to repeat s_prev to shape (m, Tx, n_s) so it can be concatenated with all the hidden states a
    s_prev = repeator(s_prev)

    # Use concatenator to concatenate a and s_prev along the last axis
    concat = concatenator([a, s_prev])

    # Pass concat through densor1, a small fully connected network, to compute the intermediate energy variable e
    e = densor1(concat)

    # Pass e through densor2, another small fully connected network, to compute the energies
    energies = densor2(e)

    # Pass energies through activator to compute the attention weights alphas
    alphas = activator(energies)

    # Use dotor on alphas and a to compute the context vector for the next (post-attention) LSTM cell
    context = dotor([alphas, a])

    return context
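
# Shape bookkeeping for one_step_attention (with n_a = 32, n_s = 64 as defined below):
#   s_prev: (m, n_s) is repeated to (m, Tx, n_s); concat: (m, Tx, n_s + 2*n_a);
#   e: (m, Tx, 10); energies and alphas: (m, Tx, 1);
#   context = Dot(axes=1)([alphas, a]): (m, 1, 2*n_a), the attention-weighted sum of the Bi-LSTM states.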

# Define global layers that share weights inside model()
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state=True)
output_layer = Dense(len(machine_vocab), activation=softmax)

# Graded function
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
    """
    :param Tx:      - length of the input sequence
    :param Ty:      - length of the output sequence
    :param n_a:     - hidden state size of the Bi-LSTM
    :param n_s:     - hidden state size of the post-attention LSTM
    :param human_vocab_size:    - size of the human_vocab dictionary
    :param machine_vocab_size:  - size of the machine_vocab dictionary
    :return: model      - Keras model instance
    """

    # Define the input of the model, shape (Tx, human_vocab_size)
    # Define s0 and c0, the initial hidden state and cell state of the decoder LSTM, shape (n_s,)
    X = Input(shape=(Tx, human_vocab_size))
    s0 = Input(shape=(n_s,), name='s0')
    c0 = Input(shape=(n_s,), name='c0')
    s = s0
    c = c0

    # Initialize an empty list of outputs
    outputs = []

    # 1. Define the pre-attention Bi-LSTM
    a = Bidirectional(LSTM(n_a, return_sequences=True))(X)

    # 2. Iterate for Ty steps
    for t in range(Ty):
        # 2.1 Run one step of the attention mechanism to get the context vector at step t
        context = one_step_attention(a, s)

        # 2.2 Apply the post-attention LSTM cell to the new context
        s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])

        # 2.3 Apply the dense output layer to the post-attention LSTM's hidden state
        out = output_layer(s)

        # 2.4 Append out to outputs
        outputs.append(out)

    # 3. Create the model instance taking the three inputs and returning the list of outputs
    model = Model(inputs=[X, s0, c0], outputs=outputs)

    return model
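
# The model built above maps [X of shape (m, Tx, human_vocab_size), s0 (m, n_s), c0 (m, n_s)] to a list
# of Ty = 10 outputs, each of shape (m, machine_vocab_size) = (m, 11): one softmax per output character.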

model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))

model.summary()

opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
# Yoh has shape (m, Ty, 11); swapping axes 0 and 1 and converting to a list yields Ty arrays of shape (m, 11), matching the model's list of Ty outputs
outputs = list(Yoh.swapaxes(0, 1))
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)

# Load the pretrained weights
model.load_weights('models1/model.h5')

EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
    s0 = np.zeros((1, n_s))
    c0 = np.zeros((1, n_s))
    source = string_to_int(example, Tx, human_vocab)
    source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source)))
    source = np.expand_dims(source, axis=0)
    prediction = model.predict([source, s0, c0])
    prediction = np.argmax(prediction, axis=-1)
    output = [inv_machine_vocab[int(i)] for i in prediction]

    print("source:", example)
    print("output:".join(output))

Output

Source after preprocessing (indices): [12  0 24 13 34  0  4 12 12 11 36 36 36 36 36 36 36 36 36 36 36 36 36 36
 36 36 36 36 36 36]
Target after preprocessing (indices): [ 2 10 10  9  0  1  6  0  1 10]

Source after preprocessing (one-hot): [[0. 0. 0. ... 0. 0. 0.]
 [1. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 1.]
 [0. 0. 0. ... 0. 0. 1.]
 [0. 0. 0. ... 0. 0. 1.]]
Target after preprocessing (one-hot): [[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
 [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
Model: "functional_1"
...
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
__________________________________________________________________________________________________
100/100 [==============================] - 5s 45ms/step - loss: 16.7616 - dense_2_loss: 1.2131 - dense_2_1_loss: 1.0301 - dense_2_2_loss: 1.7838 - dense_2_3_loss: 2.6970 - dense_2_4_loss: 0.7895 - dense_2_5_loss: 1.2747 - dense_2_6_loss: 2.6956 - dense_2_7_loss: 0.9524 - dense_2_8_loss: 1.7066 - dense_2_9_loss: 2.6189 - dense_2_accuracy: 0.5136 - dense_2_1_accuracy: 0.6962 - dense_2_2_accuracy: 0.3020 - dense_2_3_accuracy: 0.0872 - dense_2_4_accuracy: 0.9473 - dense_2_5_accuracy: 0.3513 - dense_2_6_accuracy: 0.0537 - dense_2_7_accuracy: 0.9264 - dense_2_8_accuracy: 0.2435 - dense_2_9_accuracy: 0.1067
source: 3 May 1979
output: 1979-05-33
source: 5 April 09
output: 2009-04-05
source: 21th of August 2016
output: 2016-08-20
source: Tue 10 Jul 2007
output: 2007-07-10
source: Saturday May 9 2018
output: 2018-05-09
source: March 3 2001
output: 2001-03-03
source: March 3rd 2001
output: 2001-03-03
source: 1 March 2001
output: 2001-03-01

Trigger Word Detection

main.py

"""
    What this script does:
        1. build a speech recognition project
        2. synthesize and process recordings to create the training/dev sets
        3. train a trigger word detection model and make predictions
"""
import numpy as np
import IPython
from Deep_Learning.test5_3.td_utils import *
import tensorflow as tf
tf.keras.layers.GRU.reset_after = False    # attempted workaround for a GRU weights-compatibility issue (this class-level assignment likely has no effect)

# 2.1 Data synthesis: creating a speech dataset
# Listen to the data
IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/1.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")

IPython.display.Audio("./audio_examples/example_train.wav")

x = graph_spectrogram("audio_examples/example_train.wav")
plt.show()

_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:, 0].shape)
print("Time steps in input after spectrogram", x.shape)

Tx = 5511   # number of time steps input to the model from the spectrogram
n_freq = 101    # number of frequencies input to the model at each spectrogram time step
Ty = 1375   # number of time steps in the model's output
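
# How these constants relate (illustrative): each 10 s clip holds 441,000 samples (44.1 kHz); the
# spectrogram turns them into Tx = 5511 time steps of n_freq = 101 frequencies, and the Conv1D layer
# in the model below (kernel size 15, stride 4) maps 5511 steps to (5511 - 15) // 4 + 1 = 1375 = Ty.
print((5511 - 15) // 4 + 1)     # 1375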

# Generating a single training example
# Load audio clips using pydub
activates, negatives, backgrounds = load_raw_audio()


print("background len: " + str(len(backgrounds[0])))    # 应该是10,000,因为它是一个10秒的剪辑
print("activate[0] len: " + str(len(activates[0])))     # 也许大约1000,因为 "activate" 音频剪辑通常大约1秒(但变化很大)
print("activate[1] len: " + str(len(activates[1])))     # 不同的 "activate" 剪辑可以具有不同的长度

def get_random_time_segment(segment_ms):
    """
    Gets a random time segment of duration segment_ms within a 10,000 ms audio clip.
    :param segment_ms:      - duration of the audio segment in ms
    :return: segment_time   - tuple (segment_start, segment_end) in ms
    """
    segment_start = np.random.randint(low=0, high=10000-segment_ms)     # make sure the segment does not run past the 10 s background
    segment_end = segment_start + segment_ms - 1

    return (segment_start, segment_end)
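
# For example (hypothetical values), get_random_time_segment(1000) could return (2440, 3439):
# the start is drawn uniformly from [0, 9000), so a 1000 ms segment always ends before the 10 s mark.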


def is_overlapping(segment_time, previous_segments):
    """
    Checks whether the time of a segment overlaps with the times of existing segments.
    :param segment_time:        - tuple (segment_start, segment_end) for the new segment
    :param previous_segments:   - list of tuples (segment_start, segment_end) for the existing segments
    :return:                    - True if the segment overlaps any existing segment, False otherwise
    """

    segment_start, segment_end = segment_time

    # 1. Initialize the overlap flag to False
    overlap = False

    # 2. Loop over the start and end times in previous_segments;
    # if the new segment overlaps any of them, set overlap to True
    for previous_start, previous_end in previous_segments:
        if segment_start <= previous_end and segment_end >= previous_start:
            overlap = True

    return overlap

overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)])
overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)])
print("Overlap 1 = ", overlap1)     # False
print("Overlap 2 = ", overlap2)     # True


def insert_audio_clip(background, audio_clip, previous_segments):
    """
    Inserts a new audio clip over the background noise at a random time step, making sure the audio
    clip does not overlap with existing segments.
    :param background:      - 10 s background recording
    :param audio_clip:      - audio clip to insert/overlay
    :param previous_segments:   - times where audio segments have already been placed
    :return: new_background     - the updated background audio
             segment_time       - the (start, end) time at which the clip was inserted
    """

    # Get the duration of the audio clip in ms
    segment_ms = len(audio_clip)

    # 1. Use one of the helper functions to pick a random time segment
    # for the new audio clip
    segment_time = get_random_time_segment(segment_ms)

    # 2. Check whether the new segment_time overlaps one of the previous_segments;
    # if it does, keep picking new segments until there is no overlap
    while is_overlapping(segment_time, previous_segments):
        segment_time = get_random_time_segment(segment_ms)

    # 3. Add the new segment_time to the list of previous_segments
    previous_segments.append(segment_time)

    # 4. Overlay the audio clip on the background
    new_background = background.overlay(audio_clip, position=segment_time[0])

    return new_background, segment_time

np.random.seed(5)
audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)])
audio_clip.export("insert_test.wav", format="wav")
print("Segment Time:", segment_time)
IPython.display.Audio("insert_test.wav")

# The expected (reference) audio
IPython.display.Audio("audio_examples/insert_reference.wav")


def insert_ones(y, segment_end_ms):
    """
    Updates the label vector y: the labels of the 50 output steps strictly after the end of the segment are set to 1.
    Strictly speaking, the label at segment_end_y is 0 and the following 50 labels are 1.
    :param y:                   - numpy array of shape (1, Ty), labels of the training example
    :param segment_end_ms:      - end time of the segment in ms
    :return: y                  - updated labels
    """

    # Convert segment_end_ms from ms to output time steps (the Ty output steps span 10,000 ms)
    segment_end_y = int(segment_end_ms * Ty / 10000.0)

    # Set 1 at the correct indices of the label vector y
    for i in range(segment_end_y + 1, segment_end_y + 51):
        if i < Ty:
            y[0, i] = 1

    return y

arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0, :])
plt.show()
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])


def create_training_example(background, activates, negatives):
    """
    Creates a training example with a given background and given positive and negative clips.
    :param background:  - 10 s background recording
    :param activates:   - list of audio clips of the word "activate"
    :param negatives:   - list of audio clips of words that are not "activate"
    :return: x          - spectrogram of the training example
             y          - label at each time step of the spectrogram
    """

    # Set the random seed
    np.random.seed(18)

    # Make the background quieter
    background = background - 20

    # 1. Initialize y (the label vector) to zeros
    y = np.zeros((1, Ty))

    # 2. Initialize the list of segment times to an empty list
    previous_segments = []

    # Select 0-4 random "activate" clips from the full list of activate recordings
    number_of_activates = np.random.randint(0, 5)
    random_indices = np.random.randint(len(activates), size=number_of_activates)
    random_activates = [activates[i] for i in random_indices]

    # 3. Loop over the randomly selected "activate" clips and insert them into the background
    for random_activate in random_activates:
        # Insert the audio clip into the background
        background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
        # Retrieve segment_start and segment_end from segment_time
        segment_start, segment_end = segment_time
        # Insert the labels into y
        y = insert_ones(y, segment_end_ms=segment_end)

    # Select 0-2 random negative clips from the full list of negative recordings
    number_of_negatives = np.random.randint(0, 3)
    random_indices = np.random.randint(len(negatives), size=number_of_negatives)
    random_negatives = [negatives[i] for i in random_indices]

    # 4. Loop over the randomly selected negative clips and insert them into the background
    for random_negative in random_negatives:
        # Insert the audio clip into the background
        background, _ = insert_audio_clip(background, random_negative, previous_segments)

    # Normalize the volume of the audio clip
    background = match_target_amplitude(background, -20.0)

    # Export the new training example
    file_handle = background.export("train" + ".wav", format="wav")
    print("File (train.wav) saved in your directory")

    # Get and plot the spectrogram of the new recording (background with the positives and negatives overlaid)
    x = graph_spectrogram("train.wav")
    plt.show()

    return x, y

x, y = create_training_example(backgrounds[0], activates, negatives)

IPython.display.Audio("train.wav")

# Plot the labels associated with the generated training example
plt.plot(y[0])
plt.show()

# Load the preprocessed training examples
X = np.load("./XY_train/X.npy")
Y = np.load("./XY_train/Y.npy")

# Load the preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")

from keras.models import Model, load_model
from keras.layers import Dense, Activation, Dropout, Input, TimeDistributed, Conv1D
from keras.layers import GRU, BatchNormalization
from keras.optimizers import Adam


def model(input_shape):
    """
    Creates the model's graph in Keras.
    :param input_shape:     - shape of the model's input data (using Keras conventions)
    :return: model          - Keras model instance
    """

    X_input = Input(shape=input_shape)

    # 1. Convolutional layer
    X = Conv1D(196, 15, strides=4)(X_input)
    X = BatchNormalization()(X)
    X = Activation('relu')(X)
    X = Dropout(0.8)(X)

    # 2. First GRU layer
    X = GRU(units=128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)

    # 3. Second GRU layer
    X = GRU(units=128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)
    X = Dropout(0.8)(X)

    # 4. Time-distributed dense layer
    X = TimeDistributed(Dense(1, activation="sigmoid"))(X)

    model = Model(inputs=X_input, outputs=X)

    return model

model = model(input_shape=(Tx, n_freq))
model.summary()
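
# A quick check against the summary printed above: the Conv1D layer has 15 * 101 * 196 + 196 = 297,136
# parameters (kernel size x input channels x filters, plus one bias per filter), and its output shape
# (None, 1375, 196) shows how the 5511 spectrogram steps are reduced to the Ty = 1375 output steps.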

# Train the model. Loading the pretrained model raised an error, so the load is commented out.
# model = load_model('./models2/tr_model.h5')
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(X, Y, batch_size=5, epochs=1)

# Evaluate the model
loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev set accuracy = ", acc)

# Making predictions
def detect_triggerword(filename):
    plt.subplot(2, 1, 1)

    x = graph_spectrogram(filename)
    # The spectrogram has shape (freqs, Tx); the model expects (Tx, freqs)
    x = x.swapaxes(0, 1)
    x = np.expand_dims(x, axis=0)
    predictions = model.predict(x)

    plt.subplot(2, 1, 2)
    plt.plot(predictions[0, :, 0])
    plt.ylabel('probability')
    plt.show()

    return predictions

chime_file = "audio_examples/chime.wav"
def chime_on_activate(filename, predictions, threshold):
    audio_clip = AudioSegment.from_wav(filename)
    chime = AudioSegment.from_wav(chime_file)
    Ty = predictions.shape[1]

    # 1. Initialize the number of consecutive output steps to 0
    consecutive_timesteps = 0
    # 2. Loop over the output steps in y
    for i in range(Ty):
        # 3. Increment the consecutive output steps
        consecutive_timesteps += 1
        # 4. If the prediction is above the threshold and more than 75 consecutive output steps have passed
        if predictions[0, i, 0] > threshold and consecutive_timesteps > 75:
            # 5. Overlay the chime on the audio using pydub, mapping output step i back to a position in ms
            audio_clip = audio_clip.overlay(chime, position=((i / Ty) * audio_clip.duration_seconds) * 1000)
            # 6. Reset the consecutive output steps to 0
            consecutive_timesteps = 0

    audio_clip.export("chime_output.wav", format='wav')


# Test on the dev examples
IPython.display.Audio("./raw_data/dev/1.wav")
IPython.display.Audio("./raw_data/dev/2.wav")

filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")

filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")

Output

Time steps in audio recording before spectrogram (441000,)
Time steps in input after spectrogram (101, 5511)
background len: 10000
activate[0] len: 721
activate[1] len: 731
Overlap 1 =  False
Overlap 2 =  True
Segment Time: (2915, 3635)
sanity checks: 0.0 1.0 0.0
File (train.wav) saved in your directory
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 5511, 101)]       0         
_________________________________________________________________
conv1d (Conv1D)              (None, 1375, 196)         297136    
_________________________________________________________________
batch_normalization (BatchNo (None, 1375, 196)         784       
_________________________________________________________________
activation (Activation)      (None, 1375, 196)         0         
_________________________________________________________________
dropout (Dropout)            (None, 1375, 196)         0         
_________________________________________________________________
gru (GRU)                    (None, 1375, 128)         125184    
_________________________________________________________________
dropout_1 (Dropout)          (None, 1375, 128)         0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 1375, 128)         512       
_________________________________________________________________
gru_1 (GRU)                  (None, 1375, 128)         99072     
_________________________________________________________________
dropout_2 (Dropout)          (None, 1375, 128)         0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 1375, 128)         512       
_________________________________________________________________
dropout_3 (Dropout)          (None, 1375, 128)         0         
_________________________________________________________________
time_distributed (TimeDistri (None, 1375, 1)           129       
=================================================================
Total params: 523,329
Trainable params: 522,425
Non-trainable params: 904
_________________________________________________________________
6/6 [==============================] - 4s 703ms/step - loss: 1.3067 - accuracy: 0.4988
1/1 [==============================] - 0s 501us/step - loss: 0.7014 - accuracy: 0.1298
Dev set accuracy =  0.1298036426305771

[Figure] Spectrogram of audio_examples/example_train.wav

[Figure] Labels produced by insert_ones()

[Figure] Spectrogram of train.wav

[Figure] Model output for raw_data/dev/1.wav

[Figure] Model output for raw_data/dev/2.wav
