[Transformer Application] Speaker Identification

Task Description

Speaker identification is the task of determining a speaker's identity from a speech signal. This is the fourth homework of Professor Hung-yi Lee's deep learning course, under the topic "Sequence as Input". The goal of the assignment is to learn to use the Transformer.

Course link: 2022 - 作业说明HW4 (bilibili)


Dataset Description

Dataset Overview

The dataset is VoxCeleb2. Download: ML2022Spring-hw4 | Kaggle

It contains:

  • 56666 preprocessed, labeled voice features, used as the training set
  • 4000 preprocessed, unlabeled voice features, used as the test set
  • 600 class labels, each representing one speaker

The audio signals have already been preprocessed into spectral features (mel-spectrograms).

Dataset Format

It contains four kinds of files (see the loading sketch after this list):

  1. metadata.json: metadata for the training data, including the feature dimension, the speaker ids, and the path and length of each feature file

[Figure: example contents of metadata.json]

  2. testdata.json: information about the test set, including the feature dimension of the voice signals and the path and length of each voice feature

[Figure: example contents of testdata.json]

  3. mapping.json: the mapping between speaker names and speaker ids
  4. uttr-{random string}.pt: the files storing the voice features themselves
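As a quick orientation, the files can be inspected directly. This is a minimal sketch: the data directory matches the config used later ("../../Dataset"), and field names such as "n_mels", "feature_path", and "mel_len" are assumptions based on the metadata structure shown above.

import json
import torch
from pathlib import Path

data_dir = Path("../../Dataset")  # assumed location of the unpacked Kaggle data

# Inspect the training metadata.
metadata = json.load((data_dir / "metadata.json").open())
print(metadata["n_mels"])                   # feature dimension, e.g. 40 (assumed key name)
speakers = metadata["speakers"]
speaker_id = next(iter(speakers))
utt = speakers[speaker_id][0]
print(utt["feature_path"], utt["mel_len"])  # path and length of one utterance

# Load the corresponding mel-spectrogram tensor.
mel = torch.load(data_dir / utt["feature_path"])
print(mel.shape)                            # (length, 40)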

Model Architecture

[Figure: model architecture]

The key component of the model is the encoder, which extracts feature information; it is implemented with the Transformer architecture.


Code Implementation

The code below comes from the Simple Baseline officially provided by the course; the improvements follow 李宏毅2022机器学习HW4解析 by 机器学习手艺人 (CSDN). This article modifies and annotates that code.

Simple Baseline

Simple Baseline is the initial code provided by the course; it uses a Transformer as the encoder. The implementation proceeds in the following steps:

1. Fix the random seed
import numpy as np
import torch
import random

def set_seed(seed):
    np.random.seed(seed)
    random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

set_seed(87)

2. Define the Dataset class
import os
import json
import torch
import random
from pathlib import Path
from torch.utils.data import Dataset
from torch.nn.utils.rnn import pad_sequence
 
 
class myDataset(Dataset):
	def __init__(self, data_dir, segment_len=128):
		self.data_dir = data_dir
		self.segment_len = segment_len
	
		# Load the mapping from speaker name to speaker id.
		mapping_path = Path(data_dir) / "mapping.json"
		mapping = json.load(mapping_path.open())
		self.speaker2id = mapping["speaker2id"]
	
		# Load metadata of training data.
		metadata_path = Path(data_dir) / "metadata.json"
		metadata = json.load(open(metadata_path))["speakers"]
	
		# Get the total number of speakers.
		self.speaker_num = len(metadata.keys())
		self.data = []
		for speaker in metadata.keys():
			for utterances in metadata[speaker]:
				self.data.append([utterances["feature_path"], self.speaker2id[speaker]])  # each entry is [feature path, speaker id]
 
	def __len__(self):
		return len(self.data)
 
	def __getitem__(self, index):
		feat_path, speaker = self.data[index]
		# Load preprocessed mel-spectrogram.
		mel = torch.load(os.path.join(self.data_dir, feat_path))

		# Segment the mel-spectrogram into "segment_len" frames.
		# Utterances have different lengths; to make them easy to batch, take a random segment of segment_len frames from each.
		if len(mel) > self.segment_len:
			# Randomly get the starting point of the segment.
			start = random.randint(0, len(mel) - self.segment_len)
			# Get a segment with "segment_len" frames.
			mel = torch.FloatTensor(mel[start:start+self.segment_len])
		else:
			mel = torch.FloatTensor(mel)
		# Turn the speaker id into long for computing loss later.
		speaker = torch.FloatTensor([speaker]).long()
		return mel, speaker
 
	def get_speaker_number(self):
		return self.speaker_num
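A single sample can be inspected like this (a sketch assuming the Kaggle data is unpacked in "../../Dataset"; utterances longer than 128 frames are cropped, so most mels come back with shape (128, 40)):

dataset = myDataset("../../Dataset")
mel, speaker = dataset[0]
print(len(dataset), dataset.get_speaker_number())   # expected: 56666 600
print(mel.shape, speaker.shape)                     # e.g. torch.Size([128, 40]) torch.Size([1])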

3. Define the dataloader
import torch
from torch.utils.data import DataLoader, random_split
from torch.nn.utils.rnn import pad_sequence


def collate_batch(batch):
	"""Collate a batch of data."""
	# Each item of the batch is [mel features, speaker id]; zip(*batch)
	# unpacks it into a tuple of mels and a tuple of speaker ids.
	mel, speaker = zip(*batch)
	# We train the model batch by batch, so features within a batch must be
	# padded to the same length.
	mel = pad_sequence(mel, batch_first=True, padding_value=-20)    # pad with log 10^(-20), a very small value
	# mel: (batch size, length, 40)
	return mel, torch.FloatTensor(speaker).long()


def get_dataloader(data_dir, batch_size, n_workers):
	"""Generate dataloader"""
	dataset = myDataset(data_dir)
	speaker_num = dataset.get_speaker_number()
	# Split the dataset into training and validation sets with a 9:1 ratio.
	trainlen = int(0.9 * len(dataset))
	lengths = [trainlen, len(dataset) - trainlen]
	trainset, validset = random_split(dataset, lengths)

	train_loader = DataLoader(
		trainset,
		batch_size=batch_size,
		shuffle=True,
		drop_last=True,
		num_workers=n_workers,
		pin_memory=True,  # keep loaded batches in CUDA pinned memory for faster transfer to the GPU
		collate_fn=collate_batch,  # called on each batch while loading, to pad and stack the samples
	)
	valid_loader = DataLoader(
		validset,
		batch_size=batch_size,
		num_workers=n_workers,
		drop_last=True,
		pin_memory=True,
		collate_fn=collate_batch,
	)

	return train_loader, valid_loader, speaker_num
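A smoke test of the loaders (a sketch; again assuming the data sits in "../../Dataset"):

train_loader, valid_loader, speaker_num = get_dataloader("../../Dataset", batch_size=32, n_workers=8)
mels, speakers = next(iter(train_loader))
print(speaker_num)                   # 600
print(mels.shape, speakers.shape)    # e.g. torch.Size([32, 128, 40]) torch.Size([32])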

4. Define the model
import torch
import torch.nn as nn
import torch.nn.functional as F


class Classifier(nn.Module):
	def __init__(self, d_model=80, n_spks=600, dropout=0.1):
		super().__init__()
		# Project the input features into d_model dimensions.
		# Each frame of the mel-spectrogram has 40 feature dimensions.
		self.prenet = nn.Linear(40, d_model)
		# TODO:
		#   Change Transformer to Conformer.
		#   https://arxiv.org/abs/2005.08100
		self.encoder_layer = nn.TransformerEncoderLayer(
			d_model=d_model, dim_feedforward=256, nhead=2
		)
		self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=2)

		# Project the feature dimension from d_model down to the number of speakers.
		# The final output is a 600-dim vector, one entry per speaker.
		self.pred_layer = nn.Sequential(
			nn.Linear(d_model, d_model),
			nn.ReLU(),
			nn.Linear(d_model, n_spks),
		)

	def forward(self, mels):
		"""
		args:
			mels: (batch size, length, 40)
		return:
			out: (batch size, n_spks)
		"""
		# out: (batch size, length, d_model)
		out = self.prenet(mels)
		# out: (length, batch size, d_model)
		out = out.permute(1, 0, 2)
		# The encoder layer expects features in the shape of (length, batch size, d_model).
		out = self.encoder(out)
		# out: (batch size, length, d_model)
		out = out.transpose(0, 1)
		# Mean pooling: average each feature dimension over all frames.
		# stats: (batch size, d_model)
		stats = out.mean(dim=1)

		# out: (batch, n_spks)
		out = self.pred_layer(stats)
		return out
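Before training, a quick shape check of the classifier can help (a minimal sketch; the batch size and segment length are arbitrary):

model = Classifier()
dummy = torch.randn(4, 128, 40)   # (batch size, length, 40)
print(model(dummy).shape)         # expected: torch.Size([4, 600])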

5. Set up the learning rate schedule

The learning rate starts at 0 and rises linearly during the warmup phase; after warmup it decays to 0 along a cosine curve.

import math

import torch
from torch.optim import Optimizer
from torch.optim.lr_scheduler import LambdaLR


def get_cosine_schedule_with_warmup(
	optimizer: Optimizer,
	num_warmup_steps: int,
	num_training_steps: int, 
	num_cycles: float = 0.5,
	last_epoch: int = -1,
):
	"""
	Create a schedule with a learning rate that decreases following the values of the cosine function between the
	initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the
	initial lr set in the optimizer.

	Args:
		optimizer (:class:`~torch.optim.Optimizer`):
		The optimizer for which to schedule the learning rate.
		num_warmup_steps (:obj:`int`):
		The number of steps for the warmup phase.
		num_training_steps (:obj:`int`):
		The total number of training steps.
		num_cycles (:obj:`float`, `optional`, defaults to 0.5):
		The number of waves in the cosine schedule (the default is to just decrease from the max value to 0
		following a half-cosine).
		last_epoch (:obj:`int`, `optional`, defaults to -1):
		The index of the last epoch when resuming training.

	Return:
		:obj:`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
	"""
	def lr_lambda(current_step):
		# Warmup
		if current_step < num_warmup_steps:
			return float(current_step) / float(max(1, num_warmup_steps))
		# Cosine decay after warmup
		progress = float(current_step - num_warmup_steps) / float(
			max(1, num_training_steps - num_warmup_steps)
		)
		return max(
			0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))
		)

	return LambdaLR(optimizer, lr_lambda, last_epoch)
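Stepping the scheduler a few times shows the linear warmup followed by the cosine decay (a minimal sketch; the step counts and throwaway model are made up):

import torch
from torch.optim import AdamW

dummy = torch.nn.Linear(4, 4)  # throwaway model just to own some parameters
optimizer = AdamW(dummy.parameters(), lr=1e-3)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=5, num_training_steps=20)

for step in range(20):
    optimizer.step()   # the scheduler is stepped after the optimizer
    scheduler.step()
    print(step, scheduler.get_last_lr()[0])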

6. Define the forward-pass function
import torch


def model_fn(batch, model, criterion, device):
	"""Forward a batch through the model."""

	mels, labels = batch
	mels = mels.to(device)
	labels = labels.to(device)

	# Get the model predictions.
	outs = model(mels)

	# Compute the loss.
	loss = criterion(outs, labels)

	# Get the speaker id with highest probability.
	preds = outs.argmax(1)
	# Compute accuracy.
	accuracy = torch.mean((preds == labels).float())

	return loss, accuracy

7. Define the validation loop
from tqdm import tqdm
import torch


def valid(dataloader, model, criterion, device): 
	"""Validate on validation set."""

	model.eval()
	running_loss = 0.0
	running_accuracy = 0.0
	pbar = tqdm(total=len(dataloader.dataset), ncols=0, desc="Valid", unit=" uttr")

	for i, batch in enumerate(dataloader):
		with torch.no_grad():
			loss, accuracy = model_fn(batch, model, criterion, device)
			running_loss += loss.item()
			running_accuracy += accuracy.item()

		pbar.update(dataloader.batch_size)
		pbar.set_postfix(
			loss=f"{running_loss / (i+1):.2f}",
			accuracy=f"{running_accuracy / (i+1):.2f}",
		)

	pbar.close()
	model.train()

	return running_accuracy / len(dataloader)

8. Define the main training function
from tqdm import tqdm

import torch
import torch.nn as nn
from torch.optim import AdamW
from torch.utils.data import DataLoader, random_split


def parse_args():
	"""arguments"""
	config = {
		"data_dir": "../../Dataset",
		"save_path": "model.ckpt",
		"batch_size": 32,
		"n_workers": 8,
		"valid_steps": 2000,
		"warmup_steps": 1000,
		"save_steps": 10000,
		"total_steps": 70000,
	}

	return config


def main(
	data_dir,
	save_path,
	batch_size,
	n_workers,
	valid_steps,
	warmup_steps,
	total_steps,
	save_steps,
):
	"""Main function."""
	device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
	print(f"[Info]: Use {device} now!")

	train_loader, valid_loader, speaker_num = get_dataloader(data_dir, batch_size, n_workers)
	train_iterator = iter(train_loader)
	print(f"[Info]: Finish loading data!",flush = True)

	model = Classifier(n_spks=speaker_num).to(device)
	# Loss function: cross-entropy for the 600-way speaker classification.
	criterion = nn.CrossEntropyLoss()
	# Optimizer.
	optimizer = AdamW(model.parameters(), lr=1e-3)
	# Learning rate schedule: linear warmup followed by cosine decay.
	scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps)
	print(f"[Info]: Finish creating model!",flush = True)

	best_accuracy = -1.0
	best_state_dict = None

	# Progress bar to visualize training.
	pbar = tqdm(total=valid_steps, ncols=0, desc="Train", unit=" step")

	for step in range(total_steps):
		# Get data
		try:
			batch = next(train_iterator)
		except StopIteration:
			train_iterator = iter(train_loader)
			batch = next(train_iterator)

		loss, accuracy = model_fn(batch, model, criterion, device)
		batch_loss = loss.item()
		batch_accuracy = accuracy.item()

		# Update model
		loss.backward()
		optimizer.step()
		scheduler.step()
		optimizer.zero_grad()

		# Log
		pbar.update()
		pbar.set_postfix(
			loss=f"{batch_loss:.2f}",
			accuracy=f"{batch_accuracy:.2f}",
			step=step + 1,
		)

		# Do validation
		if (step + 1) % valid_steps == 0:
			pbar.close()

			valid_accuracy = valid(valid_loader, model, criterion, device)

			# keep the best model
			if valid_accuracy > best_accuracy:
				best_accuracy = valid_accuracy
				best_state_dict = model.state_dict()

			pbar = tqdm(total=valid_steps, ncols=0, desc="Train", unit=" step")

		# Save the best model so far.
		if (step + 1) % save_steps == 0 and best_state_dict is not None:
			torch.save(best_state_dict, save_path)
			pbar.write(f"Step {step + 1}, best model saved. (accuracy={best_accuracy:.4f})")

	pbar.close()


if __name__ == "__main__":
	main(**parse_args())

9. Define the inference dataset
import os
import json
import torch
from pathlib import Path
from torch.utils.data import Dataset


class InferenceDataset(Dataset):
	def __init__(self, data_dir):
		testdata_path = Path(data_dir) / "testdata.json"
		metadata = json.load(testdata_path.open())
		self.data_dir = data_dir
		self.data = metadata["utterances"]

	def __len__(self):
		return len(self.data)

	def __getitem__(self, index):
		utterance = self.data[index]
		feat_path = utterance["feature_path"]
		mel = torch.load(os.path.join(self.data_dir, feat_path))

		return feat_path, mel


def inference_collate_batch(batch):
	"""Collate a batch of data."""
	feat_paths, mels = zip(*batch)

	return feat_paths, torch.stack(mels)
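Note that inference uses the whole, un-segmented utterances, whose lengths differ from one another; torch.stack in inference_collate_batch therefore only works because the DataLoader below uses batch_size=1.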

10. Define the inference function
import json
import csv
from pathlib import Path
from tqdm.notebook import tqdm

import torch
from torch.utils.data import DataLoader

def parse_args():
	"""arguments"""  
	config = {
		"data_dir": "../../Dataset",
		"model_path": "model.ckpt",
		"output_path": "output.csv",
	}

	return config


def main(
	data_dir,
	model_path,
	output_path,
):
	"""Main function."""
	device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
	print(f"[Info]: Use {device} now!")

	mapping_path = Path(data_dir) / "mapping.json"
	mapping = json.load(mapping_path.open())

	dataset = InferenceDataset(data_dir)
	dataloader = DataLoader(
		dataset,
		batch_size=1,
		shuffle=False,
		drop_last=False,
		num_workers=8,
		collate_fn=inference_collate_batch,
	)
	print(f"[Info]: Finish loading data!",flush = True)

	speaker_num = len(mapping["id2speaker"])
	model = Classifier(n_spks=speaker_num).to(device)
	model.load_state_dict(torch.load(model_path))
	model.eval()
	print(f"[Info]: Finish creating model!",flush = True)

	results = [["Id", "Category"]]
	for feat_paths, mels in tqdm(dataloader):
		with torch.no_grad():
			mels = mels.to(device)
			outs = model(mels)
			preds = outs.argmax(1).cpu().numpy()
			for feat_path, pred in zip(feat_paths, preds):
				results.append([feat_path, mapping["id2speaker"][str(pred)]])

	with open(output_path, 'w', newline='') as csvfile:
		writer = csv.writer(csvfile)
		writer.writerows(results)


if __name__ == "__main__":
	main(**parse_args())

Median Baseline

Improvement Ideas

The Median Baseline mainly tunes some of the Transformer hyperparameters.

First, adjust d_model. The course code sets d_model to 80, i.e. the 40-dim input features are projected to 80 dimensions before further processing, while the model must predict among 600 classes; the gap between the two is large, so d_model should be increased. Neither too large nor too small works well; here it is set to 224.

Second, increase the number of attention heads. It was originally 2; raising it moderately lets the model capture attention patterns from more perspectives. Here it is set to 4.

The number of TransformerEncoderLayers can also be increased, i.e. more rounds of self-attention, feed-forward, and layer normalization are stacked; here it is raised to 3.

Finally, BatchNorm can be added around the final linear layers, which speeds up training, improves generalization, and stabilizes the network.


Code Implementation

The changes are mainly in the model definition:

import torch
import torch.nn as nn
import torch.nn.functional as F


class Classifier(nn.Module):
	def __init__(self, d_model=224, n_spks=600, dropout=0.2):  # d_model increased from 80 to 224
		super().__init__()
		# Project the dimension of features from that of input into d_model.
		self.prenet = nn.Linear(40, d_model)
		# TODO:
		#   Change Transformer to Conformer.
		#   https://arxiv.org/abs/2005.08100
		self.encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, dim_feedforward=d_model*2, nhead=4, dropout=dropout)  # attention heads increased to 4
		self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=3)  # encoder layers increased to 3

		# Project the feature dimension from d_model down to the number of speakers.
		self.pred_layer = nn.Sequential(  # BatchNorm added before each linear layer
			nn.BatchNorm1d(d_model),
			nn.Linear(d_model, d_model),
			nn.ReLU(),
			nn.BatchNorm1d(d_model),
			nn.Linear(d_model, n_spks),
		)

	def forward(self, mels):
		"""
		args:
			mels: (batch size, length, 40)
		return:
			out: (batch size, n_spks)
		"""
		# out: (batch size, length, d_model)
		out = self.prenet(mels)
		# out: (length, batch size, d_model)
		out = out.permute(1, 0, 2)
		# The encoder layer expects features in the shape of (length, batch size, d_model).
		out = self.encoder(out)
		# out: (batch size, length, d_model)
		out = out.transpose(0, 1) 
		# mean pooling
		stats = out.mean(dim=1)

		# out: (batch, n_spks)
		out = self.pred_layer(stats)
		return out

Strong Baseline

Improvement Ideas

The course suggests using a Conformer, a Transformer variant, as the encoder.

Conformer paper: https://arxiv.org/pdf/2005.08100.pdf

Conformer implementation: lucidrains/conformer: Implementation of the convolutional module from the Conformer paper, for use in Transformers (github.com)

The Conformer integrates convolutional neural networks into the Transformer:

  • the Transformer is better at modeling content-based global interactions, i.e. global dependencies over the whole sequence;
  • the CNN is better at exploiting local features.

For speaker identification this is a good fit: recognizing a speaker depends more on vocal characteristics than on textual content. For example, for 不可能 ("impossible"), a southern Chinese speaker might pronounce it close to "bu ke leng"; the key distinguishing cue is the single syllable 能, which is local, while global information plays only a supporting role.

[Figure: Conformer block architecture]

Code Implementation

Again, the main change is in the model definition; here the ConformerBlock module from the conformer package is imported to build the encoder.

import torch
import torch.nn as nn
import torch.nn.functional as F
#!pip install conformer
from conformer import ConformerBlock

class Classifier(nn.Module):
	def __init__(self, d_model=224, n_spks=600, dropout=0.25):
		super().__init__()
		# Project the dimension of features from that of input into d_model.
		self.prenet = nn.Linear(40, d_model)
		# TODO:
		#   Change Transformer to Conformer.
		#   https://arxiv.org/abs/2005.08100
		#self.encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, dim_feedforward=d_model*2, nhead=2, dropout=dropout)
		#self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=3)
		self.encoder = ConformerBlock(
				dim = d_model,                # input/output feature dimension
				dim_head = 4,                 # dimension of each attention head
				heads = 4,                    # number of attention heads
				ff_mult = 4,                  # expansion factor of the feed-forward module's inner dimension
				conv_expansion_factor = 2,    # expansion factor of the convolution module
				conv_kernel_size = 20,        # kernel size of the convolution module
				attn_dropout = dropout,       # dropout rate of the self-attention module
				ff_dropout = dropout,         # dropout rate of the feed-forward module
				conv_dropout = dropout,       # dropout rate of the convolution module
		)


		# Project the feature dimension from d_model down to the number of speakers.
		self.pred_layer = nn.Sequential(
			nn.BatchNorm1d(d_model),
			nn.Linear(d_model, d_model),
			nn.ReLU(),
			nn.BatchNorm1d(d_model),
			nn.Linear(d_model, n_spks),
		)

	def forward(self, mels):
		"""
		args:
			mels: (batch size, length, 40)
		return:
			out: (batch size, n_spks)
		"""
		# out: (batch size, length, d_model)
		out = self.prenet(mels)
		# Unlike nn.TransformerEncoder, ConformerBlock expects batch-first input
		# of shape (batch size, length, d_model), so no permute is needed here.
		out = self.encoder(out)
		# mean pooling
		stats = out.mean(dim=1)

		# out: (batch, n_spks)
		out = self.pred_layer(stats)
		return out
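The same shape check as before still applies (a sketch; the conformer package must be installed via pip install conformer):

model = Classifier()
dummy = torch.randn(4, 128, 40)   # (batch size, length, 40)
print(model(dummy).shape)         # expected: torch.Size([4, 600])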

Boss Baseline

Improvement Ideas

  1. Use Self-Attentive Pooling. In the original code, the final output is computed after mean-pooling the encoder output over all frames:
# mean pooling
stats = out.mean(dim=1)

# out: (batch, n_spks)
out = self.pred_layer(stats)

Plain mean pooling, however, has a drawback, illustrated below:

[Figure: limitation of mean pooling]

Mean pooling reduces each bar (one feature pattern over time) to a single value. A bar that is very bright at two spots and dark elsewhere can average to nearly the same value as a bar that is moderately bright everywhere, so a different pooling method is needed to tell the two apart.

Self-Attentive Pooling addresses this by learning a set of parameters that weight and sum the frames instead of averaging them uniformly, which better reflects the importance of each position. The implementation is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentionPooling(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.W = nn.Linear(input_dim, 1)
    
    def forward(self, batch_rep):
        """
        input:
            batch_rep : size (N, T, H), N: batch size, T: sequence length, H: Hidden dimension
      
        attention_weight:
            att_w : size (N, T, 1)
    
        return:
            utter_rep: size (N, H)
        """
        att_w = F.softmax(self.W(batch_rep).squeeze(-1), dim=-1).unsqueeze(-1)
        utter_rep = torch.sum(batch_rep * att_w, dim=1)

        return utter_rep
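In the classifier, mean pooling would then be replaced: create self.pooling = SelfAttentionPooling(d_model) in __init__ and change stats = out.mean(dim=1) to stats = self.pooling(out) in forward. A quick standalone check of the module (a sketch with arbitrary shapes):

pooling = SelfAttentionPooling(input_dim=224)
encoder_out = torch.randn(4, 128, 224)   # (N, T, H): batch, frames, hidden dim
print(pooling(encoder_out).shape)        # torch.Size([4, 224])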
  2. Use AM-Softmax (Additive Margin Softmax), https://arxiv.org/pdf/1801.05599.pdf. AM-Softmax shrinks intra-class variation and enlarges inter-class differences, which widens the decision margins between classes and makes them easier to separate.

    In the traditional softmax loss, classification relies on the cosine similarity between a feature vector and each class vector. The weakness is at the decision boundary: feature vectors whose similarities to two classes differ only slightly still get split into different classes, so nearby classes easily interfere with each other.

    To address this, Additive Margin Softmax introduces a margin on the cosine similarity: inside the softmax, the cosine score of the target class is reduced by a fixed margin, forcing features to clear the decision boundary by at least that margin. With this margin, AM-Softmax separates feature vectors better and improves classification accuracy and robustness.

    This is relevant for this task because preprocessing segments each utterance, which makes the speech signals more similar to one another.

    [Figure: softmax vs. AM-Softmax decision boundaries]

    The implementation is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F


class AMSoftmax(nn.Module):
    def __init__(self):
        super(AMSoftmax, self).__init__()

    def forward(self, input, target, scale=5.0, margin=0.35):
        # input is expected to hold cosine similarities, size=(B, Classnum)
        cos_theta = input
        target = target.view(-1, 1)  # size=(B, 1)

        # Build a one-hot mask marking each sample's target class.
        index = torch.zeros_like(cos_theta)  # size=(B, Classnum)
        index.scatter_(1, target, 1)
        index = index.bool()

        # Subtract the margin from the target-class cosine, then scale.
        output = cos_theta * 1.0  # size=(B, Classnum)
        output[index] -= margin
        output = output * scale

        # Cross-entropy on the margin-adjusted, scaled cosines.
        logpt = F.log_softmax(output, dim=-1)
        logpt = logpt.gather(1, target)
        logpt = logpt.view(-1)

        loss = -1 * logpt
        loss = loss.mean()

        return loss
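AMSoftmax assumes its input already holds cosine similarities, so the classifier's final linear layer has to be replaced by a head that L2-normalizes both the utterance embedding and the class weights. This is a minimal sketch of one way to do it; the CosineHead name and its normalization scheme are assumptions, not part of the original code:

import torch
import torch.nn as nn
import torch.nn.functional as F


class CosineHead(nn.Module):
    # Hypothetical head: cosine similarity between embeddings and class weights.
    def __init__(self, d_model=224, n_spks=600):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_spks, d_model))

    def forward(self, stats):
        # Normalizing both sides makes each dot product a cosine in [-1, 1].
        return F.normalize(stats, dim=1) @ F.normalize(self.weight, dim=1).t()

In training, criterion = nn.CrossEntropyLoss() would then be replaced by criterion = AMSoftmax(), fed with the cosine scores from this head together with the speaker labels.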