This post walks through training the emotional speech synthesis project (emotional-vits) on your own dataset.
innnky/emotional-vits: emotion-controllable speech synthesis without emotion annotations, based on VITS (github.com)
0. Environment Setup
Since I reused the virtual environment from my earlier vits setup, some steps here may be incomplete.
git clone https://github.com/innnky/emotional-vits
cd emotional-vits
pip install -r requirements.txt
# MAS aligns audio and text: Cython-version Monotonic Alignment Search
cd monotonic_align
python setup.py build_ext --inplace
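To build intuition for what the compiled extension computes, here is a toy, pure-Python sketch of monotonic alignment search. It is not the repo's Cython implementation; the function name and score matrix are illustrative. Each audio frame is assigned one text token, token indices never decrease, and the path must start at the first token and end at the last:

```python
import math

def maximum_path_toy(value):
    """Toy MAS over a (num_tokens, num_frames) score matrix.

    Returns, for each frame, the index of the token it is aligned to.
    Token indices are non-decreasing along the frame axis.
    """
    T, S = len(value), len(value[0])
    neg = -math.inf
    # dp[t][s]: best cumulative score of a path ending at token t, frame s
    dp = [[neg] * S for _ in range(T)]
    dp[0][0] = value[0][0]
    for s in range(1, S):
        for t in range(T):
            stay = dp[t][s - 1]                       # same token as previous frame
            step = dp[t - 1][s - 1] if t > 0 else neg  # advance to next token
            best = max(stay, step)
            if best > neg:
                dp[t][s] = best + value[t][s]
    # Backtrack from the last token at the last frame
    path = [0] * S
    t = T - 1
    for s in range(S - 1, -1, -1):
        path[s] = t
        if s > 0 and t > 0 and dp[t - 1][s - 1] >= dp[t][s - 1]:
            t -= 1
    return path

# 3 tokens, 4 frames: high scores mark the "true" alignment
scores = [
    [5, 4, 0, 0],
    [0, 1, 5, 1],
    [0, 0, 1, 6],
]
print(maximum_path_toy(scores))  # → [0, 0, 1, 2]
```

The real implementation vectorizes this dynamic program in Cython, which is why the build step above is required before training.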
1. Data Preprocessing
# Process the dataset
python preprocess.py --text_index 2 --filelists /jf-training-home/src/emotional-vits/filelists/bea_train.txt /jf-training-home/src/emotional-vits/filelists/val.txt --text_cleaners korean_cleaners
This generates the cleaned text filelists.
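For reference, `--text_index 2` tells `preprocess.py` which `|`-separated field in a filelist line holds the transcript. The line below is a hypothetical example (path and speaker id are made up); the layout assumed is `wav_path|speaker_id|text`:

```python
# Hypothetical filelist line: with --text_index 2, the third field
# (index 2 after splitting on "|") is passed to the text cleaner.
line = "dataset/bae/0001.wav|0|안녕하세요."
fields = line.strip().split("|")
print(fields[2])  # → 안녕하세요.
```

Since the command above uses `korean_cleaners`, the transcript field is expected to be Korean text.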
Resample the audio to 16000 Hz:
import os
import time

import librosa
import soundfile as sf
import tqdm

if __name__ == '__main__':
    audioExt = 'WAV'
    input_sample = 22050   # source sampling rate
    output_sample = 16000  # target sampling rate
    audioDirectory = ['/jf-training-home/src/emotional-vits/dataset/bae_before']
    outputDirectory = ['/jf-training-home/src/emotional-vits/dataset/bae']

    start_time = time.time()
    for i, dire in enumerate(audioDirectory):
        # Collect every audio file under the input directory
        clean_speech_paths = librosa.util.find_files(dire, ext=audioExt)
        os.makedirs(outputDirectory[i], exist_ok=True)
        for path in tqdm.tqdm(clean_speech_paths, desc=dire):
            # librosa resamples to output_sample while loading
            audio, _ = librosa.load(path, sr=output_sample)
            outPath = os.path.join(outputDirectory[i], os.path.basename(path))
            sf.write(outPath, audio, output_sample)
    print('finished in %.1f s' % (time.time() - start_time))