Installing and Using Whisper-Related Projects

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio. It is also a multitask model that can perform multilingual speech recognition, speech translation, and spoken language identification.

1. Installing and using openai/whisper

1.1. Installing openai/whisper

conda create -n whisper python=3.10
source activate whisper

# Install whisper
pip3 install -U openai-whisper -i https://pypi.tuna.tsinghua.edu.cn/simple
# Reinstall PyTorch with a build matching the CUDA version supported by the local driver
pip3 uninstall torch
pip3 install torch==2.5.1 --extra-index-url https://download.pytorch.org/whl/cu121 -i https://pypi.tuna.tsinghua.edu.cn/simple
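After installing, it is worth confirming that this PyTorch build actually sees the GPU. A minimal sanity-check sketch (not part of the original steps):

import torch

# Expect True, the wheel's CUDA version (12.1 here), and the GPU name
print(torch.cuda.is_available())
print(torch.version.cuda)
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))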

# whisper needs ffmpeg available on the system; the steps below build it without root
# First build the yasm dependency, then add its bin directory to PATH
wget http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
tar -zxvf yasm-1.3.0.tar.gz
cd yasm-1.3.0/
./configure --enable-shared --prefix=/media/yangdi/yasm-1.3.0
make -j20
make install
export PATH=/media/yangdi/yasm-1.3.0/bin:$PATH
# Build ffmpeg, then add its bin directory to PATH and its lib directory to LD_LIBRARY_PATH
wget https://johnvansickle.com/ffmpeg/release-source/ffmpeg-4.1.tar.xz
tar -xvf ffmpeg-4.1.tar.xz
cd ffmpeg-4.1/
./configure --enable-shared --prefix=/media/yangdi/ffmpeg-4.1
make -j20
make install
export PATH=/media/yangdi/ffmpeg-4.1/bin:$PATH
export LD_LIBRARY_PATH=/media/yangdi/ffmpeg-4.1/lib:$LD_LIBRARY_PATH
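whisper shells out to the ffmpeg binary when loading audio, so the PATH setting above is what actually matters at runtime. A quick check sketch, assuming the exports above are in effect:

import shutil
import subprocess

# whisper invokes ffmpeg as a subprocess when decoding audio, so it must be on PATH
assert shutil.which("ffmpeg") is not None, "ffmpeg not found on PATH"
out = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True)
print(out.stdout.splitlines()[0])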

1.2. Using openai/whisper

import whisper
import time

model_start_time = time.time()
# device selects a specific GPU card; defaults to card 0
model = whisper.load_model("turbo", device="cuda:0")
model_end_time = time.time()

print("load model {:.4f}s".format(model_end_time - model_start_time))

asr_start_time = time.time()
result = model.transcribe("test.wav")
asr_end_time = time.time()
print(result["text"])
print("use time {:.4f}".format(asr_end_time - asr_start_time))

2. Installing and using faster-whisper

2.1. Installing faster-whisper

faster-whisper requires CUDA 12 or later and cuDNN 9 or later; here we install CUDA 12.1 and cuDNN 9.1.
cuDNN download page: https://developer.nvidia.com/cudnn-archive

# Install cuDNN 9.1: download, unpack, and point LD_LIBRARY_PATH at its lib directory
wget https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-9.1.1.17_cuda12-archive.tar.xz
tar -xvf cudnn-linux-x86_64-9.1.1.17_cuda12-archive.tar.xz
export LD_LIBRARY_PATH=$PWD/cudnn-linux-x86_64-9.1.1.17_cuda12-archive/lib:$LD_LIBRARY_PATH


conda create -n faster-whisper python=3.10
source activate faster-whisper

# Install faster-whisper
pip3 install faster-whisper -i https://pypi.tuna.tsinghua.edu.cn/simple
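To verify that the CUDA 12 / cuDNN 9 libraries are found at runtime, CTranslate2 (the backend pulled in by faster-whisper) can report the visible GPUs. A quick sketch:

import ctranslate2

# A non-zero count means CTranslate2 can see the CUDA devices,
# i.e. the CUDA/cuDNN libraries installed above are resolved at runtime
print(ctranslate2.get_cuda_device_count())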

2.2. Using faster-whisper

You first need to download the turbo model from Hugging Face, then load it from the local path with local_files_only=True.
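One way to fetch it is snapshot_download from huggingface_hub. A sketch; the repo id below is an assumption (any CTranslate2 conversion of Whisper large-v3-turbo works), not something fixed by faster-whisper:

from huggingface_hub import snapshot_download

# Repo id is an assumption: substitute whichever CT2 conversion of large-v3-turbo you use
snapshot_download(repo_id="deepdml/faster-whisper-large-v3-turbo-ct2",
                  local_dir="faster-whisper-large-v3-turbo-ct2")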

from faster_whisper import WhisperModel
import time

model_start_time = time.time()
# Run on GPU with FP16
model = WhisperModel("faster-whisper-large-v3-turbo-ct2", device="cuda", device_index=0, compute_type="float16", local_files_only=True)
model_end_time = time.time()
print("load model: {:.4f}".format(model_end_time - model_start_time))

text_list = []
asr_start_time = time.time()
segments, info = model.transcribe("data-en/Anne Hathaway Forgets The Princess Diaries and The Devil Wears Prada Details.wav", beam_size=5)
segments = list(segments)
asr_end_time = time.time()
for segment in segments:
    #print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
    text_list.append(segment.text)

text = " ".join(text_list)
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
print(text)
print("use time {:.4f}".format(asr_end_time - asr_start_time))

3. Comparing openai/whisper and faster-whisper

Both use the turbo model; timings measured on an RTX 4090D.
faster-whisper runs in FP16.

Metric               whisper   faster-whisper   Improvement
Model load time      8.4s      1.7s             80%
GPU memory usage     6G        2.5G             58%
Transcription time   21.5s     23.2s            -8%
WER                  0.1782    0.1795           -0.8%
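The WER row can be reproduced with any standard word-error-rate tool; a minimal sketch using the jiwer package (an assumption, since the article does not say which tool was used):

import jiwer

# WER = (substitutions + deletions + insertions) / reference word count
reference = "this is a test transcript"
hypothesis = "this is the test transcript"
print(jiwer.wer(reference, hypothesis))  # 0.2: one substitution over five words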