Example of using OpenAI's open-source Whisper through Hugging Face (Chinese speech-to-text)

This example shows how to use the OpenAI Whisper model from Hugging Face for speech recognition, with particular attention to Chinese output. It loads the pretrained model and processor, converts an audio file into input features, generates predicted token IDs with the model, and decodes those IDs back into text. Note that the model tends to emit Traditional Chinese characters by default, so an extra conversion step is needed to turn the result into Simplified Chinese.


An example of using OpenAI's open-source multilingual speech-to-text model through Hugging Face.
So far, the multilingual large-v2 model has been found to output Chinese in Traditional characters, so a Traditional-to-Simplified conversion is needed (a short zhconv sketch follows the GitHub link below).
A fine-tuning example will be written in a follow-up post.

GitHub repository:
https://github.com/openai/whisper
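
As a quick illustration of the Traditional-to-Simplified step mentioned above, here is a minimal sketch of zhconv's convert function. The sample string is made up for illustration and is not actual model output.

from zhconv import convert

# convert a Traditional Chinese string (as Whisper tends to emit) into Simplified Chinese
traditional_text = "語音轉文字測試"   # illustrative input, not real transcription output
simplified_text = convert(traditional_text, 'zh-cn')
print(simplified_text)  # 语音转文字测试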

!pip install zhconv
!pip install openai-whisper
!pip install tqdm
!pip install ffmpeg-python
!pip install transformers
!pip install librosa

from transformers import WhisperProcessor, WhisperForConditionalGeneration

import librosa
import torch
from zhconv import convert
import warnings

warnings.filterwarnings("ignore")

audio_file = "test.wav"
# load the audio file and resample to 16 kHz, the sampling rate Whisper expects
audio, sampling_rate = librosa.load(audio_file, sr=16_000)

# # audio
# display.Audio(audio_file, autoplay=True)

# load model and processor from the Hugging Face hub
# (the processor already bundles the feature extractor and the tokenizer)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

# save a local copy so later runs do not need to download again
processor.save_pretrained("openai/model/whisper-large-v2")
model.save_pretrained("openai/model/whisper-large-v2")

# reload from the local copy
processor = WhisperProcessor.from_pretrained("openai/model/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/model/whisper-large-v2")


# load dummy dataset and read soundfiles
# ds = load_dataset("common_voice", "fr", split="test", streaming=True)
# ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
# input_speech = next(iter(ds))["audio"]["array"]
# force Chinese transcription so the model does not try to auto-detect the language
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")

# convert the waveform into input features
# (pass sampling_rate explicitly to avoid silent errors when the rate does not match)
input_features = processor(audio, sampling_rate=sampling_rate, return_tensors="pt").input_features

# generate predicted token IDs and decode them back into text
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
print('转化为简体结果:', convert(transcription[0], 'zh-cn'))
Output:

['启动开始录音']
转化为简体结果: 启动开始录音
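
If a GPU is available, inference with large-v2 is noticeably faster. A minimal sketch, continuing from the variables above and assuming CUDA with enough GPU memory (the device handling is not part of the original example):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# same pipeline as above, but with the features moved to the same device as the model
input_features = processor(audio, sampling_rate=sampling_rate, return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to(device))
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(convert(transcription[0], 'zh-cn'))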
# The long-form transcription result is shown below.
# Reference: https://huggingface.co/openai/whisper-large-v2

[Screenshot: long-form transcription result]
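
The model card linked above also describes chunked transcription for audio longer than 30 seconds using the transformers pipeline. A minimal sketch, assuming a recent transformers version, a working ffmpeg install, and a hypothetical local file long_test.wav (on older versions the language/task may need to be set via forced_decoder_ids instead of generate_kwargs):

from transformers import pipeline
from zhconv import convert

# chunked long-form transcription; chunk_length_s splits the audio into 30-second windows
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/model/whisper-large-v2",  # the locally saved copy from above
    chunk_length_s=30,
)
result = asr("long_test.wav", generate_kwargs={"language": "zh", "task": "transcribe"})
print(convert(result["text"], 'zh-cn'))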
