Deploying faster-whisper Locally


1. Create a virtual environment

conda create -n faster-whisper python=3.11 -y
conda activate faster-whisper

2. Install dependencies

pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu118
pip install faster-whisper
conda install matplotlib
pip install gradio
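
Optionally, verify that the CUDA build of PyTorch and faster-whisper installed correctly before building the UI. This is a quick sanity check, not part of the original steps; the script name is hypothetical:

# check_env.py -- optional sanity check (script name is hypothetical)
import torch
from faster_whisper import WhisperModel  # confirms the package imports cleanly

print(torch.__version__)          # expect 2.2.2+cu118
print(torch.cuda.is_available())  # expect True if the CUDA 11.8 wheel matches your driver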

3. Create the Web UI

# webui.py
import gradio as gr
from faster_whisper import WhisperModel

# Initialize the model; the full repo ID downloads the weights from Hugging Face on first run
# model_size = "large-v3"  # the short alias also works
model_size = "Systran/faster-whisper-large-v3"
model = WhisperModel(model_size, device="cuda", compute_type="float16")

def transcribe_audio(audio_file, language):
    # Map the "auto" option to None so faster-whisper auto-detects the language
    if language == "auto":
        language = None

    # Transcribe the audio
    segments, info = model.transcribe(audio_file, beam_size=5, language=language)

    # Prepare the output
    transcription = ""
    for segment in segments:
        transcription += f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}\n"

    detected_language = f"Detected language: {info.language} (probability: {info.language_probability:.2f})"

    return detected_language, transcription

# Define Gradio interface
iface = gr.Interface(
    fn=transcribe_audio,
    inputs=[
        gr.Audio(type="filepath", label="Upload Audio"),
        gr.Dropdown(["auto", "en", "zh", "ja"], label="Select Language", value="auto")
    ],
    outputs=[
        gr.Textbox(label="Detected Language"),
        gr.Textbox(label="Transcription", lines=20)
    ],
    allow_flagging='never',
    title="Audio Transcription with Faster Whisper",
    description="Upload an audio file and select the language to transcribe the audio to text. Choose 'auto' for automatic language detection."
)

# Launch the interface
iface.launch()
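
The same transcription API also works without Gradio. Below is a minimal command-line sketch using the same model settings; the script and audio file names are placeholders:

# transcribe_cli.py -- minimal sketch without the web UI (file names are placeholders)
from faster_whisper import WhisperModel

model = WhisperModel("Systran/faster-whisper-large-v3", device="cuda", compute_type="float16")

# language=None lets faster-whisper auto-detect the spoken language
segments, info = model.transcribe("sample.wav", beam_size=5, language=None)
print(f"Detected language: {info.language} (probability: {info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")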

4. Launch the Web UI

python webui.py
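
launch() binds to 127.0.0.1 by default. If you need to reach the UI from another machine, server_name and server_port are standard Gradio launch parameters you can set instead:

# In webui.py, change the last line to listen on all interfaces:
iface.launch(server_name="0.0.0.0", server_port=7860)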

5. Access the Web UI

Open http://localhost:7860 in your browser.
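
You can also call the running app programmatically. The sketch below assumes a recent gradio_client (pip install gradio_client); the audio path is a placeholder, and /predict is the default endpoint name for a gr.Interface:

# test_client.py -- hypothetical client-side test
from gradio_client import Client, handle_file

client = Client("http://localhost:7860/")
detected, transcription = client.predict(
    handle_file("sample.wav"),  # placeholder audio file
    "auto",                     # language choice from the dropdown
    api_name="/predict",
)
print(detected)
print(transcription)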