This article is translated and adapted from: https://github.com/gradio-app/fastrtc
I. About FastRTC
FastRTC is a real-time communication library for Python that turns any Python function into a real-time audio/video stream over WebRTC or WebSocket.
Related links
- GitHub: https://github.com/gradio-app/fastrtc
- Website: https://fastrtc.org
- Documentation: https://fastrtc.org/userguide/
- Demo / try it online: https://huggingface.co/spaces/fastrtc/talk-to-claude
- Hugging Face: https://huggingface.co/spaces/fastrtc
- Cookbook (example code): https://fastrtc.org/cookbook/
Key features
- 🗣️ Built-in automatic voice detection and turn-taking, so you only need to worry about the logic for responding to the user
- 💻 Automatic UI - launch the built-in WebRTC-enabled Gradio UI with the `.ui.launch()` method
- 🔌 Automatic WebRTC support - mount the stream on a FastAPI app with the `.mount(app)` method to get a WebRTC endpoint for your frontend
- ⚡️ WebSocket support - mount the stream on a FastAPI app with the `.mount(app)` method to get a WebSocket endpoint for your frontend
- 📞 Automatic telephone support - launch the app and get a free temporary phone number with the stream's `fastphone()` method
- 🤖 Fully customizable backend - a `Stream` can easily be mounted on a FastAPI app to fit your production needs
II. Installation
Basic installation:
```bash
pip install fastrtc
```
To use the built-in pause detection (see ReplyOnPause) and text-to-speech (see Text To Speech), install the `vad` and `tts` extras:
pip install "fastrtc[vad, tts]"
III. Usage Examples
1. Audio echo
```python
from fastrtc import Stream, ReplyOnPause
import numpy as np

def echo(audio: tuple[int, np.ndarray]):
    # The function will be passed the audio until the user pauses
    # Implement any iterator that yields audio
    # See "LLM Voice Chat" for a more complete example
    yield audio

stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
```
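Because the handler is just a Python generator, it can be exercised directly before serving; a minimal sketch of the `(sample_rate, samples)` tuple contract (the exact shape and dtype delivered by the transport may differ):

```python
import numpy as np

# Drive the handler directly with one second of a 440 Hz sine wave.
sample_rate = 16000
t = np.linspace(0, 1, sample_rate, endpoint=False)
samples = (0.2 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)

for rate, audio in echo((sample_rate, samples)):
    print(rate, audio.shape)  # -> 16000 (16000,)
```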
2. LLM voice chat
```python
from fastrtc import (
    ReplyOnPause, AdditionalOutputs, Stream,
    audio_to_bytes, aggregate_bytes_to_16bit
)
import gradio as gr
import numpy as np
from groq import Groq
import anthropic
from elevenlabs import ElevenLabs

groq_client = Groq()
claude_client = anthropic.Anthropic()
tts_client = ElevenLabs()

# See "Talk to Claude" in Cookbook for an example of how to keep
# track of the chat history.
def response(
    audio: tuple[int, np.ndarray],
):
    prompt = groq_client.audio.transcriptions.create(
        file=("audio-file.mp3", audio_to_bytes(audio)),
        model="whisper-large-v3-turbo",
        response_format="verbose_json",
    ).text
    response = claude_client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    response_text = " ".join(
        block.text
        for block in response.content
        if getattr(block, "type", None) == "text"
    )
    iterator = tts_client.text_to_speech.convert_as_stream(
        text=response_text,
        voice_id="JBFqnCBsd6RMkjVDRZzb",
        model_id="eleven_multilingual_v2",
        output_format="pcm_24000",
    )
    for chunk in aggregate_bytes_to_16bit(iterator):
        audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
        yield (24000, audio_array)

stream = Stream(
    modality="audio",
    mode="send-receive",
    handler=ReplyOnPause(response),
)
```
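`AdditionalOutputs` is imported above but unused in this trimmed example; a hedged sketch of one way to surface the chat history in the UI (the full pattern is in the "Talk to Claude" cookbook entry; `additional_outputs` and `additional_outputs_handler` are the `Stream` parameters described in the FastRTC docs):

```python
import gradio as gr
from fastrtc import AdditionalOutputs, ReplyOnPause, Stream

chat_history = []  # [{"role": ..., "content": ...}]

def response_with_history(audio):
    # ...transcribe the audio and query the LLM exactly as in response()...
    chat_history.append({"role": "assistant", "content": "..."})
    # Push the transcript to the UI alongside the audio chunks.
    yield AdditionalOutputs(list(chat_history))
    # ...then yield (24000, audio_array) chunks as above...

stream = Stream(
    modality="audio",
    mode="send-receive",
    handler=ReplyOnPause(response_with_history),
    additional_outputs=[gr.Chatbot(type="messages")],
    # Replace the component's previous value with the newly yielded one.
    additional_outputs_handler=lambda old, new: new,
)
```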
3. Camera stream processing
```python
from fastrtc import Stream
import numpy as np

def flip_vertically(image):
    return np.flip(image, axis=0)

stream = Stream(
    handler=flip_vertically,
    modality="video",
    mode="send-receive",
)
```
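Video handlers receive one frame at a time as a numpy array and return the processed frame; a quick standalone check of `flip_vertically`, assuming an RGB `(height, width, 3)` layout:

```python
import numpy as np

# Sketch: verify the flip on a dummy 480x640 RGB frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[0, :, :] = 255               # paint the top row white
flipped = flip_vertically(frame)
assert (flipped[-1] == 255).all()  # the white row is now at the bottom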
4. Object detection
```python
from fastrtc import Stream
import gradio as gr
import cv2
from huggingface_hub import hf_hub_download
from .inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)

# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for the YOLOv10 implementation
model = YOLOv10(model_file)

def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
)
```
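With `additional_inputs`, the slider's current value is forwarded to the handler as its extra positional argument (`conf_threshold` here), so the detection threshold can be adjusted live from the generated UI.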
IV. Running the Stream
1. Run with Gradio
```python
stream.ui.launch()
```
2. Telephone access (audio only)
```python
stream.fastphone()
```
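`fastphone()` prints a temporary phone number (plus an access code) to the terminal; per the upstream README, a Hugging Face token is required to obtain it.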
3. FastAPI integration
```python
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()
stream.mount(app)

# Optional: add routes
@app.get("/")
async def _():
    return HTMLResponse(content=open("index.html").read())

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```
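`.mount(app)` registers the WebRTC and WebSocket signalling endpoints on the FastAPI app; `index.html` here stands for your own frontend page that connects to them (see the user guide linked above for client-side examples).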