The uvicorn `--reload-dir` parameter

This article shows how, when serving a FastAPI application with uvicorn, to auto-restart the server only when code in `a.file` under directory `A` changes, while edits to files under `B` or `C` do not trigger a reload — a small tweak that speeds up development.
# Directory structure

```
.
├── A
│   └── a.file
├── B
│   └── b.file
├── C
│   └── c.file
└── start.py
```

Premise: serving with the FastAPI framework uses uvicorn.

Requirement: reload only when code inside `a.file` changes; edits to `b.file` or `c.file` must not trigger a reload.

# Solution

```shell
uvicorn start:app --reload --reload-dir A
```

See the official uvicorn documentation for details.
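The `--reload-dir` flag can be repeated to watch several directories, and — when uvicorn's watchfiles-based reloader is installed — `--reload-include` / `--reload-exclude` give finer-grained control. A few variants for the layout above, as a sketch (the module path `start:app` comes from this article; the glob pattern is illustrative):

```shell
# Watch only A/ (as above)
uvicorn start:app --reload --reload-dir A

# Watch both A/ and B/ by repeating the flag
uvicorn start:app --reload --reload-dir A --reload-dir B

# Watch A/ but ignore edits matching a pattern
# (requires the watchfiles reloader: pip install "uvicorn[standard]")
uvicorn start:app --reload --reload-dir A --reload-exclude "*.tmp"
```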

Given Code 1 below, I want the models (`model`, `vad_model`, `punc_model`, etc.) to be stored under "/data/funasr_pyannote/model" — how should Code 1 be modified? Code 2 is the official FunASR deployment script, whose models are saved under `model` in the current working directory; Code 3 shows how that script is started.

Code 1: backend.py (/data/funasr_pyannote)

```python
# web_demo/backend.py
import uvicorn, io, json
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from funasr import AutoModel
from pyannote.audio import Pipeline
import torch, numpy as np, os

# -------------- Model initialization --------------
# asr_model = AutoModel(model="paraformer-zh", vad_model="fsmn-vad")
asr_model = AutoModel(
    # model="iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    # vad_model="fsmn-vad",
    # punc_model="ct-punc",                               # CT-Transformer punctuation model
    # itn_model="thuduj12/fst_itn_zh",                    # added: FST Chinese inverse text normalization model
    # lm_model="damo/speech_ngram_lm_zh-cn-ai-wesp-fst",  # added: language model
    # lm_weight=0.1,                                      # added: LM weight (0.1–0.5 works well)
    # device="cuda" if torch.cuda.is_available() else "cpu"
    model="iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    vad_model="damo/speech_fsmn_vad_zh-cn-16k-common-onnx",  # voice activity detection model
    punc_model="ct-punc",                                    # CT-Transformer punctuation model
    itn_model="thuduj12/fst_itn_zh",                         # added: FST Chinese inverse text normalization model
    lm_model="damo/speech_ngram_lm_zh-cn-ai-wesp-fst",       # added: language model
    lm_weight=0.1,                                           # added: LM weight (0.1–0.5 works well)
    device="cuda" if torch.cuda.is_available() else "cpu"
)
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token=os.getenv("HF_TOKEN", "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")
)
print("Models loaded, waiting for frontend connections…")

app = FastAPI()

@app.websocket("/ws")
async def websocket_endpoint(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            # The frontend sends 1 s of audio (16000*2 bytes) per message
            pcm_bytes = await ws.receive_bytes()
            audio_np = np.frombuffer(pcm_bytes, dtype=np.int16).astype(np.float32) / 32768.0
            # ASR
            asr_result = asr_model.generate(input=audio_np, batch_size_s=300)
            text = asr_result[0]["text"] if asr_result and asr_result[0] else ""
            # Speaker diarization
            speaker = "Unknown"
            try:
                diarization = pipeline({"waveform": torch.from_numpy(audio_np).unsqueeze(0),
                                        "sample_rate": 16000})
                if diarization:
                    speaker = next(diarization.itertracks(yield_label=True))[2]  # SPEAKER_00, SPEAKER_01...
            except Exception as e:
                print("diarization error:", e)
            await ws.send_text(json.dumps({"speaker": speaker, "text": text}, ensure_ascii=False))
    except WebSocketDisconnect:
        print("Client disconnected")

if __name__ == "__main__":
    uvicorn.run("backend:app", host="0.0.0.0", port=8000, reload=False)
```

Code 2: official FunASR deployment script (run_server_2pass.sh)

```shell
download_model_dir="/workspace/models"
model_dir="damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx"
online_model_dir="damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online-onnx"
vad_dir="damo/speech_fsmn_vad_zh-cn-16k-common-onnx"
punc_dir="damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727-onnx"
itn_dir="thuduj12/fst_itn_zh"
lm_dir="damo/speech_ngram_lm_zh-cn-ai-wesp-fst"
port=10095
certfile=0
keyfile="$(pwd)/ssl_key/server.key"
hotword="$(pwd)/websocket/hotwords.txt"

# set decoder_thread_num
decoder_thread_num=$(cat /proc/cpuinfo | grep "processor" | wc -l) || { echo "Get cpuinfo failed. Set decoder_thread_num = 32"; decoder_thread_num=32; }
multiple_io=16
io_thread_num=$(( (decoder_thread_num + multiple_io - 1) / multiple_io ))
model_thread_num=1
cmd_path=/workspace/FunASR/runtime/websocket/build/bin
cmd=funasr-wss-server-2pass

. ./tools/utils/parse_options.sh || exit 1;

if [ -z "$certfile" ] || [ "$certfile" = "0" ]; then
  certfile=""
  keyfile=""
fi

cd $cmd_path
$cmd_path/${cmd} \
  --download-model-dir "${download_model_dir}" \
  --model-dir "${model_dir}" \
  --online-model-dir "${online_model_dir}" \
  --vad-dir "${vad_dir}" \
  --punc-dir "${punc_dir}" \
  --itn-dir "${itn_dir}" \
  --lm-dir "${lm_dir}" \
  --decoder-thread-num ${decoder_thread_num} \
  --model-thread-num ${model_thread_num} \
  --io-thread-num ${io_thread_num} \
  --port ${port} \
  --certfile "${certfile}" \
  --keyfile "${keyfile}" \
  --hotword "${hotword}" &
```

Code 3: starting the runtime

### 02-Starting the image
> Pull and start the FunASR runtime-SDK docker image with the commands below

```shell
# (1) Pull the image (if the network prevents the pull, use the offline package method described at the end)
sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.13
# (2) Choose where to create the directory
cd /data/funasr
# (3) Create it
mkdir -p ./funasr-runtime-resources/models
# (4) Start the FunASR docker image
sudo docker run -p 10096:10095 -it --privileged=true \
  -v $PWD/funasr-runtime-resources/models:/workspace/models \
  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.13
```

### 03-Starting the server

```shell
# (1) First go to the "FunASR/runtime" directory
cd FunASR/runtime
# (2) Edit run_server_2pass.sh and set certfile to 0 (do not skip this, or the connection will fail)
vi run_server_2pass.sh
# (3) Run it (the result is shown in Figure 3-3)
bash run_server_2pass.sh
```

Code 3 also includes the run_server_2pass.sh file itself, which is identical to Code 2 above.
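As for the question itself: FunASR's `AutoModel` downloads its models through ModelScope, whose download cache location can be redirected with the `MODELSCOPE_CACHE` environment variable. A minimal sketch, assuming the models in Code 1 are fetched from ModelScope's default hub; the variable must be set before funasr/modelscope is imported or the models are initialized:

```python
import os

# Redirect the ModelScope download cache so that AutoModel stores its
# models under /data/funasr_pyannote/model instead of the default
# ~/.cache/modelscope location. This must happen before the first
# AutoModel(...) call in backend.py.
os.environ["MODELSCOPE_CACHE"] = "/data/funasr_pyannote/model"

print(os.environ["MODELSCOPE_CACHE"])
```

Alternatively, the variable can be exported in the shell (`export MODELSCOPE_CACHE=/data/funasr_pyannote/model`) before launching the service, which avoids touching the code at all. Code 2's C++ runtime takes a different route: its `--download-model-dir` flag controls the storage path directly.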