Preface
By default, vLLM downloads models from https://huggingface.co/ at startup. If the machine cannot reach the external internet, startup fails, so this article describes how to start vLLM with a model downloaded from a domestic (China-hosted) repository.
Notes
This article uses Alibaba's ModelScope model repository.
Deploying the model (without the --served-model-name parameter)
Using Qwen2.5-0.5B-Instruct, hosted on Alibaba ModelScope, as an example.
1. Download with modelscope-cli
Install the modelscope-cli tool
pip install modelscope
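Since the whole point here is running without access to overseas networks, pip itself may also need a domestic package index. A minimal sketch, assuming the Tsinghua PyPI mirror is reachable from your machine:

```bash
# Assumption: the Tsinghua PyPI mirror is reachable from this environment
pip install modelscope -i https://pypi.tuna.tsinghua.edu.cn/simple
```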
Create the model storage directory (the directory listing and server log later in this article use /model/, so that path is used consistently here)
mkdir -p /model/Qwen2.5-0.5B-Instruct
Set an environment variable pointing at the model directory
export model_dir=/model/Qwen2.5-0.5B-Instruct
Download the model into the local directory
modelscope download --model="Qwen/Qwen2.5-0.5B-Instruct" --local_dir $model_dir
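As an alternative to the CLI, ModelScope model pages also offer a git endpoint, with the weights delivered via Git LFS. The URL pattern below is an assumption; verify it on the model's page before relying on it:

```bash
# Assumption: the model is cloneable at this ModelScope git URL (check the model page)
git lfs install
git clone https://www.modelscope.cn/Qwen/Qwen2.5-0.5B-Instruct.git "$model_dir"
```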
Directory structure after the download completes
ls -l /model/Qwen2.5-0.5B-Instruct/
total 976196
-rw-r--r-- 1 root root 11343 Mar 20 07:39 LICENSE
-rw-r--r-- 1 root root 4917 Mar 20 07:39 README.md
-rw-r--r-- 1 root root 659 Mar 20 07:39 config.json
-rw-r--r-- 1 root root 2 Mar 20 07:39 configuration.json
-rw-r--r-- 1 root root 242 Mar 20 07:39 generation_config.json
-rw-r--r-- 1 root root 1671839 Mar 20 07:39 merges.txt
-rw-r--r-- 1 root root 988097824 Mar 20 07:39 model.safetensors
-rw-r--r-- 1 root root 7031645 Mar 20 07:39 tokenizer.json
-rw-r--r-- 1 root root 7305 Mar 20 07:39 tokenizer_config.json
-rw-r--r-- 1 root root 2776833 Mar 20 07:39 vocab.json
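Before starting the server, it can be worth confirming that the key files above actually landed. A small sanity check, assuming these are the files vLLM needs to load this model:

```bash
# Sanity check (assumption: these files are required to load the model)
for f in config.json generation_config.json tokenizer.json model.safetensors; do
  [ -f "$model_dir/$f" ] || echo "missing: $f"
done
```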
2. Load the local model with vLLM
vllm serve $model_dir \
--trust-remote-code \
--max_num_batched_tokens 1024 \
--max_model_len 1024 \
--dtype=half \
--gpu_memory_utilization=0.95 \
--port 8080
Explanation of the vllm startup parameters
| Parameter | Description |
|---|---|
| `modelscope://qwen/Qwen-7B` | Pull the Qwen-7B model directly from Alibaba ModelScope (not supported) |
| `/models/Qwen-7B` | Load the model from a local directory such as `/models/Qwen-7B` (downloaded in advance) |
| `--trust-remote-code` | Allow loading custom code shipped with the model (required by Qwen) |
| `--enable-chunked-prefill` | Optimize large-batch processing and reduce GPU memory usage |
| `--max_num_batched_tokens 1024` | Cap the number of tokens processed in one batch |
| `--max_model_len 1024` | Maximum model context length |
| `--dtype=half` | Run inference in FP16 to reduce GPU memory usage |
| `--gpu_memory_utilization=0.95` | Let vLLM use up to 95% of GPU memory |
| `--port 8080` | Port the vLLM API server listens on |
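As the section title notes, --served-model-name is deliberately omitted, which means clients must pass the full local path as the model name in API requests. If you would rather expose a short alias, a sketch (the alias itself is an arbitrary choice; the startup output below is from the original command, without this option):

```bash
# Optional variant: serve the model under a short alias instead of the filesystem path
# (the alias "qwen2.5-0.5b-instruct" is an arbitrary name, not mandated by vLLM)
vllm serve $model_dir \
  --trust-remote-code \
  --dtype=half \
  --served-model-name qwen2.5-0.5b-instruct \
  --port 8080
```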
Startup output
INFO 03-20 07:42:12 __init__.py:190] Automatically detected platform cuda.
INFO 03-20 07:42:13 api_server.py:840] vLLM API server version 0.7.2
INFO 03-20 07:42:13 api_server.py:841] args: Namespace(subparser='serve', model_tag='/model/Qwen2.5-0.5B-Instruct', config='', host=None, port=8080, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='/model/Qwen2.5-0.5B-Instruct', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='half', kv_cache_dtype='auto', max_model_len=1024, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.95, num_gpu_blocks_override=None, max_num_batched_tokens=1024, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=
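Once the server is up, the OpenAI-compatible API can be exercised with curl. Because --served-model-name was not set, the model name exposed by the server is the local path itself. A minimal check against port 8080 as configured above:

```bash
# List served models; the returned id should be /model/Qwen2.5-0.5B-Instruct
curl http://localhost:8080/v1/models

# Chat completion request; "model" must match the path passed to vllm serve
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/model/Qwen2.5-0.5B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64
      }'
```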