Getting QAnything running on an A100
Command line
Ask a question and these three models all return the same answer:
bash ./run.sh -c local -i 0,1 -b default -m Qwen-7B-QAnything
bash ./run.sh -c local -i 0,1 -b default -m Qwen-7B-Chat
bash ./run.sh -c local -i 0,1 -b default -m chatglm3-6b
# Succeeded. These kept failing at first because the GPU did not have enough memory; about 24 GB of VRAM is required.
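Before launching, it helps to confirm that a GPU actually has roughly 24 GB free and pass that index to `-i`. A minimal sketch; the `pick_gpu` helper and the 24576 MiB threshold are my own illustration, not part of run.sh:

```shell
# pick_gpu NEED_MIB MEM...: print the index of the first GPU whose free
# memory (in MiB) meets the requirement; return non-zero if none qualifies.
pick_gpu() {
  local need=$1; shift
  local i=0 mem
  for mem in "$@"; do
    if [ "$mem" -ge "$need" ]; then
      echo "$i"
      return 0
    fi
    i=$((i + 1))
  done
  return 1
}

# In practice, feed it the live numbers from nvidia-smi, e.g.:
#   pick_gpu 24576 $(nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits)
# Here with made-up values: GPU 0 has 16000 MiB free, GPU 1 has 40960 MiB.
pick_gpu 24576 16000 40960    # prints 1
```

The printed index can then be used as the `-i` argument of run.sh.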
bash ./run.sh -c local -i 1 -b vllm -m Qwen-7B-Chat -t qwen-7b-chat
# Succeeded
bash ./run.sh -c local -i 1 -b vllm -m Qwen-7B-QAnything -t qwen-7b-qanything
# Failed
bash ./run.sh -c local -i 1 -b vllm -m chatglm3-6b -t chatglm3-6b
# Failed: curl: (7) Failed to connect to localhost port 7800 after 0 ms: Connection refused
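A connection-refused error on port 7800 often just means the service has not finished starting (or already crashed). A small retry loop, sketched with bash's built-in /dev/tcp redirection; the host, port, and timeout values are illustrative:

```shell
# wait_for_port HOST PORT [TRIES]: poll once per second until the port
# accepts TCP connections; give up after TRIES attempts (default 30).
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30} i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<> "/dev/tcp/$host/$port") 2>/dev/null; then
      return 0   # port is accepting connections
    fi
    sleep 1
  done
  return 1       # gave up; check the service/container logs
}

# e.g. wait up to 60 s for the API to come up before curling it:
#   wait_for_port localhost 7800 60 && curl http://localhost:7800/
```

If the port never opens, the backend process most likely died during model loading, so the logs are the next place to look.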
bash ./run.sh -c local -i 1 -b hf -m chatglm3-6b -t chatglm3-6b
# Succeeded
bash ./run.sh -c local -i 1 -b hf -m Qwen-7B-Chat -t qwen-7b-chat
# Succeeded
bash ./run.sh -c local -i 1 -b vllm -m chatglm2-6b -t chatglm2
# Succeeded
bash ./run.sh -c local -i 1 -b vllm -m chatglm3-6b -t chatglm3
# Failed
bash ./run.sh -c local -i 1 -b vllm -m BlueLM-7B-Chat -t bluelm-7b-chat
Changed --max-model-len from 4096 to 2048 in projs/QAnything-1.2.0/scripts/run_for_local_option.sh, but it still errors out:
2024-02-02 16:27:41 | ERROR | stderr | ValueError: Model architectures ['BlueLMForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'DeciLMForCausalLM', 'FalconForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'LlamaForCausalLM', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'OPTForCausalLM', 'PhiForCausalLM', 'QWenLMHeadModel', 'RWForCausalLM', 'YiForCausalLM']
Not resolved yet! Note that the error is about the model architecture, not the context length: this vLLM build simply does not support BlueLMForCausalLM, so shrinking --max-model-len cannot help. The -b hf backend may be worth trying instead.
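To check up front whether a downloaded model can go through the vllm backend, one can read the "architectures" field from its config.json and compare it against the supported list printed in the error above. A rough sketch; the model path below is a placeholder for a local model directory:

```shell
# model_arch MODEL_DIR: print the architecture name(s) listed in the
# model's HuggingFace config.json (handles the usual multi-line layout).
model_arch() {
  tr -d '\n ' < "$1/config.json" \
    | grep -o '"architectures":\[[^]]*\]' \
    | grep -o '"[A-Za-z_0-9]*"' \
    | tail -n +2 \
    | tr -d '"'
}

# Usage (placeholder path):
#   arch=$(model_arch /path/to/BlueLM-7B-Chat)
# then check whether $arch appears in the supported-architectures list
# that vLLM prints in its ValueError.
```

For BlueLM-7B-Chat this yields BlueLMForCausalLM, which indeed is absent from the list above.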