Step 1: Create the Ollama cache directory
```
# Create the cache directory on the host (make sure its partition has enough free space)
mkdir -p /data/ollama-cache
```
*Note: if the disk holding Docker's data directory has plenty of space, this step can be skipped.*
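If you are unsure whether the target partition has room for the model blobs, a quick check before creating the directory (the `/data` path is the one used in this guide; adjust to your layout):

```
# Show free space on the filesystem that will hold the Ollama cache;
# fall back to the root filesystem if /data does not exist yet
df -h /data 2>/dev/null || df -h /
```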
Step 2: Pull the leopony/ollama:latest image (adapted for NPU)
```
# Pull the image manually
docker pull leopony/ollama:latest
```
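Before starting the container in the next step, it may be worth confirming that the Ascend NPU device nodes mapped by `--device` actually exist on the host (a minimal sketch; on a machine without the Ascend driver it simply reports that nothing was found):

```
# List the device nodes the docker run command below expects to map
ls /dev/davinci* /dev/davinci_manager /dev/devmm_svm /dev/hisi_hdc 2>/dev/null \
  || echo "NPU device nodes not found"
```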
Step 3: Start the container
```
docker run \
--name ollama \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /data/LLaMA-Factory/output/Qwen2.5-14B-Instruct/lora/megred-model:/models/qwen-14b-finetuned \
-v /data/ollama-cache:/root/.ollama \
-e TMPDIR=/models/qwen-14b-finetuned/tmp \
-p 11434:11434 \
-it leopony/ollama:latest /bin/bash
```
Step 4: Enter the container, create the temp directory, and set permissions
```
# Enter the container
docker exec -it ollama bash
# Create the temp directory (matches the TMPDIR set when the container was started)
mkdir -p /models/qwen-14b-finetuned/tmp
# Set permissions (optional)
chmod 777 /models/qwen-14b-finetuned/tmp
```
Step 5: Load the model and run it
```
# Start the Ollama service in the background (the load step below requires it to be running)
ollama serve > /tmp/ollama.log 2>&1 &
# Build the model from its Modelfile
ollama create qwen-14b-finetuned -f /models/qwen-14b-finetuned/Modelfile
ollama run qwen-14b-finetuned
```
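The `ollama create` command above assumes a Modelfile already exists at `/models/qwen-14b-finetuned/Modelfile`. A minimal sketch of what it might contain (the parameter values are illustrative, not from this guide):

```
# Point Ollama at the merged model weights mounted into the container
FROM /models/qwen-14b-finetuned
# Optional sampling parameter (illustrative value)
PARAMETER temperature 0.7
```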
Step 6: Test
```
curl http://localhost:11434/api/generate -d '{
  "model": "qwen-14b-finetuned",
  "prompt": "你好"
}'
```
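The `/api/generate` endpoint streams its answer as one JSON object per line. A quick way to pull out just the generated text from a captured response line (the sample line and the `sed` extraction are a sketch, not part of this guide):

```
# One line of a streamed /api/generate response (sample data)
line='{"model":"qwen-14b-finetuned","response":"Hello","done":false}'
# Extract the "response" field with sed
echo "$line" | sed -n 's/.*"response":"\([^"]*\)".*/\1/p'
# prints: Hello
```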
This completes the setup.