Docker + Ollama is currently one of the most common combinations for local AI deployment, but recent versions of the two now conflict with each other.
Symptom: running `ollama run <model>` fails with
Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
Meanwhile, the service on port 11434 still responds with "Ollama is running".
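A quick way to confirm this half-broken state is to hit the HTTP endpoint directly. The sketch below assumes Ollama's default address of localhost:11434; it should still print the banner even while `ollama run` fails:

```python
# Minimal sketch: verify the Ollama server is still reachable even though
# `ollama run` crashes. Assumes the default endpoint http://localhost:11434.
import urllib.request

with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
    # Expected body: "Ollama is running"
    print(resp.read().decode())
```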
Cause: upgrading Docker Desktop to 4.41.0.
The latest Docker Desktop (4.41.0, released 2025-04-28) bundles Docker Model Runner (along with llama.cpp/GGML DLLs such as ggml-base.dll), which conflicts with Ollama. From the release notes:
- Docker Model Runner is now available on x86 Windows machines with NVIDIA GPUs.
- You can now push models to Docker Hub with Docker Model Runner.
- Added support for Docker Model Runner's model management and chat interface in Docker Desktop for Mac and Windows (on hardware supporting Docker Model Runner). Users can now view, interact with, and manage local AI models through a new dedicated interface.
- …
Solution:
Rename the ggml-base.dll file in the C:\Program Files\Docker\Docker\resources\bin folder, for example by adding a .old suffix. This resolves the problem.
For some time to come, this step may need to be repeated after each Docker Desktop upgrade to keep resolving the conflict with Ollama.
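Since the rename may have to be redone after every upgrade, a small script can save a step. The sketch below hard-codes the default install path quoted above (adjust it if yours differs) and must be run from an elevated/administrator prompt, because the folder lives under C:\Program Files:

```python
# Minimal sketch: append ".old" to Docker Desktop's ggml-base.dll so it stops
# clashing with Ollama. Assumes the default install path; run as administrator.
from pathlib import Path

dll = Path(r"C:\Program Files\Docker\Docker\resources\bin\ggml-base.dll")

if dll.exists():
    dll.rename(dll.with_name(dll.name + ".old"))  # -> ggml-base.dll.old
    print(f"Renamed to {dll.name}.old")
else:
    print("ggml-base.dll not found (already renamed, or a different install path)")
```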