Quick Start: Installing and Deploying Ollama with AMD ROCm Inference Acceleration

Preface

Ollama is a framework focused on running large language models (LLMs) locally. It lets users deploy and use LLMs on their own machines without relying on expensive GPU resources, and it provides a set of tools and services that simplify installing, configuring, and using these models, so that more people can experience what modern AI can do.

I. References

Ollama website: https://ollama.com/

Ollama repository: https://github.com/ollama/ollama

Ollama documentation (Chinese): https://ollama.qianniu.city/index.html

Ollama Chinese community: https://ollama.fan/getting-started/linux/

Llama 3 fine-tuning tutorial: installing and deploying LLaMA Factory, the fine-tuning workflow, model quantization, and GGUF conversion.

II. Building ollama from Source

docs/development.md

Integrated AMD GPU support #2637

This guide uses ollama v0.3.6 as the example.

1. Install CMake

The ollama project requires CMake 3.21 or later; installing the latest release is recommended.

For CMake installation steps, see my other post: CMake Installation Guide (Ubuntu).

CMake 3.21 or higher is required.

2. Install CLBlast

CLBlast: Building and installing

3. Install Go

Download the Go release archive (see the official "Download and install" page); the latest version is recommended.

# Extract the archive
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz

# Set the environment variable
export PATH=$PATH:/usr/local/go/bin

# Check the Go version
go version
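The export above only lasts for the current shell session. A minimal sketch for making it persistent, assuming a bash shell that sources ~/.bashrc:

# Persist the Go PATH change for future shells
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc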

4. Set Environment Variables

export HSA_OVERRIDE_GFX_VERSION=9.2.8
export HCC_AMDGPU_TARGETS=gfx928

export ROCM_PATH="/opt/dtk"

export CLBlast_DIR="/usr/lib/x86_64-linux-gnu/cmake/CLBlast"
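These values reflect the author's environment (ROCm-compatible libraries under /opt/dtk, targeting gfx928); adjust ROCM_PATH, HSA_OVERRIDE_GFX_VERSION, and the GPU targets to your own installation. A quick sanity check before building (the checks below are an assumption based on the exports above):

# Confirm the toolkit and the CLBlast CMake config are where the variables point
ls "$ROCM_PATH/bin" | head -n 5
ls "$CLBlast_DIR"    # should contain CLBlastConfig.cmake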

5. Speed Up Go Downloads

Reference: fixing slow or failing go get downloads of GitHub projects in Go.

Before building, it is recommended to configure a Go module proxy so that dependency downloads do not fail during the build.

go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
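To confirm that the settings took effect:

go env GO111MODULE GOPROXY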

6. Build

# Clone the source code
git clone https://github.com/ollama/ollama.git
cd ollama

export CGO_CFLAGS="-g"
export AMDGPU_TARGETS="gfx906;gfx926;gfx928"

go generate ./...
go build .

After a successful build, the ollama binary is generated in the current directory.
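A quick way to verify the result, assuming the build finished in the repository root:

# Check the freshly built binary
./ollama --version

# Start the server and watch the startup log for ROCm/GPU detection
HSA_OVERRIDE_GFX_VERSION="9.2.8" ./ollama serve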

III. Install Ollama

1. NVIDIA GPU Support

Ollama GPU support information: nvidia

2. AMD GPU Support

Ollama GPU support information: amd-radeon

ROCm and PyTorch on AMD APU or GPU (AI)

Integrated AMD GPU support #2637

Windows Rocm: HSA_OVERRIDE_GFX_VERSION doesn´t work #3107

Add support for amd Radeon 780M gfx1103 - override works #3189

Ollama uses the AMD ROCm libraries, but it does not support every AMD GPU. In some cases you can force the system to try a nearby LLVM target. For example, the Radeon RX 5400 is gfx1034 (also written 10.3.4), which ROCm does not currently support; the closest supported target is gfx1030. You can set the environment variable HSA_OVERRIDE_GFX_VERSION="10.3.0" to try running on an otherwise unsupported AMD GPU.

# Override the GFX version
export HSA_OVERRIDE_GFX_VERSION="9.2.8"
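To see which gfx target your GPU actually reports before choosing an override value, the ROCm tools can help (assuming rocminfo is installed):

# List the gfx targets known to the ROCm runtime
rocminfo | grep -i "gfx"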

3. Ollama VRAM Optimization

Ollama VRAM optimization

Loading a large model in Ollama with limited VRAM (3 GB GeForce GTX 970M)

I miss option to specify num of gpu layers as model parameter #1855
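One practical lever on small-VRAM machines is the num_gpu parameter, which controls how many layers are offloaded to the GPU. A minimal sketch (the file name, tag, and layer count below are illustrative, not from the original post):

# Create a model variant that offloads only part of the layers to the GPU
cat > llama31_lowvram.modelfile <<'EOF'
FROM llama3.1:8b
PARAMETER num_gpu 16
EOF

ollama create llama3.1:8b-lowvram -f llama31_lowvram.modelfile
ollama run llama3.1:8b-lowvram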

4. Install Ollama (Windows)

Local deployment of Ollama

Deploying large models locally with Ollama

5. Install Ollama (Linux)

Ollama installation guide: fixing slow downloads and stalled installs in mainland China

5.1 Download the Install Script

# Download the install script (saved as ollama_install.sh to match the commands below)
curl -fsSL https://ollama.com/install.sh -o ollama_install.sh

# Make the script executable
chmod +x ollama_install.sh

5.2 Modify the Install Script

GitHub mirror acceleration

File download acceleration

This example uses https://github.moeyy.xyz/ as the download accelerator. In ollama_install.sh, change

https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}
https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}

to

https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64
https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64-rocm.tgz
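A sketch of applying the change with sed, assuming the two URLs appear in the v0.3.6 script exactly as shown above:

# Point the two download URLs at the mirrored GitHub release assets
sed -i 's|https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}|https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64|' ollama_install.sh
sed -i 's|https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}|https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64-rocm.tgz|' ollama_install.sh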

5.3 Run the Installer

As the log below shows, no NVIDIA/AMD GPU was detected, so the installer falls back to CPU-only mode.

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ./ollama_install.sh
>>> Downloading ollama...
###################################################################################################### 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.

5.4 Uninstall Ollama

uninstall

Remove the ollama service:

sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service

Remove the ollama binary from its bin directory (it may live at /usr/local/bin/ollama, /usr/bin/ollama, or /usr/share/ollama/bin/ollama):

sudo rm $(which ollama)
sudo rm -r /usr/share/ollama
sudo rm -r /usr/local/bin/ollama

Remove the ollama user and group:

sudo userdel ollama
sudo groupdel ollama
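If models were pulled while running ollama directly as the current user (as in the logs later in this post, where OLLAMA_MODELS is /root/.ollama/models), the steps above do not remove them; deleting the user-level data directory takes care of that:

# Remove models and manifests stored under the current user's home
rm -r ~/.ollama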

IV. Working with Ollama

1. ollama Commands

root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama -h
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

2. Other Commands

# Check the ollama version
ollama -v

# Set the log level to debug
export OLLAMA_DEBUG=1

# Remove a model
ollama rm llama3.1:8bgpu

3. Configuration File

The Ollama service's systemd unit file is /etc/systemd/system/ollama.service:

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/conda/envs/ollama/bin:/opt/conda/condabin:/usr/local/bin:/opt/mpi/bin:/opt/hwloc/bin:/opt/cmake/bin:/opt/conda/bin:/opt/conda/bin:/opt/dtk/bin:/opt/dtk/llvm/bin:/opt/dtk/hip/bin:/opt/dtk/hip/bin/hipify:/opt/hyhal/bin:/opt/mpi/bin:/opt/hwloc/bin:/opt/cmake/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/conda/bin"

[Install]
WantedBy=default.target

After editing the unit file, reload systemd and restart the service:

# Reload unit files
sudo systemctl daemon-reload
# Restart the service
sudo systemctl restart ollama
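Environment variables for the service (for example the ROCm GFX override or a custom model directory) go in the [Service] section. A sketch using a systemd drop-in; the values are illustrative:

# Add environment variables to the ollama service via a drop-in
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf <<'EOF'
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=9.2.8"
Environment="OLLAMA_MODELS=/data/ollama/models"
EOF

sudo systemctl daemon-reload
sudo systemctl restart ollama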

4. Environment Variables

After starting the service, the effective environment variables can be seen in the startup log.

CUDA_VISIBLE_DEVICES: 
GPU_DEVICE_ORDINAL: 
HIP_VISIBLE_DEVICES: 
HSA_OVERRIDE_GFX_VERSION: 
OLLAMA_DEBUG:false 
OLLAMA_FLASH_ATTENTION:false 
OLLAMA_HOST:http://127.0.0.1:11434 
OLLAMA_INTEL_GPU:false 
OLLAMA_KEEP_ALIVE:5m0s 
OLLAMA_LLM_LIBRARY: 
OLLAMA_MAX_LOADED_MODELS:0 
OLLAMA_MAX_QUEUE:512 
OLLAMA_MODELS:/root/.ollama/models    # model download directory
OLLAMA_NOHISTORY:false 
OLLAMA_NOPRUNE:false 
OLLAMA_NUM_PARALLEL:0    # number of concurrent requests handled by ollama
OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] 
OLLAMA_RUNNERS_DIR: 
OLLAMA_SCHED_SPREAD:false 
OLLAMA_TMPDIR: 
ROCR_VISIBLE_DEVICES:

For example, to force a specific runner library at startup:

OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve

Or to combine the ROCm GFX override with an explicit runner preference:

HSA_OVERRIDE_GFX_VERSION="9.2.8" OLLAMA_LLM_LIBRARY="rocm_v60102 cpu" ollama serve
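Another commonly adjusted variable is OLLAMA_HOST, which controls the listen address. A sketch for exposing the API on all interfaces (check your firewall first):

# Listen on all interfaces instead of only 127.0.0.1
OLLAMA_HOST=0.0.0.0:11434 ollama serve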

5. Common Ollama Operations

5.1 Start/Stop the ollama Service

# Start the ollama service
ollama serve
# or
service ollama start

# Stop the ollama service
service ollama stop

Example output of ollama serve:
root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama serve
2024/08/14 07:23:59 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T07:23:59.899Z level=INFO source=images.go:782 msg="total blobs: 8"
time=2024-08-14T07:23:59.899Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T07:23:59.900Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-14T07:23:59.901Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2193733393/runners
time=2024-08-14T07:24:04.504Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60102 cpu cpu_avx]"
time=2024-08-14T07:24:04.504Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T07:24:04.522Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
time=2024-08-14T07:24:04.522Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="1007.4 GiB" available="930.1 GiB"

5.2 Run a Model with ollama run

Ollama model library: https://ollama.com/library

Take llama3.1:8b as the example. By default inference runs on the CPU, which is slow and keeps CPU usage high.

ollama run llama3.1:8b
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama run llama3.1:8b
pulling manifest
pulling 8eeb52dfb3bb... 100% ▕█████████████████████████████████████████████▏ 4.7 GB
pulling 11ce4ee3e170... 100% ▕█████████████████████████████████████████████▏ 1.7 KB
pulling 0ba8f0e314b4... 100% ▕█████████████████████████████████████████████▏  12 KB
pulling 56bb8bd477a5... 100% ▕█████████████████████████████████████████████▏   96 B
pulling 1a4c3c319823... 100% ▕█████████████████████████████████████████████▏  485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> hi
How's it going? Is there something I can help you with or would you like to chat?

>>> What are some tourist attractions in Shenzhen, China?
Shenzhen is a beautiful city with many attractions worth visiting. Here are a few:

1. **Window of the World**: a theme park with replicas of famous landmarks and scenery from around the world.
2. **Splendid China Folk Village**: a park that recreates much of China in miniature and showcases its different cultures and traditions.
3. **Shenzhen Bay Park**: a beautiful seaside park with attractive sea views and leisure activities.
4. **Dafen Oil Painting Village**: a village famous for oil painting, one of the largest oil-painting villages in the world.
5. **Xili Lake**: a picturesque lake park offering boating and other water activities.

Server-side output:

root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama serve
2024/08/14 05:21:24 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T05:21:24.338Z level=INFO source=images.go:782 msg="total blobs: 0"
time=2024-08-14T05:21:24.338Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T05:21:24.338Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-14T05:21:24.340Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama285789237/runners
time=2024-08-14T05:21:28.932Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-14T05:21:28.932Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T05:21:28.950Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
time=2024-08-14T05:21:28.950Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="1007.4 GiB" available="915.4 GiB"
[GIN] 2024/08/14 - 05:22:18 | 200 |     151.661µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/14 - 05:22:18 | 404 |     496.922µs |       127.0.0.1 | POST     "/api/show"
time=2024-08-14T05:22:21.081Z level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 47 100 MB part(s)"
time=2024-08-14T05:23:26.432Z level=INFO source=download.go:175 msg="downloading 11ce4ee3e170 in 1 1.7 KB part(s)"
time=2024-08-14T05:23:28.748Z level=INFO source=download.go:175 msg="downloading 0ba8f0e314b4 in 1 12 KB part(s)"
time=2024-08-14T05:23:31.099Z level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)"
time=2024-08-14T05:23:33.413Z level=INFO source=download.go:175 msg="downloading 1a4c3c319823 in 1 485 B part(s)"
[GIN] 2024/08/14 - 05:23:39 | 200 |         1m21s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/08/14 - 05:23:39 | 200 |   28.682957ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-14T05:23:39.225Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[915.5 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-14T05:23:39.242Z level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama285789237/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --numa numactl --parallel 4 --port 41641"
time=2024-08-14T05:23:39.243Z level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-14T05:23:39.243Z level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-14T05:23:39.243Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
INFO [main] build info | build=1 commit="1e6f655" tid="140046723713984" timestamp=1723613019
INFO [main] system info | n_threads=128 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140046723713984" timestamp=1723613019 total_threads=255
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="254" port="41641" tid="140046723713984" timestamp=1723613019
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-08-14T05:23:39.495Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors:        CPU buffer size =  4437.81 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
time=2024-08-14T05:23:53.347Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server not responding"
time=2024-08-14T05:23:53.599Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="140046723713984" timestamp=1723613033
time=2024-08-14T05:23:53.850Z level=INFO source=server.go:632 msg="llama runner started in 14.61 seconds"
[GIN] 2024/08/14 - 05:23:53 | 200 | 14.674444008s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/08/14 - 05:31:26 | 200 |         6m50s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/08/14 - 06:19:33 | 200 |        46m12s |       127.0.0.1 | POST     "/api/chat"

Resource usage while the model is running:

(screenshot omitted)

5.3 Calling the API from Python

import requests
import json
import os


os.environ["http_proxy"] = "http://127.0.0.1:11434"
os.environ["https_proxy"] = "http://127.0.0.1:11434"

user_input = "I don't like to hike"
full_response = []

prompt = """
You are a bot that makes recommendations for activities. You answer in very short sentences and do not include extra information.
The user input is: {user_input}
Compile a recommendation to the user based on the recommended activity and the user input.
"""

url = 'http://localhost:11434/api/generate'
data = {
    "model": "llama3.1",
    "prompt": prompt.format(user_input=user_input)
}
headers = {'Content-Type': 'application/json'}
# POST the prompt; the endpoint streams one JSON object per line
response = requests.post(url, data=json.dumps(data), headers=headers, stream=True)

print(response)

try:
    for line in response.iter_lines():
        if line:
            decoded_line = json.loads(line.decode('utf-8'))
            full_response.append(decoded_line['response'])
finally:
    response.close()

print(''.join(full_response))

Output:

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-18912:/public/home/scnlbe5oi5/Downloads# python ollama_demo-1.py
<Response [200]>
Try kayaking or paddleboarding instead!

5.4 Calling the API with curl

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt":"Why is the sky blue?"
}'

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
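Both endpoints stream one JSON object per line by default. If a single JSON response is easier to handle, the API also accepts a stream flag; a sketch:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'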

5.5 View Model Information

# List local models
ollama list

# Show model information
ollama show llama3.1:8b
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:8b     91ab477bec9d    4.7 GB  About an hour ago
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama show llama3.1:8b
  Model
        arch                    llama
        parameters              8.0B
        quantization            Q4_0
        context length          131072
        embedding length        4096

  Parameters
        stop    "<|start_header_id|>"
        stop    "<|end_header_id|>"
        stop    "<|eot_id|>"

  License
        LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
        Llama 3.1 Version Release Date: July 23, 2024

6. Create a New Model

6.1 View the Modelfile

ollama show llama3.1:8b --modelfile
root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama show llama3.1:8b --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM llama3.1:8b

FROM /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the orginal use question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
LICENSE "LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
Llama 3.1 Version Release Date: July 23, 2024

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.

“Documentation” means the specifications, manuals and documentation accompanying Llama 3.1
distributed by Meta at https://llama.meta.com/doc/overview.

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.

“Llama 3.1” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.

“Llama Materials” means, collectively, Meta’s proprietary Llama 3.1 and Documentation (and any
portion thereof) made available under this Agreement.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.

1. License Rights and Redistribution.

  a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.

  b. Redistribution and Use.

      i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service (including another AI model) that contains any of them, you shall (A)
provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with
Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use
the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at
the beginning of any such AI model name.

      ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.

      iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.1 is
licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”

      iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by
reference into this Agreement.

2. Additional Commercial Terms. If, on the Llama 3.1 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

  a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.

  b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.

  c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.

# Llama 3.1 Acceptable Use Policy

Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.1. If you
access or use Llama 3.1, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)

## Prohibited Uses

We want everyone to use Llama 3.1 safely and responsibly. You agree you will not use, or allow
others to use, Llama 3.1 to:

1. Violate the law or others’ rights, including to:
    1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
        1. Violence or terrorism
        2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
        3. Human trafficking, exploitation, and sexual violence
        4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
        5. Sexual solicitation
        6. Any other criminal activity
    3. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
    4. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
    5. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
    6. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
    7. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
    8. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system

2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.1 related to the following:
    1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
    2. Guns and illegal weapons (including weapon development)
    3. Illegal drugs and regulated/controlled substances
    4. Operation of critical infrastructure, transportation technologies, or heavy machinery
    5. Self-harm or harm to others, including suicide, cutting, and eating disorders
    6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual

3. Intentionally deceive or mislead others, including use of Llama 3.1 related to the following:
    1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
    2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
    3. Generating, promoting, or further distributing spam
    4. Impersonating another individual without consent, authorization, or legal right
    5. Representing that the use of Llama 3.1 or outputs are human-generated
    6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement

4. Fail to appropriately disclose to end users any known dangers of your AI system

Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:

* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.1: LlamaUseReport@meta.com"

6.2 Create a Modelfile

Create a file named llama3.1_b8.modelfile:

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM llama3.1:8b
FROM /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe

PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>

6.3 Create the Model

Create the model from the Modelfile:

root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama create llama3.1:8bgpu -f llama3.1_b8.modelfile
transferring model data 100%
using existing layer sha256:8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb
creating new layer sha256:eec90fbb0880f5c8e28e15cca5905fd0a65ada486db2356321e407ebeab3e093
writing manifest
success

6.4 View the Model Information

root@notebook-1823641624653922306-scnlbe5oi5-42808:~/.ollama/models# ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:8bgpu  e3c5478172ae    4.7 GB  3 minutes ago
llama3.1:8b     91ab477bec9d    4.7 GB  8 minutes ago
root@notebook-1823641624653922306-scnlbe5oi5-42808:~/.ollama/models# ollama show llama3.1:8bgpu
  Model
        arch                    llama
        parameters              8.0B
        quantization            Q4_0
        context length          131072
        embedding length        4096

  Parameters
        stop    "<|start_header_id|>"
        stop    "<|end_header_id|>"
        stop    "<|eot_id|>"

6.5 Run the Model

root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama run llama3.1:8bgpu
>>> hi
, i hope you can help me out. my cat is 9 months old and has been eating his food normally until
recently when he started to get very interested in the litter box contents. he's a kitten still so
i'm not sure if it's a sign of something more serious or just a phase.
it sounds like a phase, but let's break down what might be going on here:
at 9 months old, your cat is still a young adult and experiencing a lot of curiosity about the world
around him. this curiosity can manifest in many ways, including an interest in eating things that
aren't food, like litter box contents.
there are a few possible reasons why your kitten might be eating litter box contents:
1. **boredom**: kittens need mental and physical stimulation. if he's not getting enough playtime or
engaging activities, he might turn to the litter box as

V. FAQ

Q: The requested URL could not be retrieved

Reference: 502 error when ollama calls a local model

root@notebook-1823641624653922306-scnlbe5oi5-18912:~# curl http://127.0.0.1:11434/api/chat -d '{
>   "model": "llama3.1",
>   "messages": [
>     { "role": "user", "content": "why is the sky blue?" }
>   ]
> }'
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head>
<meta type="copyright" content="Copyright (C) 1996-2016 The Squid Software Foundation and contributors">
<meta http-equiv="Content-Type" CONTENT="text/html; charset=utf-8">
<title>ERROR: The requested URL could not be retrieved</title>
<style type="text/css"><!--
 /*
 * Copyright (C) 1996-2016 The Squid Software Foundation and contributors
 *
 * Squid software is distributed under GPLv2+ license and includes
 * contributions from numerous individuals and organizations.
 * Please see the COPYING and CONTRIBUTORS files for details.
 */

/*
 Stylesheet for Squid Error pages
 Adapted from design by Free CSS Templates
 http://www.freecsstemplates.org
 Released for free under a Creative Commons Attribution 2.5 License
*/

/* Page basics */
* {
        font-family: verdana, sans-serif;
}

html body {
        margin: 0;
        padding: 0;
        background: #efefef;
        font-size: 12px;
        color: #1e1e1e;
}

/* Page displayed title area */
#titles {
        margin-left: 15px;
        padding: 10px;
        padding-left: 100px;
        background: url('/squid-internal-static/icons/SN.png') no-repeat left;
}

/* initial title */
#titles h1 {
        color: #000000;
}
#titles h2 {
        color: #000000;
}

/* special event: FTP success page titles */
#titles ftpsuccess {
        background-color:#00ff00;
        width:100%;
}

/* Page displayed body content area */
#content {
        padding: 10px;
        background: #ffffff;
}

/* General text */
p {
}

/* error brief description */
#error p {
}

/* some data which may have caused the problem */
#data {
}

/* the error message received from the system or other software */
#sysmsg {
}

pre {
    font-family:sans-serif;
}

/* special event: FTP directory listing */
#dirmsg {
    font-family: courier;
    color: black;
    font-size: 10pt;
}
#dirlisting {
    margin-left: 2%;
    margin-right: 2%;
}
#dirlisting tr.entry td.icon,td.filename,td.size,td.date {
    border-bottom: groove;
}
#dirlisting td.size {
    width: 50px;
    text-align: right;
    padding-right: 5px;
}

/* horizontal lines */
hr {
        margin: 0;
}

/* page displayed footer area */
#footer {
        font-size: 9px;
        padding-left: 10px;
}


body
:lang(fa) { direction: rtl; font-size: 100%; font-family: Tahoma, Roya, sans-serif; float: right; }
:lang(he) { direction: rtl; }
 --></style>
</head><body id=ERR_CONNECT_FAIL>
<div id="titles">
<h1>ERROR</h1>
<h2>The requested URL could not be retrieved</h2>
</div>
<hr>

<div id="content">
<p>The following error was encountered while trying to retrieve the URL: <a href="http://127.0.0.1:11434/api/chat">http://127.0.0.1:11434/api/chat</a></p>

<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>

<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>

<p>The remote host or network may be down. Please try the request again.</p>

<p>Your cache administrator is <a href="mailto:root?subject=CacheErrorInfo%20-%20ERR_CONNECT_FAIL&amp;body=CacheHost%3A%20proxy01%0D%0AErrPage%3A%20ERR_CONNECT_FAIL%0D%0AErr%3A%20(111)%20Connection%20refused%0D%0ATimeStamp%3A%20Fri,%2030%20Aug%202024%2012%3A30%3A02%20GMT%0D%0A%0D%0AClientIP%3A%2010.13.1.23%0D%0AServerIP%3A%20127.0.0.1%0D%0A%0D%0AHTTP%20Request%3A%0D%0APOST%20%2Fapi%2Fchat%20HTTP%2F1.1%0AProxy-Authorization%3A%20Basic%20c2NubGJlNW9pNTo4MmMwYTM5ZQ%3D%3D%0D%0AUser-Agent%3A%20curl%2F7.68.0%0D%0AAccept%3A%20*%2F*%0D%0AProxy-Connection%3A%20Keep-Alive%0D%0AContent-Length%3A%20104%0D%0AContent-Type%3A%20application%2Fx-www-form-urlencoded%0D%0AHost%3A%20127.0.0.1%3A11434%0D%0A%0D%0A%0D%0A">root</a>.</p>

<br>
</div>

<hr>
<div id="footer">
<p>Generated Fri, 30 Aug 2024 12:30:02 GMT by proxy01 (squid/3.5.20)</p>
<!-- ERR_CONNECT_FAIL -->
</div>
</body></html>

Cause: the test server routes traffic through a proxy by default.

Fix: bypass the proxy.

curl --noproxy "*" http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
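Alternatively, disable the proxy for local addresses in the current shell (which variables matter depends on how the proxy is configured in your environment):

# Unset the proxy variables for this shell ...
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
# ... or exclude local addresses from proxying
export no_proxy="127.0.0.1,localhost"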

Q: <Response [503]>

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-18912:/public/home/scnlbe5oi5/Downloads# python ollama_demo.py
<Response [503]>
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/ollama_demo.py", line 65, in <module>
    decoded_line = json.loads(line.decode('utf-8'))
  File "/opt/conda/envs/ollama/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/conda/envs/ollama/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/conda/envs/ollama/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Cause: the test server routes traffic through a proxy by default.

Fix: bypass the proxy.

# Default Ollama listen address
export OLLAMA_HOST=127.0.0.1:11434
ollama serve

# In the Python script, point the proxy variables at the local Ollama address
# (or unset http_proxy/https_proxy entirely) so requests are not routed
# through the external proxy
os.environ["http_proxy"] = "http://127.0.0.1:11434"
os.environ["https_proxy"] = "http://127.0.0.1:11434"