[AIGC] Local LLM Deployment Notes for Intel Macs (llama.cpp)

This article summarizes experience deploying LLMs on Intel Macs with llama.cpp and OpenVINO. Although llama.cpp can use the Metal GPU, the author ultimately chose CPU inference. The Python binding of llama.cpp needs some specific setup, and llama.cpp itself performs better in CPU mode than on the GPU. Comparative experiments show that, on Intel Macs, OpenVINO wins on both output quality and speed, so the recommendation is to deploy OpenVINO with a small model.

As everyone who read the title already knows: yes, it is finally llama.cpp's turn. Conclusion first: this llama.cpp deployment does manage to run inference on the Metal GPU of an Intel-based MacBook Pro.

For various reasons, though, I ultimately settled on CPU inference.

Before getting to the main topic, a follow-up on the earlier question of Ollama not being able to use Metal GPUs for inference. The short answer: to get Metal GPU inference you have to download the source from GitHub and compile it locally (adding the Metal-related CMake parameters at build time). In addition, the Docker version of Ollama cannot use Metal acceleration, because Docker only supports CUDA.
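As a rough illustration (not the exact steps from that earlier experiment), a source build of Ollama follows the Go workflow the project documented at the time; "go generate" drives the bundled llama.cpp CMake build, which is where the Metal-related flags come into play:

# Hedged sketch: build Ollama from source so Metal can be enabled at compile time
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...   # runs the embedded llama.cpp CMake build (Metal flags applied here)
go build .
./ollama serve      # start the locally built binary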

1. llama-cpp-python implementation

Since llama-cpp-python is the Python binding for llama.cpp, it is worth covering first.
MacOS Install with Metal GPU - llama-cpp-python
Rereading the llama-cpp-python cookbook, two key points stand out:

  1. The Xcode installation path

As the cookbook shows, xcode-select must point to the full Xcode developer directory; otherwise the project will still compile, but Metal will not actually work. Switching is done with the "--switch" option:

# Switch the active Xcode developer directory
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
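
You can confirm which developer directory is active with xcode-select -p; it should print the Xcode.app path rather than the standalone Command Line Tools:

# Verify the active developer directory
xcode-select -p
# Expected output: /Applications/Xcode.app/Contents/Developer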
  2. Use a 4-bit quantized GGUF v2 model

As the cookbook notes, the model should be a 4-bit quantized GGUF file, and the documentation mentions TheBloke's quantized models as a good choice.
Judging from the model card of "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF", TheBloke's GGUF models are essentially all GGUF v2, so there is no need to quantize anything yourself; just download one and use it:

(base) yuanzhenhui@MacBook-Pro hub % huggingface-cli download --resume-download TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF tinyllama-1.1b-chat-v1.0.Q4_0.gguf --local-dir-use-symlinks False
Consider using `hf_transfer` for faster downloads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
downloading https://hf-mirror.com/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_0.gguf to /Users/yuanzhenhui/.cache/huggingface/hub/models--TheBloke--TinyLlama-1.1B-Chat-v1.0-GGUF/blobs/da3087fb14aede55fde6eb81a0e55e886810e43509ec82ecdc7aa5d62a03b556.incomplete
tinyllama-1.1b-chat-v1.0.Q4_0.gguf: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 638M/638M [05:56<00:00, 1.79MB/s]
/Users/yuanzhenhui/.cache/huggingface/hub/models--TheBloke--TinyLlama-1.1B-Chat-v1.0-GGUF/snapshots/52e7645ba7c309695bec7ac98f4f005b139cf465/tinyllama-1.1b-chat-v1.0.Q4_0.gguf
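
As the log shows, the file is actually pulled from hf-mirror.com. If you want to point huggingface-cli at the mirror explicitly, the HF_ENDPOINT environment variable does that; a hedged example using the same repo and file as above:

# Download via the hf-mirror endpoint explicitly
HF_ENDPOINT=https://hf-mirror.com huggingface-cli download --resume-download \
  TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF tinyllama-1.1b-chat-v1.0.Q4_0.gguf \
  --local-dir-use-symlinks False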

With those key points taken care of, let's create a fresh Python environment for verification:

(base) yuanzhenhui@MacBook-Pro llama.cpp % conda create -n llm python=3.11.7 
Channels:
 - defaults
 - conda-forge
Platform: osx-64
Collecting package metadata (repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
    current version: 24.3.0
    latest version: 24.5.0

Please update conda by running

    $ conda update -n base -c conda-forge conda

...

Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate llm
#
# To deactivate an active environment, use
#
#     $ conda deactivate

(base) yuanzhenhui@MacBook-Pro llama.cpp % conda activate llm
(llm) yuanzhenhui@MacBook-Pro llama.cpp % 

Miniforge3 was already installed in the previous article, so there is no need to install it again here.

When you run conda create, the new environment is placed automatically under "/Users/yuanzhenhui/miniforge3/envs/". If you have not installed Miniforge3 yet, please go back to the previous article and follow the instructions there first.
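
To double-check where the environment actually landed, conda env list prints every environment together with its prefix:

# List all conda environments and their install locations
conda env list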

After that, llama-cpp-python can be installed:

(llm) yuanzhenhui@MacBook-Pro ~ % pip uninstall llama-cpp-python
WARNING: Skipping llama-cpp-python as it is not installed.

(llm) yuanzhenhui@MacBook-Pro ~ % CMAKE_ARGS="-DLLAMA_METAL=on" pip install -U llama-cpp-python==0.2.27 --no-cache-dir
Collecting llama-cpp-python==0.2.27
  Using cached llama_cpp_python-0.2.27-cp311-cp311-macosx_10_16_x86_64.whl
Collecting typing-extensions>=4.5.0 (from llama-cpp-python==0.2.27)
  Downloading typing_extensions-4.11.0-py3-none-any.whl.metadata (3.0 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python==0.2.27)
  Downloading numpy-1.26.4-cp311-cp311-macosx_10_9_x86_64.whl.metadata (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.1/61.1 kB 239.8 kB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python==0.2.27)
  Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 726.9 kB/s eta 0:00:00
Downloading numpy-1.26.4-cp311-cp311-macosx_10_9_x86_64.whl (20.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.6/20.6 MB 95.5 kB/s eta 0:00:00
Downloading typing_extensions-4.11.0-py3-none-any.whl (34 kB)
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
Successfully installed diskcache-5.6.3 llama-cpp-python-0.2.27 numpy-1.26.4 typing-extensions-4.11.0

(llm) yuanzhenhui@MacBook-Pro ~ % pip install 'llama-cpp-python[server]'
Requirement already satisfied: llama-cpp-python[server] in /Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages (0.2.27)
...
Installing collected packages: websockets, uvloop, ujson, sniffio, shellingham, pyyaml, python-multipart, python-dotenv, pygments, pydantic-core, orjson, mdurl, MarkupSafe, idna, httptools, h11, dnspython, click, certifi, annotated-types, uvicorn, pydantic, markdown-it-py, jinja2, httpcore, email_validator, anyio, watchfiles, starlette, rich, pydantic-settings, httpx, typer, starlette-context, sse-starlette, fastapi-cli, fastapi
Successfully installed MarkupSafe-2.1.5 annotated-types-0.6.0 anyio-4.3.0 certifi-2024.2.2 click-8.1.7 dnspython-2.6.1 email_validator-2.1.1 fastapi-0.111.0 fastapi-cli-0.0.3 h11-0.14.0 httpcore-1.0.5 httptools-0.6.1 httpx-0.27.0 idna-3.7 jinja2-3.1.4 markdown-it-py-3.0.0 mdurl-0.1.2 orjson-3.10.3 pydantic-2.7.1 pydantic-core-2.18.2 pydantic-settings-2.2.1 pygments-2.18.0 python-dotenv-1.0.1 python-multipart-0.0.9 pyyaml-6.0.1 rich-13.7.1 shellingham-1.5.4 sniffio-1.3.1 sse-starlette-2.1.0 starlette-0.37.2 starlette-context-0.3.6 typer-0.12.3 ujson-5.10.0 uvicorn-0.29.0 uvloop-0.19.0 watchfiles-0.21.0 websockets-12.0

Three points about the llama-cpp-python installation deserve attention:

  1. The "-DLLAMA_METAL=on" CMake argument is required so that llama-cpp-python is built with Metal acceleration;
  2. Pinning llama-cpp-python==0.2.27 is also required, because the latest version could not use GPU acceleration here (presumably it has been reworked around the M-series chips);
  3. llama-cpp-python[server] must be installed as well, otherwise the model cannot be called directly through an API.

With everything in place you can run the tinyllama model; a launch command is sketched just below. On startup, though, you may run into the error that follows it:
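
This is a hedged sketch rather than the exact command used here; it reuses the llama_cpp.server flags that appear later in this article, with the model path as a placeholder for wherever huggingface-cli cached the file:

# Hypothetical tinyllama launch; --n_gpu_layers -1 offloads all layers to Metal
python3 -m llama_cpp.server \
  --model /path/to/tinyllama-1.1b-chat-v1.0.Q4_0.gguf \
  --n_gpu_layers -1 --n_ctx 2048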

ggml_metal_init: allocating
ggml_metal_init: found device: Intel(R) Iris(TM) Plus Graphics
ggml_metal_init: picking default device: Intel(R) Iris(TM) Plus Graphics
ggml_metal_init: ggml.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:58:9: error: invalid type 'const constant int64_t &' (aka 'const constant long &') for buffer declaration
        constant  int64_t & ne00,
        ^~~~~~~~~~~~~~~~~~~~~~~~
program_source:58:19: note: type 'int64_t' (aka 'long') cannot be used in buffer pointee type
        constant  int64_t & ne00,
                  ^
program_source:59:9: error: invalid type 'const constant int64_t &' (aka 'const constant long &') for buffer declaration
        constant  int64_t & ne01,
...

Hmm... when this happens, simply uninstalling and reinstalling llama-cpp-python a few more times makes it go away (I have no real explanation either).

(llm) yuanzhenhui@MacBook-Pro ~ % pip uninstall llama-cpp-python
Found existing installation: llama_cpp_python 0.2.27
Uninstalling llama_cpp_python-0.2.27:
  Would remove:
...
  Successfully uninstalled llama_cpp_python-0.2.27
(llm) yuanzhenhui@MacBook-Pro ~ % pip cache purge
Files removed: 307
(llm) yuanzhenhui@MacBook-Pro ~ % CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python==0.2.27 --no-cache-dir
(llm) yuanzhenhui@MacBook-Pro ~ % pip install 'llama-cpp-python[server]'

Although tinyllama runs fine, unfortunately it cannot produce Chinese output. Since Chinese output is a requirement, I picked up a model named "llama-2-7b-langchain-chat.Q4_0.gguf" instead; it runs as follows:

(llm) yuanzhenhui@MacBook-Pro Downloads % python3 -m llama_cpp.server --model /Users/yuanzhenhui/Downloads/llama-2-7b-langchain-chat.Q4_0.gguf --n_gpu_layers 16 --n_ctx 512 --n_threads 4 --cache true --chat_format chatml
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /Users/yuanzhenhui/Downloads/llama-2-7b-langchain-chat.Q4_0.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
...
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
...
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size       =    0.11 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size =  2048.00 MiB, offs =            0
ggml_backend_metal_buffer_from_ptr: allocated buffer, size =  1703.12 MiB, offs =   2039959552, ( 3755.41 /  1536.00)ggml_backend_metal_buffer_from_ptr: warning: current allocated size is greater than the recommended max working set size
llm_load_tensors: system memory used  = 3647.98 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Intel(R) Iris(TM) Plus Graphics
ggml_metal_init: picking default device: Intel(R) Iris(TM) Plus Graphics
ggml_metal_init: ggml.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: GPU name:   Intel(R) Iris(TM) Plus Graphics
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  =  1610.61 MB
ggml_metal_init: maxTransferRate               = built-in GPU
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   256.00 MiB, ( 4018.91 /  1536.00)ggml_backend_metal_buffer_type_alloc_buffer: warning: current allocated size is greater than the recommended max working set size
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =     0.00 MiB, ( 4018.92 /  1536.00)ggml_backend_metal_buffer_type_alloc_buffer: warning: current allocated size is greater than the recommended max working set size
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 73.69 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =    70.50 MiB, ( 4089.41 /  1536.00)ggml_backend_metal_buffer_type_alloc_buffer: warning: current allocated size is greater than the recommended max working set size
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
Using ram cache with size 2147483648
INFO:     Started server process [61420]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
Llama._create_completion: cache miss

llama_print_timings:        load time =   16405.52 ms
llama_print_timings:      sample time =      88.29 ms /   423 runs   (    0.21 ms per token,  4791.08 tokens per second)
llama_print_timings: prompt eval time =       0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_print_timings:        eval time =   88859.66 ms /   423 runs   (  210.07 ms per token,     4.76 tokens per second)
llama_print_timings:       total time =   92104.09 ms
Llama._create_completion: cache save
Llama.save_state: saving llama state
Llama.save_state: got state size: 334053420
Llama.save_state: allocated state
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   255.50 MiB, ( 4359.41 /  1536.00)ggml_backend_metal_buffer_type_alloc_buffer: warning: current allocated size is greater than the recommended max working set size
Llama.save_state: copied llama state: 333537328
Llama.save_state: saving 333537328 bytes of llama state
INFO:     127.0.0.1:53543 - "POST /v1/chat/completions HTTP/1.1" 200 OK
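
The request logged above went to the OpenAI-compatible endpoint exposed by llama_cpp.server; a curl call along these lines (the prompt is just an example) reproduces it:

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "请用中文介绍一下大语言模型"}
        ],
        "max_tokens": 512
      }'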

When the --n_gpu_layers parameter is used, the log warns that the model needs more video memory than this machine has. That does not stop it from running, so the warning can be ignored: when GPU resources run short the work queues up and waits, and if that is still not enough the CPU is pulled in for inference. Also, because the cache option is enabled, asking the same question a second time does not require re-parsing the same prompt. Honestly, though, a 7B model is too big for this machine: the answers do come out, but the performance does not keep up. Of the two generations, one took 108 seconds and the other 92 seconds, and that was already with Metal inference...
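
Since Metal offload did not translate into speed here, a pure CPU run is the natural thing to compare against; setting --n_gpu_layers to 0 keeps every layer on the CPU (a hedged variant of the command above, not a command taken from the original run):

# CPU-only variant of the earlier launch command
python3 -m llama_cpp.server \
  --model /Users/yuanzhenhui/Downloads/llama-2-7b-langchain-chat.Q4_0.gguf \
  --n_gpu_layers 0 --n_ctx 512 --n_threads 4 --cache true --chat_format chatml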

Surely TheBloke has some smaller Chinese-capable models?

I then went through TheBloke's Chinese models on Hugging Face, but clicking into them one by one, every single model is several gigabytes at a minimum...

😮‍💨... At this point, with a "what if it just works" attitude, I gave "qwen1_5-0_5b-chat-q4_0.gguf" a try; the download sketch and the result follow:
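
The file can be fetched the same way as before; this is a hedged download command, with the repo id taken from the cache path shown in the log below:

# Download the 4-bit Qwen1.5-0.5B chat GGUF
huggingface-cli download --resume-download Qwen/Qwen1.5-0.5B-Chat-GGUF \
  qwen1_5-0_5b-chat-q4_0.gguf --local-dir-use-symlinks False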

(llm) yuanzhenhui@MacBook-Pro ~ % python3 -m llama_cpp.server --model /Users/yuanzhenhui/.cache/huggingface/hub/models--Qwen--Qwen1.5-0.5B-Chat-GGUF/snapshots/cfab082d2fef4a8736ef384dc764c2fb6887f387/qwen1_5-0_5b-chat-q4_0.gguf --n_gpu_layers -1 --n_ctx 512
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /Users/yuanzhenhui/.cache/huggingface/hub/models--Qwen--Qwen1.5-0.5B-Chat-GGUF/snapshots/cfab082d2fef4a8736ef384dc764c2fb6887f387/qwen1_5-0_5b-chat-q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen1.5-0.5B-Chat-AWQ-fp16
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 24
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 1024
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 2816
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 16
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  10:                qwen2.use_parallel_residual bool             = true
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  14:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  15:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  16:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  18:                    tokenizer.chat_template str              = {% for message in messages %}{{ '<|im_...
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - kv  20:                          general.file_type u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q4_0:  169 tensors
llama_model_loader: - type q6_K:    1 tensors
error loading model: unknown model architecture: 'qwen2'
llama_load_model_from_file: failed to load model
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/server/__main__.py", line 88, in <module>
    main()
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/server/__main__.py", line 74, in main
    app = create_app(
          ^^^^^^^^^^^
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/server/app.py", line 133, in create_app
    set_llama_proxy(model_settings=model_settings)
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/server/app.py", line 70, in set_llama_proxy
    _llama_proxy = LlamaProxy(models=model_settings)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/server/model.py", line 27, in __init__
    self._current_model = self.load_llama_from_model_settings(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/server/model.py", line 75, in load_llama_from_model_settings
    _model = llama_cpp.Llama(
             ^^^^^^^^^^^^^^^^
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/llama.py", line 962, in __init__
    self._n_vocab = self.n_vocab()
                    ^^^^^^^^^^^^^^
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/llama.py", line 2274, in n_vocab
    return self._model.n_vocab()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yuanzhenhui/miniforge3/envs/llm/lib/python3.11/site-packages/llama_cpp/llama.py", line 251, in n_vocab
    assert self.model is not None
           ^^^^^^^^^^^^^^^^^^^^^^
AssertionError

It errors out right at startup with "unknown model architecture: 'qwen2'", which means llama-cpp-python 0.2.27 does not support the qwen2 model architecture. In that case, time to switch tools: llama.cpp itself does support Qwen, so let's try llama.cpp directly.

2. llama.cpp implementation

First, grab the latest llama.cpp source from GitHub:

(base) yuanzhenhui@MacBook-Pro c++ % git clone https://github.com/ggerganov/llama.cpp.git           
Cloning into 'llama.cpp'...
remote: Enumerating objects: 24649, done.
remote: Counting objects: 100% (24649/24649), done.
remote: Compressing objects: 100% (7051/7051), done.
remote: Total 24649 (delta 17542), reused 24341 (delta 17378), pack-reused 0
Receiving objects: 100% (24649/24649), 41.57 MiB | 1.74 MiB/s, done.
Resolving deltas: 100% (17542/17542), done.

Then compile it with make:

(base) yuanzhenhui@MacBook-Pro llama.cpp % LLAMA_METAL=1 make -j4 
I ccache found, compilation results will be cached. Disable with LLAMA_NO_CCACHE.
I llama.cpp build info: 
I UNAME_S:   Darwin
I UNAME_P:   i386
I UNAME_M:   x86_64
I CFLAGS:    -I. -Icommon -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -DNDEBUG -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK -DACCELERATE_LAPACK_ILP64 -DGGML_USE_LLAMAFILE -DGGML_USE_METAL  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion 
I CXXFLAGS:  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -I. -Icommon -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -DNDEBUG -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK -DACCELERATE_LAPACK_ILP64 -DGGML_USE_LLAMAFILE -DGGML_USE_METAL 
I NVCCFLAGS: -std=c++11 -O3 
I LDFLAGS:   -framework Accelerate -framework Foundation -framework Metal -framework MetalKit 
I CC:        Apple clang version 14.0.3 (clang-1403.0.22.14.1)
I CXX:       Apple clang version 14.0.3 (clang-1403.0.22.14.1)

/usr/local/bin/ccache cc  -I. -Icommon -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -DNDEBUG -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK -DACCELERATE_LAPACK_ILP64 -DGGML_USE_LLAMAFILE -DGGML_USE_METAL  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion    -c ggml.c -o ggml.o
ggml.c:1756:5: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
    GGML_F16_VEC_REDUCE(sumf, 
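
Assuming the build completes, that version of llama.cpp produces the main and server executables in the repository root. A quick smoke test might look like the sketch below; the model path is a placeholder, and -ngl 0 keeps inference entirely on the CPU:

# Hypothetical smoke test of the freshly built binary
./main -m /path/to/qwen1_5-0_5b-chat-q4_0.gguf \
  -p "你好,请介绍一下你自己" -n 128 -t 4 -ngl 0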