Recently I deployed DeepSeek-R1-AWQ (671B) and then the latest DeepSeek-V3-0324 (685B) on an 8×H20 machine, and benchmarked both inference performance and math problem scores. The server was provided by Volcengine. First, a look at the machine configuration:
8×H20 machine configuration
GPU:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08 Driver Version: 535.161.08 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA H20 On | 00000000:65:02.0 Off | 0 |
| N/A 29C P0 71W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA H20 On | 00000000:65:03.0 Off | 0 |
| N/A 32C P0 72W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA H20 On | 00000000:67:02.0 Off | 0 |
| N/A 32C P0 74W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA H20 On | 00000000:67:03.0 Off | 0 |
| N/A 30C P0 73W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA H20 On | 00000000:69:02.0 Off | 0 |
| N/A 30C P0 74W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA H20 On | 00000000:69:03.0 Off | 0 |
| N/A 33C P0 74W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA H20 On | 00000000:6B:02.0 Off | 0 |
| N/A 33C P0 73W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 7 NVIDIA H20 On | 00000000:6B:03.0 Off | 0 |
| N/A 29C P0 75W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
One pitfall hit here: the original driver version was buggy. It worked fine on an RTX 4090, but on the H20 every inference attempt with DeepSeek-R1-AWQ crashed, no matter which configurations and software versions were tried. After switching to the driver version NVIDIA's website recommends for the H20, Driver Version 550.144.03 (CUDA 12.4), the problem disappeared without changing any other configuration.
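To confirm which driver is actually active before and after such a swap, nvidia-smi can report it directly (these are standard nvidia-smi query options):

# Print the driver version reported by each GPU
nvidia-smi --query-gpu=index,driver_version --format=csv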
GPU interconnect:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 X OK OK OK OK OK OK OK
GPU1 OK X OK OK OK OK OK OK
GPU2 OK OK X OK OK OK OK OK
GPU3 OK OK OK X OK OK OK OK
GPU4 OK OK OK OK X OK OK OK
GPU5 OK OK OK OK OK X OK OK
GPU6 OK OK OK OK OK OK X OK
GPU7 OK OK OK OK OK OK OK X
Legend:
X = Self
OK = Status Ok
CNS = Chipset not supported
GNS = GPU not supported
TNS = Topology not supported
NS = Not supported
U = Unknown
Memory:
root@H20:/data/ai/models# free -g
total used free shared buff/cache available
Mem: 1929 29 1891 0 9 1892
Swap: 0 0 0
Disks:
vda 252:0 0 100G 0 disk
├─vda1 252:1 0 200M 0 part /boot/efi
└─vda2 252:2 0 99.8G 0 part /
nvme3n1 259:0 0 3.5T 0 disk
nvme2n1 259:1 0 3.5T 0 disk
nvme0n1 259:2 0 3.5T 0 disk
nvme1n1 259:3 0 3.5T 0 disk
OS:
root@H20:/data/ai/models# uname -a
Linux H20 5.4.0-162-generic #179-Ubuntu SMP Mon Aug 14 08:51:31 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
root@H20:/data/ai/models# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
Launching inference
Launch the inference service with vLLM v0.8.2, serving each of the following two models in turn (a sample launch command is sketched after the list):
- DeepSeek-R1-AWQ: https://huggingface.co/cognitivecomputations/DeepSeek-R1-AWQ
- DeepSeek-V3-0324: https://modelscope.cn/models/deepseek-ai/DeepSeek-V3-0324
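For reference, the launch looked roughly like the following. This is a minimal sketch rather than the exact command used, and the local model path is a placeholder; the port 7800 and served model name DeepSeek-R1 are taken from the benchmark command below:

# Minimal sketch: serve the checkpoint across all 8 GPUs with tensor
# parallelism, exposing an OpenAI-compatible API for the benchmark to hit.
# /data/ai/models/DeepSeek-R1-AWQ is a placeholder local path.
vllm serve /data/ai/models/DeepSeek-R1-AWQ \
    --served-model-name DeepSeek-R1 \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --port 7800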
H20 performance benchmark
Kick off the benchmark:
nohup python3 -u simple-bench-to-api.py --url http://localhost:7800/v1 \
--model DeepSeek-R1 \
--concurrencys 1,10,20,30,40,50 \
--prompt "Introduce the history of China" \
--max_tokens 100,1024,16384,32768,65536,131072 \
--api_key sk-xxx \
--duration_seconds 30 \
> benth-DeepSeek-R1-AWQ-8-H20.log 2>&1 &
This command sweeps max_tokens over 100, 1024, 16384, 32768, 65536, and 131072, and for each value runs the batch test at concurrency levels 1, 10, 20, ..., 50. Each max_tokens value yields one table across the concurrency levels. The benchmark script simple-bench-to-api.py and detailed parameter descriptions are covered in the previous article, https://blog.csdn.net/weixin_53138109/article/details/146919527, for anyone who needs them.
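Before a full sweep, it is worth sanity-checking the endpoint with a single request shaped like the ones the script sends. The sketch below assumes vLLM's standard OpenAI-compatible chat completions route and mirrors one benchmark call at max_tokens=100:

# Single hand-rolled request against the OpenAI-compatible endpoint,
# using the same URL, API key, model name, and prompt as the benchmark
curl http://localhost:7800/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxx" \
  -d '{
    "model": "DeepSeek-R1",
    "messages": [{"role": "user", "content": "Introduce the history of China"}],
    "max_tokens": 100
  }'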
Benchmark results:
Measured performance of DeepSeek-R1-AWQ on 8×H20
----- max_tokens=100 benchmark summary -----
Metric \ Concurrency | 1 | 10 | 20 | 30 | 40 | 50 |
---|---|---|---|---|---|---|
Total requests | 4 | 40 | 80 | 120 | 160 | 200 |
Success rate | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% |
Avg latency | 7.8265s | 8.1742s | 8.3271s | 8.6902s | 8.7426s | 9.0815s |
Max latency | 7.9687s | 8.2911s | 8.4582s | 9.0513s | 9.0191s | 9.4417s |
Min latency | 7.7197s | 8.1062s | 8.1941s | 8.4626s | 8.4411s | 8.7822s |
P90 latency | 7.9226s | 8.2208s | 8.4206s | 8.9813s | 8.9725s | 9.2873s |
P95 latency | 7.9456s | 8.2801s | 8.4312s | 9.0094s | 8.9932s | 9.3191s |
P99 latency | 7.9641s | 8.2879s | 8.4574s | 9.0323s | 9.0047s | 9.4240s |
Avg time to first token | 7.8265s | 8.1742s | 8.3271s | 8.6902s | 8.7426s | 9.0815s |
Total generated tokens | 400 | 4000 | 8000 | 12000 | 16000 | 20000 |
Min throughput per stream | 12.55 tokens/s | 12.06 tokens/s | 11.82 tokens/s | 11.05 tokens/s | 11.09 tokens/s | 10.59 tokens/s |
Max throughput per stream |