PVE GPU Passthrough, NVIDIA Drivers on Ubuntu, and Multi-GPU vLLM Deployment of DeepSeek Llama and QwQ

Based on a PVE virtual machine with PCIe NVIDIA GPU passthrough enabled, this guide installs Ubuntu and vLLM and deploys DeepSeek Llama 32B and QwQ.

PVE Basic Setup

Based on PVE 8.3.

Changing PVE package mirrors

1. Edit sources.list

# cp /etc/apt/sources.list /etc/apt/sources.list.bak
# nano /etc/apt/sources.list
deb https://mirrors.aliyun.com/debian bookworm main contrib

deb https://mirrors.aliyun.com/debian bookworm-updates main contrib

# security updates
deb http://security.debian.org bookworm-security main contrib

2. Edit ceph.list

# nano /etc/apt/sources.list.d/ceph.list

Comment out the enterprise repository:

#deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise

3. Edit pve-enterprise.list

# cp /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.bak
# nano /etc/apt/sources.list.d/pve-enterprise.list
deb https://mirrors.tuna.tsinghua.edu.cn/proxmox/debian/pve bookworm pve-no-subscription

4. Update

# apt update
# apt upgrade

CT templates

# cp /usr/share/perl5/PVE/APLInfo.pm /usr/share/perl5/PVE/APLInfo.pm.bak
# sed -i 's|http://download.proxmox.com|https://mirrors.tuna.tsinghua.edu.cn/proxmox|g' /usr/share/perl5/PVE/APLInfo.pm
# systemctl restart pvedaemon.service

Install the kernel headers

# apt install pve-headers-$(uname -r)

PCIe Passthrough

Official reference: Proxmox VE Administration Guide

1. Edit GRUB

# nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
(intel_iommu=on is for Intel CPUs; on AMD CPUs the IOMMU is enabled by default.)

2. Edit /etc/modules

# nano /etc/modules
vfio
vfio_iommu_type1
vfio_pci

3. PCIe passthrough

Find the GPU's device IDs:
# lspci -nn

02:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] [10de:1e04] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation TU102 High Definition Audio Controller [10de:10f7] (rev a1)
02:00.2 USB controller [0c03]: NVIDIA Corporation TU102 USB 3.1 Host Controller [10de:1ad6] (rev a1)
02:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU102 USB Type-C UCSI Controller [10de:1ad7] (rev a1)

These are the multiple device IDs of a single graphics card: the GPU itself, its audio device, USB controller, and serial bus controller. Devices of the same model share the same IDs, so each model only needs to be recorded once, but every function (controller) of the card must be included.
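As a sketch, the ids= list for /etc/modprobe.d/vfio.conf can be assembled from the lspci -nn output mechanically. The sample output is hard-coded below for illustration; on a real host, pipe `lspci -nn -s 02:00` instead:

```shell
# Sample `lspci -nn` output for the card (hard-coded for illustration).
sample='02:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] [10de:1e04] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation TU102 High Definition Audio Controller [10de:10f7] (rev a1)
02:00.2 USB controller [0c03]: NVIDIA Corporation TU102 USB 3.1 Host Controller [10de:1ad6] (rev a1)
02:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU102 USB Type-C UCSI Controller [10de:1ad7] (rev a1)'
# Pull every [vendor:device] pair (class codes like [0300] have no colon
# and are skipped), strip the brackets, and join with commas.
ids=$(echo "$sample" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]' | paste -sd, -)
echo "options vfio-pci ids=$ids disable_vga=1"
# → options vfio-pci ids=10de:1e04,10de:10f7,10de:1ad6,10de:1ad7 disable_vga=1
```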

Edit vfio.conf
# nano /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1e04,10de:10f7,10de:1ad6,10de:1ad7 disable_vga=1
Edit iommu_unsafe_interrupts.conf
# nano /etc/modprobe.d/iommu_unsafe_interrupts.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
Edit pve-blacklist.conf
# nano /etc/modprobe.d/pve-blacklist.conf
blacklist nvidiafb
blacklist nvidia
blacklist nouveau
Apply the changes
# update-grub
# update-initramfs -u -k all

If you see: Couldn't find EFI system partition. It is recommended to mount it to /boot or /efi.
Alternatively, use --esp-path= to specify path to mount point.

# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT # find the vfat partition's NAME; here it is sdb2
# mkdir -p /boot/efi
# mount /dev/sdb2 /boot/efi
Reboot
# reboot
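After the reboot, `lspci -nnk -s 02:00.0` should report vfio-pci as the driver in use. A minimal sketch of the check, run here against hard-coded sample output since the real result depends on the host:

```shell
# Sample `lspci -nnk` output (assumed); on a real host run:
#   lspci -nnk -s 02:00.0
sample='02:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [10de:1e04]
  Kernel driver in use: vfio-pci
  Kernel modules: nvidiafb, nouveau'
# Extract the active-driver line; anything other than vfio-pci means the
# blacklist/vfio configuration did not take effect.
echo "$sample" | grep -o 'Kernel driver in use: .*'
# → Kernel driver in use: vfio-pci
```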

Ubuntu Setup

NVIDIA Driver

Install the QEMU guest agent

# apt install qemu-guest-agent

Install the NVIDIA driver

# sudo apt update
# sudo apt upgrade

# sudo add-apt-repository ppa:graphics-drivers/ppa
# sudo apt update

# ubuntu-drivers devices
# sudo ubuntu-drivers autoinstall
# or install a specific version, e.g.
# sudo apt install nvidia-driver-460

# sudo reboot

# nvidia-smi

CUDA和cuDNN

CUDA Toolkit 12.8 Update 1 Downloads | NVIDIA Developer

# wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-ubuntu2404.pin
# sudo mv cuda-ubuntu2404.pin /etc/apt/preferences.d/cuda-repository-pin-600
# wget https://developer.download.nvidia.com/compute/cuda/12.8.1/local_installers/cuda-repo-ubuntu2404-12-8-local_12.8.1-570.124.06-1_amd64.deb
# sudo dpkg -i cuda-repo-ubuntu2404-12-8-local_12.8.1-570.124.06-1_amd64.deb
# sudo cp /var/cuda-repo-ubuntu2404-12-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
# sudo apt-get update
# sudo apt-get -y install cuda-toolkit-12-8

# nvcc --version
# wget https://developer.download.nvidia.com/compute/cudnn/9.8.0/local_installers/cudnn-local-repo-ubuntu2404-9.8.0_1.0-1_amd64.deb
# sudo dpkg -i cudnn-local-repo-ubuntu2404-9.8.0_1.0-1_amd64.deb
# sudo cp /var/cudnn-local-repo-ubuntu2404-9.8.0/cudnn-*-keyring.gpg /usr/share/keyrings/
# sudo apt-get update
# sudo apt-get -y install cudnn

# dpkg -l | grep cudnn  # verify the cuDNN packages
Container support

Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit

# curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    
# sudo apt-get update

# sudo apt-get install -y nvidia-container-toolkit
# sudo nvidia-ctk runtime configure --runtime=docker
# sudo systemctl restart docker

Install vLLM

Using uv

Install Python | uv-zh-cn

# uv add vllm
# uv run vllm serve /home/ubuntu/models/QwQ-32B-AWQ/ --tensor-parallel-size 4 --max-model-len 16384 --gpu-memory-utilization 0.40 --served-model-name llm --api-key token-llm --dtype=half --quantization awq --enable-reasoning --reasoning-parser deepseek_r1
Using Docker
docker run -d --runtime nvidia --gpus all \
    -v /home/ubuntu/models:/models \
    -p 8000:8000 \
    --ipc=host \
    --name Meri \
    --restart unless-stopped \
    vllm/vllm-openai:v0.8.0 \
    --model /models/QwQ-32B-AWQ \
    --tensor-parallel-size 4 \
    --max-model-len 16384 \
    --gpu-memory-utilization 0.40 \
    --served-model-name llm \
    --api-key token-llm \
    --dtype half \
    --quantization awq \
    --enable-reasoning \
    --reasoning-parser deepseek_r1
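Once either server is up, it speaks the OpenAI-compatible API. A sketch of a request, assuming the port mapping, --api-key token-llm, and --served-model-name llm from the commands above:

```shell
# Build the request body; "model" must match --served-model-name.
payload='{"model": "llm", "messages": [{"role": "user", "content": "Hello"}]}'
# Send it (requires the running server, so it is commented out here):
# curl http://localhost:8000/v1/chat/completions \
#   -H "Authorization: Bearer token-llm" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
# Sanity-check that the body is valid JSON:
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
# → payload OK
```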
Parameter notes
  • tensor-parallel-size: degree of tensor parallelism; must be a power of two (here, the number of GPUs)
  • max-model-len: maximum context length
  • gpu-memory-utilization: fraction of each GPU's memory vLLM may use
  • served-model-name: name the model is served under
  • api-key: API key clients must present
  • dtype: data type for model weights and activations
  • quantization: weight quantization method
  • enable-reasoning and reasoning-parser: newer flags for reasoning ("thinking") models
  • --enforce-eager --enable-chunked-prefill --enable-prefix-caching: useful for debugging, at some cost in performance
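As a back-of-the-envelope sketch of what --gpu-memory-utilization 0.40 means per card (the 11 GiB figure for an RTX 2080 Ti is an assumption for illustration):

```shell
# Memory vLLM pre-allocates per GPU = total VRAM * utilization fraction.
vram_mib=11264   # assumed per-GPU VRAM (11 GiB RTX 2080 Ti), in MiB
util_pct=40      # --gpu-memory-utilization 0.40, as a percentage
echo "$(( vram_mib * util_pct / 100 )) MiB reserved per GPU"
# → 4505 MiB reserved per GPU
```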
GGUF models
vllm serve {your.gguf} --tokenizer {your gguf dir}
Miscellaneous
Power limit (watts)
# sudo nvidia-smi -pl 240
Disable NCCL peer-to-peer (NVLink)
# export NCCL_P2P_DISABLE=1
Run on specific GPUs only
# export CUDA_VISIBLE_DEVICES=0,1,2,3
GPU topology (NVLink status)
# nvidia-smi topo -m
Query NVLink status on GPU 0
# nvidia-smi nvlink -i 0 -s

Other

Mounting SMB on Ubuntu

The server's hostname can be added to /etc/hosts.

# sudo mount -t cifs -o username={server host},password={wd} //nas/Server /mnt/nas
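For a mount that survives reboots, a corresponding /etc/fstab entry is a common alternative (a sketch; the credentials file path is an assumption, and that file should contain username= and password= lines):

```
//nas/Server /mnt/nas cifs credentials=/root/.smbcredentials,iocharset=utf8 0 0
```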
Expanding an Ubuntu disk
sudo parted /dev/sda resizepart 3 100%
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 419430400
blocks) or continue with the current setting? 
Fix/Ignore? Fix
Partition number? 3                                                       
End?  [107GB]? (enter)
reboot
sudo pvresize /dev/sda3
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv

Ollama environment variables

# vim /etc/systemd/system/ollama.service
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_SCHED_SPREAD=1"
# systemctl daemon-reload
# systemctl restart ollama