Quick reference notes
Yi
docker
Common commands: pull, run, start, stop, attach, rm
Not sure why python can't be invoked through docker exec
Network: host
Mount a folder:
sudo docker run --gpus all -i -t -v /home/aistudio/llm/yi:/home/yi/workspace/Yi/vf --net=host --name llm_yi registry.lingyiwanwu.com/ci/01-ai/yi:latest
clash
sudo bash start.sh
source /etc/profile.d/clash.sh
proxy_on
netstat -tln | grep -E '9090|789.'
env | grep -E 'http_proxy|https_proxy'
proxy_on actually just sets http_proxy and https_proxy to http://127.0.0.1:7890, so inside a docker container you can directly run export http_proxy=http://127.0.0.1:7890 (requires host network mode; only affects the current shell)
Test: curl cip.cc
proxychains4
Its config must not list 127.0.0.1 twice
apt does not pick up the proxy env vars; set the proxy in /etc/apt/apt.conf instead
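A minimal apt.conf fragment for this, assuming the same local clash port 7890 used above (sketch; adjust the address to your setup):

```conf
// /etc/apt/apt.conf — route apt traffic through the local proxy
Acquire::http::Proxy "http://127.0.0.1:7890/";
Acquire::https::Proxy "http://127.0.0.1:7890/";
```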
Install ROCm and torch
Don't forget to set the group permissions
https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/install-radeon.html
Set permissions for Groups to allow access to GPU hardware resources
Otherwise the GPU hardware is inaccessible and torch.cuda.is_available() returns False
boot
ESC enters the BIOS; F11 opens the boot menu
IPMI
Supermicro boards' default IPMI credentials are ADMIN / ADMIN
The IPMI address can be found in the BIOS, or remotely with ipmitool; see sudo ipmitool lan print 1
huggingface mirror site
export HF_ENDPOINT=https://hf-mirror.com
cache:
export HF_DATASETS_CACHE="/home/user/data/cache"
export HF_HOME="/home/user/data/cache"
export HUGGINGFACE_HUB_CACHE="/home/user/data/cache"
export HF_ENDPOINT=https://hf-mirror.com
Change the pip index
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
Stuck at "Using …\Cache\py310_cu121 as PyTorch extensions root…"
Deleting the contents of that cache directory fixes it
screen
screen
start a new session
screen -r XXX
reattach a session (detach with Ctrl-a d, list sessions with screen -ls)
hf dataset下载
huggingface-cli download --repo-type dataset --token hf_** --resume-download ibrahimhamamci/CT-RATE --cache-dir /home/local-dir --local-dir-use-symlinks False
Monitor rocm-smi
Use watch -n 1 rocm-smi
torch.cuda.set_device(DEVICE)
When timing torch code, set the current CUDA device to the GPU you are measuring; otherwise torch.cuda.synchronize() waits for the threads on the default device (GPU 0) rather than the GPU actually running your kernels
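A minimal timing sketch for this (the helper name and the GPU index are my own illustration; torch.cuda.synchronize also accepts an explicit device argument, which makes the intent unambiguous):

```python
import time

import torch


def time_on_device(fn, device_index):
    """Time fn() on a specific GPU, synchronizing that GPU rather than GPU 0."""
    torch.cuda.set_device(device_index)    # make this GPU the current device
    torch.cuda.synchronize(device_index)   # drain any pending work before timing
    start = time.perf_counter()
    fn()
    torch.cuda.synchronize(device_index)   # wait for kernels on THIS GPU to finish
    return time.perf_counter() - start


# usage sketch (assumes a second GPU at index 1):
# x = torch.randn(4096, 4096, device="cuda:1")
# elapsed = time_on_device(lambda: x @ x, 1)
```

Without the synchronize calls (or with them pointed at the wrong device), perf_counter only measures the asynchronous kernel launch, not the actual GPU execution.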