MLU370-M8 Stable Diffusion Deployment Guide

1. AI Computing Center Environment Preparation

Driver: MLU370 driver version 5.10.22 is recommended.

Container image: see the screenshot in the original post (image not reproduced here).

2. Environment Preparation

Edit requirements.txt so the packages below are commented out (accelerate, basicsr, torch, and transformers will instead be converted and installed from source):

GitPython
Pillow
# accelerate
modelscope
#basicsr
blendmodes
clean-fid
einops
fastapi>=0.90.1
gfpgan
gradio==3.41.2
inflection
jsonmerge
kornia
lark
numpy
omegaconf
clip
open-clip-torch

piexif
psutil
pytorch_lightning
realesrgan
requests
resize-right

safetensors
scikit-image>=0.19
timm
tomesd
# torch
torchdiffeq
torchsde
# transformers==4.30.2

1. transformers

git clone -b v4.30.2 https://githubfast.com/huggingface/transformers.git
python /torch/src/catch/tools/torch_gpu2mlu/torch_gpu2mlu.py -i transformers/
pip install -e ./transformers_mlu/
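After the editable install, it can be worth confirming that the expected transformers version is the one actually active in the environment. A small hypothetical helper (not part of the original guide) using only the standard library:

```python
# Hypothetical sanity check: is `package` installed with a version
# starting with `prefix`? Returns False when the package is absent.
from importlib.metadata import PackageNotFoundError, version

def has_version(package: str, prefix: str) -> bool:
    try:
        return version(package).startswith(prefix)
    except PackageNotFoundError:
        return False

# After `pip install -e ./transformers_mlu/` this should print True.
print(has_version("transformers", "4.30"))
```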

2. accelerate

git clone -b v0.22.0 https://githubfast.com/huggingface/accelerate.git
python /torch/src/catch/tools/torch_gpu2mlu/torch_gpu2mlu.py -i accelerate/
pip install -e ./accelerate_mlu/

3. basicsr

git clone https://githubfast.com/XPixelGroup/BasicSR.git
python /torch/src/catch/tools/torch_gpu2mlu/torch_gpu2mlu.py -i BasicSR/
pip install -e ./BasicSR_mlu/
Edit BasicSR_mlu/basicsr/utils/misc.py and add the following functions:

def get_device():
    if torch.mlu.is_available():
        return torch.device("mlu")
    else:
        return torch.device("cpu")

def gpu_is_available():
    return torch.mlu.is_available()

3. Code Preparation

Download the code

git clone https://githubfast.com/AUTOMATIC1111/stable-diffusion-webui.git

Code conversion (CUDA to MLU)

python /torch/src/catch/tools/torch_gpu2mlu/torch_gpu2mlu.py -i stable-diffusion-webui/
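The converter rewrites CUDA-specific calls in the source tree into their MLU equivalents. A minimal sketch of the idea (an assumption for illustration; the real torch_gpu2mlu.py is considerably more thorough):

```python
# Sketch of the CUDA-to-MLU rewrite that the conversion tool performs
# on each source file. The mapping below is illustrative, not exhaustive.
def gpu_to_mlu(source: str) -> str:
    replacements = {
        "torch.cuda": "torch.mlu",   # device API namespace
        '"cuda"': '"mlu"',           # string device names
        "'cuda'": "'mlu'",
        ".cuda()": ".mlu()",         # tensor / module moves
    }
    for old, new in replacements.items():
        source = source.replace(old, new)
    return source

print(gpu_to_mlu('device = "cuda" if torch.cuda.is_available() else "cpu"'))
# device = "mlu" if torch.mlu.is_available() else "cpu"
```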

Install the component repositories

cd stable-diffusion-webui_mlu && mkdir repositories
git clone https://githubfast.com/CompVis/stable-diffusion.git repositories/stable-diffusion-stability-ai
git clone https://githubfast.com/Stability-AI/generative-models.git repositories/generative-models
git clone https://githubfast.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://githubfast.com/salesforce/BLIP.git repositories/BLIP
git clone https://githubfast.com/Stability-AI/stablediffusion.git repositories/stable-diffusion
git clone -b v0.0.16 https://githubfast.com/crowsonkb/k-diffusion.git repositories/k-diffusion
git clone https://githubfast.com/CompVis/taming-transformers.git repositories/taming-transformers
cd repositories/
for repo in BLIP CodeFormer generative-models k-diffusion stable-diffusion-stability-ai taming-transformers stable-diffusion; do
    python /torch/src/catch/tools/torch_gpu2mlu/torch_gpu2mlu.py -i "$repo"/
done
mkdir ../tmp && mv *_mlu ../tmp && rm -rf *
cd ../tmp
mv BLIP_mlu/ BLIP/ && mv CodeFormer_mlu/ CodeFormer/ && mv generative-models_mlu/ generative-models/ && mv k-diffusion_mlu/ k-diffusion/ && mv stable-diffusion-stability-ai_mlu/ stable-diffusion-stability-ai/ && mv taming-transformers_mlu/ taming-transformers/ && mv stable-diffusion_mlu/ stable-diffusion/
mv * ../repositories/
cd ../repositories/ && cp -r taming-transformers/taming/ stable-diffusion-stability-ai/
cd ../repositories/ && cp -r stable-diffusion/ldm/ ..
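With that many clone, convert, and rename steps, it is easy to end up with a repositories/ directory missing a piece. A hypothetical check (directory names taken from the steps above):

```python
# Verify the repositories/ layout assembled by the steps above.
from pathlib import Path

EXPECTED_REPOS = [
    "BLIP", "CodeFormer", "generative-models", "k-diffusion",
    "stable-diffusion-stability-ai", "taming-transformers",
]

def missing_repos(repos_dir: str) -> list:
    """Return the expected repo directories that are absent under repos_dir."""
    root = Path(repos_dir)
    return [name for name in EXPECTED_REPOS if not (root / name).is_dir()]

print(missing_repos("repositories"))  # [] once every repo is in place
```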

Comment out line 63 of modules/devices.py:

   # torch.mlu.ipc_collect()
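The original code calls torch.cuda.ipc_collect(), which the converter rewrites to torch.mlu.ipc_collect(); that method does not exist on the MLU backend, hence the line is commented out. As an alternative to deleting the call, a guard like the following sketch (hypothetical, not from the guide) keeps the code portable across backends:

```python
# Only invoke ipc_collect when the backend actually provides it
# (torch.cuda has it; the MLU backend may not).
def safe_ipc_collect(backend) -> bool:
    fn = getattr(backend, "ipc_collect", None)
    if callable(fn):
        fn()
        return True
    return False

class FakeBackend:  # stand-in for a backend without ipc_collect
    pass

print(safe_ipc_collect(FakeBackend()))  # False
```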

Comment out lines 398-400 of modules/launch_utils.py:

    # if (not is_installed("xformers") or args.reinstall_xformers) and args.xformers:
    #     run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
    #     startup_timer.record("install xformers")

Comment out lines 408-412 and 423-425 of modules/launch_utils.py:

    # git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), "Stable Diffusion", stable_diffusion_commit_hash)
    # git_clone(stable_diffusion_xl_repo, repo_dir('generative-models'), "Stable Diffusion XL", stable_diffusion_xl_commit_hash)
    # git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
    # git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
    # git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash)


    # if not requirements_met(requirements_file):
    #     run_pip(f"install -r \"{requirements_file}\"", "requirements")
    #     startup_timer.record("install requirements")

Model download

from modelscope import snapshot_download
model_dir = snapshot_download("AI-ModelScope/stable-diffusion-v1-5")
model_dir = snapshot_download("AI-ModelScope/clip-vit-large-patch14")

Copy the models into your own project path:

mv /root/.cache/modelscope/hub/AI-ModelScope/stable-diffusion-v1-5/v1-5-pruned-emaonly.safetensors /workspace/volume/magic/sd/code/stable-diffusion-webui_mlu/models/Stable-diffusion/
mv /root/.cache/modelscope/hub/AI-ModelScope/stable-diffusion-v1-5/v1-5-pruned-emaonly.ckpt /workspace/volume/magic/sd/code/stable-diffusion-webui_mlu/models/Stable-diffusion/
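A quick way to confirm the weights actually landed in models/Stable-diffusion/ is to list the checkpoint files there. A hypothetical helper (the directory is the one used in the mv commands above):

```python
# List candidate Stable Diffusion weight files (.safetensors / .ckpt)
# under the given models directory.
from pathlib import Path

def list_weights(models_dir: str) -> list:
    root = Path(models_dir)
    return sorted(p.name for p in
                  list(root.glob("*.safetensors")) + list(root.glob("*.ckpt")))

print(list_weights("models/Stable-diffusion"))
```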

Modify line 100 of ldm/modules/encoders/modules.py, changing the version path to your own model path:

def __init__(self, version="/root/.cache/modelscope/hub/AI-ModelScope/clip-vit-large-patch14/", device="mlu", max_length=77,
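The path above follows ModelScope's default cache layout, where snapshots are stored under ~/.cache/modelscope/hub/&lt;org&gt;/&lt;model&gt; (matching the /root/.cache/modelscope/hub/AI-ModelScope/... paths used earlier). A hypothetical helper for building it:

```python
# Build the local modelscope cache path for a model id such as
# "AI-ModelScope/clip-vit-large-patch14". The cache root default is
# an assumption based on the paths used elsewhere in this guide.
import os

def modelscope_cache_path(model_id: str,
                          cache_root: str = "~/.cache/modelscope/hub") -> str:
    return os.path.join(os.path.expanduser(cache_root), *model_id.split("/"))

print(modelscope_cache_path("AI-ModelScope/clip-vit-large-patch14"))
```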

Modify line 57 of modules/sd_vae_approx.py:

loaded_model.load_state_dict(torch.load(model_path, map_location='cpu' if devices.device.type != 'mlu' else None))
to:
loaded_model.load_state_dict(torch.load(model_path, map_location='mlu'))

Run:

 ./webui.sh -f --skip-torch-mlu-test


Summary

Because this framework is a tool that integrates many packages and models, some of those components will inevitably call CUDA. When that happens, most problems can be solved either by changing cuda to mlu directly, or by pulling the corresponding GitHub source, converting it, and installing the converted copy.
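The "change cuda to mlu" workflow the summary describes can be sketched as a scan for leftover CUDA references in the converted sources (a hypothetical helper, not part of the original guide):

```python
# Scan a converted source tree for remaining "cuda" references that
# may still need to be rewritten to "mlu".
from pathlib import Path

def find_cuda_refs(root: str) -> list:
    """Return (path, line number, line) for every line mentioning cuda."""
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if "cuda" in line:
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Running it over stable-diffusion-webui_mlu/ after conversion points directly at the spots that still need the manual cuda-to-mlu edits described above.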
