Running sd-webui on the Atlas 300I Duo
Reference: https://www.hiascend.com/software/mindie
Environment
CPU: Kunpeng 920
NPU: Atlas 300I Duo x 4
Memory: 256 GB
Installation
Driver installation: https://www.hiascend.com/document/detail/zh/quick-installation/24.0.RC1/quickinstg/800_3000/quickinstg_800_3000_0001.html
CANN installation: https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha001/softwareinst/instg/instg_0001.html
MindIE installation: https://www.hiascend.com/document/detail//zh/mindie/10RC1/description/releasenote/download
Official MindIE images: https://ascendhub.huawei.com/
Download the source code
Source path:
https://gitee.com/ascend/ModelZoo-PyTorch/tree/master/MindIE/MindIE-Torch/built-in/foundation/sd-webui
Branch: master
Commit: e10748482de9f8be5371c470127e6098d41ed0e4
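The clone-and-pin step can be scripted; a minimal sketch (the working-copy directory name and the guards around the network commands are assumptions):

```shell
# Clone the ModelZoo-PyTorch repo and pin it to the commit listed above.
REPO_URL=https://gitee.com/ascend/ModelZoo-PyTorch.git
COMMIT=e10748482de9f8be5371c470127e6098d41ed0e4

# Skip the clone if a working copy already exists.
if [ ! -d ModelZoo-PyTorch ]; then
    git clone "$REPO_URL" || echo "clone failed; check network access to gitee.com"
fi
# Check out the pinned commit so the patch scripts match this guide.
(cd ModelZoo-PyTorch && git checkout "$COMMIT") || echo "checkout skipped"
```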
SD-WebUI TorchAIE inference guide
Reference: ModelZoo-PyTorch/MindIE/MindIE-Torch/built-in/foundation/sd-webui/torch_aie_extension/readme.md
1. Open a terminal and source the environment variables:
source /usr/local/Ascend/mindie/set_env.sh
source /usr/local/Ascend/ascend-toolkit/set_env.sh
2. Patch the code. The patch modifies clip and cross_attention so that the correct model is traced. Enter the /home/ModelZoo-PyTorch/MindIE/MindIE-Torch/built-in/foundation/sd-webui/torch_aie_extension directory and run:
python sd_webui_patch.py
3. Deploy sd-webui:
1) Clone the official sd-webui source:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
2) Copy ModelZoo-PyTorch/MindIE/MindIE-Torch/built-in/foundation/sd-webui/torch_aie_extension/ into the stable-diffusion-webui/extensions directory.
3) Download the model weights. This walkthrough uses an SD 1.5 model; download others as needed.
A few SD 1.5 checkpoints:
beautifulRealistic_v7: https://tools.obs.cn-south-292.ca-aicc.com:443/cache/sd-webui/models/beautifulRealistic_v7.safetensors
TQing (天清): https://tools.obs.cn-south-292.ca-aicc.com:443/cache/sd-webui/models/TQing%20v3.4_v3.4.safetensors
Place the downloaded file under the stable-diffusion-webui/models/Stable-diffusion directory.

Alternatively, download from the ModelScope community:
git lfs install
git clone https://www.modelscope.cn/AI-ModelScope/stable-diffusion-v1-5.git
Then copy the checkpoint into the target directory:
cp stable-diffusion-v1-5/v1-5-pruned-emaonly.safetensors ../../../models/Stable-diffusion
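The download-and-place step above can also be scripted directly against one of the OBS links; a sketch, assuming wget is available and WEBUI_ROOT points at your stable-diffusion-webui clone:

```shell
# Fetch one of the SD 1.5 checkpoints listed above into the directory
# sd-webui scans for models.
WEBUI_ROOT=./stable-diffusion-webui   # adjust to your clone location
MODEL_URL="https://tools.obs.cn-south-292.ca-aicc.com:443/cache/sd-webui/models/beautifulRealistic_v7.safetensors"
DEST="$WEBUI_ROOT/models/Stable-diffusion"

mkdir -p "$DEST"
# -c resumes a partial download of the multi-GB checkpoint file.
wget -c -P "$DEST" "$MODEL_URL" || echo "download failed; fetch the file manually"
```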
Running it
From the stable-diffusion-webui project root, launch the webui; it installs the remaining dependencies automatically:
python launch.py --skip-torch-cuda-test --port 7860 --enable-insecure-extension-access --listen --log-startup --disable-safe-unpickle --no-half --skip-prepare-environment
Known issue: inference is very slow. Generating a single 512x512 image takes about 18 minutes, as if it were running on the CPU. Waiting on further optimization.
Q&A
Problem 1
When installing requirements.txt, the following errors appear:
ERROR: Could not find a version that satisfies the requirement tb-nightly (from versions: none)
ERROR: No matching distribution found for tb-nightly
Fix: install tb-nightly from a mirror that carries it:
pip install tb-nightly -i http://mirrors.aliyun.com/pypi/simple --trusted-host mirrors.aliyun.com
Problem 2
At startup: ModuleNotFoundError: No module named '_bz2'
Traceback (most recent call last):
  File "/home/stable-diffusion-webui/launch.py", line 48, in <module>
    main()
  File "/home/stable-diffusion-webui/launch.py", line 44, in main
    start()
  File "/home/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
    import webui
  File "/home/stable-diffusion-webui/webui.py", line 13, in <module>
    initialize.imports()
  File "/home/stable-diffusion-webui/modules/initialize.py", line 26, in imports
    from modules import paths, timer, import_hook, errors  # noqa: F401
  File "/home/stable-diffusion-webui/modules/paths.py", line 60, in <module>
    import sgm  # noqa: F401
  File "/home/stable-diffusion-webui/repositories/generative-models/sgm/__init__.py", line 1, in <module>
    from .models import AutoencodingEngine, DiffusionEngine
  File "/home/stable-diffusion-webui/repositories/generative-models/sgm/models/__init__.py", line 1, in <module>
    from .autoencoder import AutoencodingEngine
  File "/home/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 12, in <module>
    from ..modules.diffusionmodules.model import Decoder, Encoder
  File "/home/stable-diffusion-webui/repositories/generative-models/sgm/modules/__init__.py", line 1, in <module>
    from .encoders.modules import GeneralConditioner
  File "/home/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 7, in <module>
    import open_clip
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/open_clip/__init__.py", line 1, in <module>
    from .coca_model import CoCa
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/open_clip/coca_model.py", line 9, in <module>
    from .transformer import (
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/open_clip/transformer.py", line 10, in <module>
    from .utils import to_2tuple
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/open_clip/utils.py", line 6, in <module>
    from torchvision.ops.misc import FrozenBatchNorm2d
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torchvision/__init__.py", line 5, in <module>
    from torchvision import datasets, io, models, ops, transforms, utils
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torchvision/datasets/__init__.py", line 1, in <module>
    from ._optical_flow import FlyingChairs, FlyingThings3D, HD1K, KittiFlow, Sintel
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torchvision/datasets/_optical_flow.py", line 12, in <module>
    from .utils import _read_pfm, verify_str_arg
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torchvision/datasets/utils.py", line 1, in <module>
    import bz2
  File "/usr/local/python3.10.2/lib/python3.10/bz2.py", line 17, in <module>
    from _bz2 import BZ2Compressor, BZ2Decompressor
ModuleNotFoundError: No module named '_bz2'
Fix: copy /usr/lib/python3.10/lib-dynload/_bz2.cpython-310-aarch64-linux-gnu.so into the /usr/local/python3.10.2/lib/python3.10/lib-dynload directory and make it readable:
chmod a+r ./_bz2.cpython-310-aarch64-linux-gnu.so
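The copy-and-chmod above as a guarded sketch (the paths are taken from the traceback; adjust them to your install):

```shell
# Copy the system Python's compiled _bz2 extension into the locally built
# Python's lib-dynload so `import bz2` can find it.
SO=_bz2.cpython-310-aarch64-linux-gnu.so
SRC=/usr/lib/python3.10/lib-dynload/$SO
DST_DIR=/usr/local/python3.10.2/lib/python3.10/lib-dynload

if [ -f "$SRC" ]; then
    cp "$SRC" "$DST_DIR/"
    chmod a+r "$DST_DIR/$SO"
else
    echo "system _bz2 module not found at $SRC"
fi
```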
Problem 3
At startup: ModuleNotFoundError: No module named '_lzma'
The traceback follows the same import chain as problem 2, ending in:
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torchvision/datasets/utils.py", line 6, in <module>
    import lzma
  File "/usr/local/python3.10.2/lib/python3.10/lzma.py", line 27, in <module>
    from _lzma import *
ModuleNotFoundError: No module named '_lzma'
Fix:
sudo apt-get install liblzma-dev
pip install backports.lzma
Then patch lzma.py. Locate it with find / -name lzma.py and open it, e.g.:
vim /usr/local/python/lib/python3.10.xxx/lzma.py
Before:
from _lzma import *
from _lzma import _encode_filter_properties, _decode_filter_properties
After:
try:
    from _lzma import *
    from _lzma import _encode_filter_properties, _decode_filter_properties
except ImportError:
    from backports.lzma import *
    from backports.lzma import _encode_filter_properties, _decode_filter_properties
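After patching, a quick sanity check that lzma now imports and round-trips data:

```shell
# Verify the patched lzma module: compress and decompress a sample and
# fail loudly if the import still breaks.
python3 - <<'EOF'
import lzma

payload = b"atlas sd-webui lzma check"
assert lzma.decompress(lzma.compress(payload)) == payload
print("lzma OK")
EOF
```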
Problem 4
At startup: ImportError: libGL.so.1
Traceback (most recent call last):
  File "/home/stable-diffusion-webui/launch.py", line 48, in <module>
    main()
  File "/home/stable-diffusion-webui/launch.py", line 44, in main
    start()
  File "/home/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
    import webui
  File "/home/stable-diffusion-webui/webui.py", line 13, in <module>
    initialize.imports()
  File "/home/stable-diffusion-webui/modules/initialize.py", line 39, in imports
    from modules import processing, gradio_extensons, ui  # noqa: F401
  File "/home/stable-diffusion-webui/modules/processing.py", line 14, in <module>
    import cv2
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/usr/local/python3.10.2/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
Fix:
Ubuntu: sudo apt-get install libgl1-mesa-glx
CentOS: yum install mesa-libGL -y
Problem 5
At startup: ImportError: libgthread-2.0.so.0
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
initialize shared: done in 0.196s
The traceback is the same cv2 import chain as in problem 4, ending in:
ImportError: libgthread-2.0.so.0: cannot open shared object file: No such file or directory
Fix:
apt-get install libglib2.0-0
Problem 6
At model load: Can't load tokenizer for 'openai/clip-vit-large-patch14'
Creating model quickly: OSError
Traceback (most recent call last):
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 966, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "/home/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/home/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/home/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/home/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/home/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/home/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/home/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Fix:
In the stable-diffusion-webui directory:
1. Create an openai directory
2. git lfs install
3. git clone https://www.modelscope.cn/AI-ModelScope/clip-vit-large-patch14.git
4. cd clip-vit-large-patch14 and run git lfs pull
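The four steps above as one sketch (WEBUI_ROOT and the clone-target path are assumptions, and the git commands need network access to modelscope.cn):

```shell
# Provide a local openai/clip-vit-large-patch14 directory so
# CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14') resolves
# on disk instead of contacting huggingface.co.
WEBUI_ROOT=./stable-diffusion-webui   # adjust to your clone location
TOKENIZER_DIR="$WEBUI_ROOT/openai"

mkdir -p "$TOKENIZER_DIR"
git lfs install 2>/dev/null || echo "git-lfs missing; install it first"
git clone https://www.modelscope.cn/AI-ModelScope/clip-vit-large-patch14.git \
    "$TOKENIZER_DIR/clip-vit-large-patch14" 2>/dev/null || echo "clone skipped"
# Pull the LFS-tracked files alongside the tokenizer config.
(cd "$TOKENIZER_DIR/clip-vit-large-patch14" && git lfs pull) || echo "lfs pull skipped"
```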
Problem 7
At model load: RuntimeError: unknown format type
Creating model from config: /home/stable-diffusion-webui/configs/v1-inference.yaml
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 966, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "/home/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/home/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/home/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/home/stable-diffusion-webui/modules/sd_models.py", line 748, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/home/stable-diffusion-webui/modules/sd_models.py", line 393, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/home/stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "/home/stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2138, in load_state_dict
    load(self, state_dict)
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 1 more time]
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2120, in load
    module._load_from_state_dict(
  File "/home/stable-diffusion-webui/modules/sd_disable_initialization.py", line 225, in <lambda>
    linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
  File "/home/stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/_meta_registrations.py", line 4507, in zeros_like
    res = aten.empty_like.default(
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/_ops.py", line 448, in __call__
    return self._op(*args, **kwargs or {})
  File "/usr/local/python3.10.2/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4681, in empty_like
    return torch.empty_permuted(
RuntimeError: unknown format type:117
Stable diffusion model failed to load
Applying attention optimization: InvokeAI... done.
Exception in thread Thread-6 (load_model):
Traceback (most recent call last):
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/usr/local/python3.10.2/lib/python3.10/threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "/home/stable-diffusion-webui/modules/initialize.py", line 154, in load_model
    devices.first_time_calculation()
  File "/home/stable-diffusion-webui/modules/devices.py", line 265, in first_time_calculation
    x = torch.zeros((1, 1)).to(device, dtype)
RuntimeError: unknown format type:-748707168
Fix:
Roll stable-diffusion-webui back to version 1.7.0:
git checkout v1.7.0