[Triton Tutorial] triton.language.load

Triton is a language and compiler for parallel programming. It aims to provide a Python-based programming environment for efficiently writing custom DNN compute kernels that run at maximal throughput on modern GPU hardware.

More Triton documentation in Chinese is available at → triton.hyper.ai/

triton.language.load(pointer, mask=None, other=None, boundary_check=(), padding_option='', cache_modifier='', eviction_policy='', volatile=False)

Returns a tensor of data whose values are loaded from memory at the locations defined by pointer:

1. If pointer is a single-element pointer, a scalar is loaded. In this case:

  • mask and other must also be scalars,
  • other is implicitly cast to pointer.dtype.element_ty,
  • boundary_check and padding_option must be empty.

2. If pointer is an N-dimensional tensor of pointers, an N-dimensional tensor is loaded (see the first sketch after this list). In this case:

  • mask and other are implicitly broadcast to pointer.shape,
  • other is implicitly cast to pointer.dtype.element_ty,
  • boundary_check and padding_option must be empty.

3. If pointer is a block pointer defined by make_block_ptr, a tensor is loaded (see the block-pointer sketch after the parameter list). In this case:

  • mask and other must be None,
  • boundary_check and padding_option can be specified to control the behavior of out-of-bounds accesses.
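
For the element-wise case (case 2), a masked load paired with a matching masked store is the usual way to handle a tensor whose length is not a multiple of the block size. Below is a minimal sketch, assuming a CUDA-capable GPU and a recent Triton release; the kernel name `copy_kernel` and the block size are illustrative, not part of the API.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def copy_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    # Tensor of pointer offsets for this program instance (case 2: 1-D tensor of pointers).
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    # Guard the tail: lanes past n_elements are masked off.
    mask = offsets < n_elements
    # Masked-off lanes return `other` (here 0.0) instead of reading memory.
    x = tl.load(x_ptr + offsets, mask=mask, other=0.0)
    tl.store(y_ptr + offsets, x, mask=mask)


x = torch.randn(10000, device="cuda")
y = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
copy_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024)
```

Here other=0.0 only matters for the masked-off lanes; since those lanes are also excluded from the store, any padding value would give the same result.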

Parameters:

  • pointer (triton.PointerType, or block of dtype=triton.PointerType) - Pointer(s) to the data to be loaded.
  • mask (block of triton.int1, optional) - If mask[idx] is false, do not load the data at address pointer[idx] (must be None for block pointers).
  • other (optional) - If mask[idx] is false, return other[idx] instead.
  • boundary_check (tuple of ints, optional) - Tuple of the dimensions on which boundary checks should be performed.
  • padding_option - Should be one of {"", "zero", "nan"}; the padding applied to out-of-bounds values.
  • cache_modifier (str, optional, should be one of {"", ".ca", ".cg"}) - Changes the cache option in NVIDIA PTX: ".ca" caches at all levels, ".cg" caches at the global level (in L2 and below, not L1). See the PTX documentation on cache operators for details.
  • eviction_policy (str, optional) - Changes the eviction policy in NVIDIA PTX.
  • volatile (bool, optional) - Changes the volatile option in NVIDIA PTX.
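
For block pointers (case 3), boundary_check and padding_option take the place of an explicit mask. The following sketch, with illustrative names and tile sizes, copies a 2-D tensor tile by tile: out-of-bounds rows and columns are padded with zeros on load and skipped on store. Optional arguments such as cache_modifier or eviction_policy could be passed to the same tl.load call.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def copy_tile_kernel(x_ptr, y_ptr, M, N,
                     stride_m, stride_n,
                     BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    # Block pointer describing one (BLOCK_M, BLOCK_N) tile of the input.
    x_block = tl.make_block_ptr(
        base=x_ptr, shape=(M, N), strides=(stride_m, stride_n),
        offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
        block_shape=(BLOCK_M, BLOCK_N), order=(1, 0),
    )
    # Case 3: mask/other must be None; boundary_check + padding_option
    # handle tiles that hang past the edges of the tensor.
    tile = tl.load(x_block, boundary_check=(0, 1), padding_option="zero")
    y_block = tl.make_block_ptr(
        base=y_ptr, shape=(M, N), strides=(stride_m, stride_n),
        offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
        block_shape=(BLOCK_M, BLOCK_N), order=(1, 0),
    )
    tl.store(y_block, tile, boundary_check=(0, 1))


M, N = 300, 500
x = torch.randn(M, N, device="cuda")
y = torch.empty_like(x)
grid = (triton.cdiv(M, 64), triton.cdiv(N, 64))
copy_tile_kernel[grid](x, y, M, N, x.stride(0), x.stride(1), BLOCK_M=64, BLOCK_N=64)
```

Compared with the element-wise pattern, the block-pointer form keeps the index arithmetic inside make_block_ptr and lets the compiler reason about the tile shape, at the cost of restricting loads to regular, strided blocks.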