Assertion `input_val >= zero && input_val <= one` failed.

Problem description:

2024-06-24 01:10:42,141 - mmdet - INFO - Epoch [17][1550/8015]    lr: 9.963e-03, eta: 7 days, 8:40:21, time: 0.285, data_time: 0.165, memory: 6272, loss_cls: 0.6743, loss_bbox: 3.3448, loss_obj: 5.4422, loss: 9.4612
/opt/conda/conda-bld/pytorch_1639180588308/work/aten/src/ATen/native/cuda/Loss.cu:115: operator(): block: [471,0,0], thread: [0,0,0] Assertion `input_val >= zero && input_val <= one` failed.
/opt/conda/conda-bld/pytorch_1639180588308/work/aten/src/ATen/native/cuda/Loss.cu:115: operator(): block: [471,0,0], thread: [1,0,0] Assertion `input_val >= zero && input_val <= one` failed.
(... the same assertion repeats for threads [2,0,0] through [31,0,0] of block [471,0,0] and threads [32,0,0] through [63,0,0] of block [477,0,0] ...)
/root/mmdetection/mmdet/core/bbox/assigners/sim_ota_assigner.py:73: UserWarning: OOM RuntimeError is raised due to the huge memory cost during label assignment. CPU mode is applied in this batch. If you want to avoid this issue, try to reduce the batch size or image size.
  warnings.warn('OOM RuntimeError is raised due to the huge memory '
Loading and preparing results...
DONE (t=0.56s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=10.72s).
Accumulating evaluation results...
DONE (t=3.18s).
Traceback (most recent call last):
  File "/root/mmdetection/mmdet/core/bbox/assigners/sim_ota_assigner.py", line 67, in assign
    assign_result = self._assign(pred_scores, priors, decoded_bboxes,
  File "/root/mmdetection/mmdet/core/bbox/assigners/sim_ota_assigner.py", line 169, in _assign
    self.dynamic_k_matching(
  File "/root/mmdetection/mmdet/core/bbox/assigners/sim_ota_assigner.py", line 235, in dynamic_k_matching
    _, pos_idx = torch.topk(
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools/train.py", line 220, in <module>
    main()
  File "tools/train.py", line 209, in main
    train_detector(
  File "/root/mmdetection/mmdet/apis/train.py", line 208, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 75, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/root/mmdetection/mmdet/models/detectors/base.py", line 248, in train_step
    losses = self(**data)
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 110, in new_func
    return old_func(*args, **kwargs)
  File "/root/mmdetection/mmdet/models/detectors/base.py", line 172, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "/root/mmdetection/mmdet/models/detectors/yolox.py", line 95, in forward_train
    losses = super(YOLOX, self).forward_train(img, img_metas, gt_bboxes,
  File "/root/mmdetection/mmdet/models/detectors/single_stage.py", line 83, in forward_train
    losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes,
  File "/root/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 335, in forward_train
    losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 198, in new_func
    return old_func(*args, **kwargs)
  File "/root/mmdetection/mmdet/models/dense_heads/yolox_head.py", line 382, in loss
    num_fg_imgs) = multi_apply(
  File "/root/mmdetection/mmdet/core/utils/misc.py", line 30, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/root/mmdetection/mmdet/models/dense_heads/yolox_head.py", line 462, in _get_target_single
    assign_result = self.assigner.assign(
  File "/root/mmdetection/mmdet/core/bbox/assigners/sim_ota_assigner.py", line 77, in assign
    torch.cuda.empty_cache()
  File "/root/miniconda3/envs/MyYolox/lib/python3.8/site-packages/torch/cuda/memory.py", line 114, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
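As the log itself advises, a device-side assert is reported asynchronously, so the Python traceback may point at an unrelated later call (here, `torch.cuda.empty_cache()`). Rerunning with `CUDA_LAUNCH_BLOCKING=1` makes kernel launches synchronous so the trace lands on the kernel that actually failed. A minimal sketch (the training command is the one used later in this post, shown as a comment):

```shell
# Force synchronous CUDA kernel launches so the stack trace points at
# the kernel that actually fired the assert (at some training-speed cost):
export CUDA_LAUNCH_BLOCKING=1
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
# then rerun training as usual, e.g.:
# python tools/train.py configs/yolox/new_yolox_s_8x8_300e_coco.py
```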
 

Solution:

Change `samples_per_gpu=8` to `samples_per_gpu=5`.
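In an mmdetection config, this is the `samples_per_gpu` field of the `data` dict; a minimal sketch (the field names are the standard mmdetection 2.x ones, and the rest of the dict is elided):

```python
# Excerpt of an mmdetection-style config: lowering samples_per_gpu
# shrinks the per-GPU batch, which also reduces the memory spike
# during SimOTA label assignment (the OOM warning in the log above).
data = dict(
    samples_per_gpu=5,   # was 8; reduced to cut GPU memory pressure
    workers_per_gpu=4,
)
```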

Then add the `--auto-resume` flag on the command line, which resumes training from the latest checkpoint automatically:

nohup python tools/train.py configs/yolox/new_yolox_s_8x8_300e_coco.py --auto-resume >newonlytrain_log.out 2>&1 &

After making this fix at the point of failure, a restarted run indeed continues training from `latest.pth`.
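The failed assertion is binary cross-entropy's device-side input check: every predicted probability must lie in [0, 1], and a NaN produced upstream (for example by a mixed-precision overflow — an assumption here, not confirmed by the log) fails the check as well. A dependency-free sketch of the check and the usual epsilon clamp (pure Python, illustrative only; the real check lives in PyTorch's `Loss.cu`):

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one probability p and label y in {0, 1}."""
    # Mirrors the failing device-side check in Loss.cu:
    assert 0.0 <= p <= 1.0, "input_val >= zero && input_val <= one"
    p = min(max(p, eps), 1.0 - eps)  # clamp away from exact 0/1 for log()
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def prob(logit):
    """Plain sigmoid; a NaN logit would yield a NaN probability,
    which fails the [0, 1] assert above."""
    return 1.0 / (1.0 + math.exp(-logit))

loss = bce(prob(0.0), 1)   # p = 0.5, so loss = ln 2
```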

Reference: "Assertion `input_val >= zero && input_val <= one` failed during PyTorch training" (CSDN blog)

### Causes of and fixes for ASSERTION FAILED errors at startup

#### 1. Errors caused by incorrect CMake configuration

If the right options are not set when configuring the GTSAM library, runtime errors can follow. For example, if a specific optimization flag is not disabled during the `cmake` build (such as via `-DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF`), the generated binaries may fail to work correctly.

Solution: rerun the `cmake` command with the option explicitly disabled:

```bash
cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..
```

---

#### 2. Input-range validation failure during PyTorch training

Some operations in a PyTorch training pipeline assert that tensor values fall within an expected range. The `input_val >= zero && input_val <= one` assertion specifically comes from binary cross-entropy, which requires its input probabilities to lie in `[0, 1]`; a related class of errors appears when `F.cross_entropy` receives data that does not match expectations:

> Assertion input_val >= zero && input_val <= one failed.

This usually means the input values are out of range, or that the target labels and predicted outputs are mismatched in shape.

Solution:
- Check the content and shape of the input tensors.
- If probabilities are involved, confirm the values lie within `[0, 1]`.
- For classification tasks, make sure the target labels are valid class indices in `{0, ..., n_classes-1}`.

An adjusted code example:

```python
import torch.nn.functional as F

outputs = model(inputs)   # forward pass through the model
targets = labels.long()   # cast targets to integer class indices
loss = F.cross_entropy(outputs, targets)
loss.backward()
```

---

#### 3. Object-release exceptions in OpenCV programs

A common class of crashes in OpenCV development comes from poor resource management. For instance, trying to destroy an object instance that was never initialized produces log warnings like:

> (pic:6130): GLib-GObject-CRITICAL **: g_object_unref: assertion 'G_IS_OBJECT (object)' failed.

This means the program is dereferencing an invalid or already-released pointer.

Suggested fixes include:
- Complete any required initialization before creating visual components;
- Use smart pointers instead of raw pointers so object lifetimes are handled automatically;
- Catch potential hazards early and verify them with unit tests.

A corrected fragment:

```cpp
#include <iostream>
#include <opencv2/highgui.hpp>

int main() {
    cv::Mat image;  // never loaded, so it is empty: guard before use
    if (!image.empty()) {
        cv::imshow("Example", image);
    } else {
        std::cerr << "Image loading failed!" << std::endl;
    }
    return 0;
}
```

---

#### 4. Out-of-bounds access detection in pixel-level semantic segmentation

For pixel-level annotation tasks, the `t >= 0 && t < n_classes` check guards against indices outside the valid range; violating it triggers a fatal abort.

The analysis can proceed along these lines:
- Audit the ground-truth mask matrices for negative values or other illegal labels;
- Verify that the maximum class count defined in the network architecture is consistent everywhere;
- Tighten quality control in preprocessing so malformed samples are dropped.

One improved implementation:

```python
def validate_masks(masks, num_classes):
    assert masks.min() >= 0 and masks.max() < num_classes, \
        f"Mask values must be within range [0, {num_classes})"

validate_masks(true_mask.numpy(), NUM_CLASSES)
sub_cross_entropy = F.cross_entropy(
    pred.unsqueeze(dim=0),
    true_mask.unsqueeze(dim=0).squeeze(1)
).item()
```

---

#### Summary

The four cases above correspond to typical failure modes in different technical areas. Whether building a dependency library, writing machine-learning scripts, or developing image-processing applications, such problems can arise; careful reading of the official documentation, combined with experience shared by the community, usually makes it possible to locate the root cause quickly and restore normal operation with a targeted fix.
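The `t >= 0 && t < n_classes` guard from the last section can also be reproduced on the CPU before data ever reaches a CUDA kernel; a stdlib-only sketch (the helper name `check_labels` is made up for illustration):

```python
def check_labels(labels, num_classes):
    """Raise early, on the CPU, instead of letting a CUDA kernel
    hit a device-side assert deep inside training."""
    bad = [t for t in labels if not (0 <= t < num_classes)]
    if bad:
        raise ValueError(f"labels out of range [0, {num_classes}): {bad}")
    return True

check_labels([0, 2, 1], num_classes=3)    # valid: all indices in {0, 1, 2}
# check_labels([0, 3], num_classes=3)     # would raise ValueError
```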