Kicking a storage node's OSD out of the cluster, formatting it, and re-adding it (adding a brand-new OSD works the same way)

Recording the OSD's information

======================================================================

Pay close attention to the bracketed comments after each command.

[root@stor07 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/rhel_stor07-root 500G 9.2G 491G 2% /

devtmpfs 30G 0 30G 0% /dev

tmpfs 31G 0 31G 0% /dev/shm

tmpfs 31G 61M 31G 1% /run

tmpfs 31G 0 31G 0% /sys/fs/cgroup

/dev/sdh1 5.5T 4.2T 1.4T 76% /var/lib/ceph/osd/ceph-48

/dev/sdi1 5.5T 4.0T 1.6T 73% /var/lib/ceph/osd/ceph-49

/dev/sdf1 5.5T 4.0T 1.6T 72% /var/lib/ceph/osd/ceph-55

/dev/sda1 5.5T 3.7T 1.8T 68% /var/lib/ceph/osd/ceph-50 【record the data device: /dev/sda1】

/dev/sdc1 5.5T 3.7T 1.8T 68% /var/lib/ceph/osd/ceph-52

/dev/sdd1 5.5T 3.8T 1.8T 69% /var/lib/ceph/osd/ceph-53

/dev/sde1 5.5T 3.5T 2.1T 63% /var/lib/ceph/osd/ceph-54

/dev/sdb1 5.5T 4.1T 1.4T 75% /var/lib/ceph/osd/ceph-51

/dev/sdj2 1014M 169M 846M 17% /boot

/dev/mapper/rhel_stor07-home 50G 33M 50G 1% /home

tmpfs 6.2G 0 6.2G 0% /run/user/0

[root@stor07 ~]# cd /var/lib/ceph/osd/ceph-50 【enter the directory of the OSD to be rebuilt】

[root@stor07 ceph-50]# ls -l

total 80

-rw-r--r--. 1 root root 776 Jan 29 2018 activate.monmap

-rw-r--r--. 1 root root 3 Jan 29 2018 active

-rw-r--r--. 1 root root 37 Jan 29 2018 ceph_fsid

drwxr-xr-x. 508 root root 24576 Mar 28 13:49 current

-rw-r--r--. 1 root root 37 Jan 29 2018 fsid

lrwxrwxrwx 1 root root 9 Dec 29 2019 journal -> /dev/sdg3 【note the journal partition: /dev/sdg3】

-rw-------. 1 root root 57 Jan 29 2018 keyring

-rw-r--r--. 1 root root 21 Jan 29 2018 magic

-rw-r--r--. 1 root root 6 Jan 29 2018 ready

-rw-r--r--. 1 root root 4 Jan 29 2018 store_version

-rw-r--r--. 1 root root 53 Jan 29 2018 superblock

-rw-r--r--. 1 root root 0 Jan 29 2018 sysvinit

-rw-r--r--. 1 root root 3 Jan 29 2018 whoami

[root@stor07 ceph-50]#
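If you want to capture this mapping for every OSD on the node in one pass, a small loop works too. This is just a sketch; it assumes findmnt and readlink are available, as they are on RHEL 7:

for d in /var/lib/ceph/osd/ceph-*; do
    # print the OSD directory, its data device, and the resolved journal device
    echo "$d  data: $(findmnt -n -o SOURCE --target "$d")  journal: $(readlink -f "$d"/journal)"
done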

Rebuilding the OSD

====================================================================

In this example we are rebuilding osd.50.

1. On the storage node that owns the OSD 【put the OSD into maintenance mode】


  • [root@stor07 ceph-50]# ceph osd set noout 【keeps CRUSH from automatically rebalancing data while the OSD is down for maintenance】

  • [root@stor07 ceph-50]# ceph osd set nodeep-scrub 【scrubbing can hurt performance while the cluster is recovering; set this together with noscrub to stop scrubs】

  • [root@stor07 ceph-50]# ceph osd tree 【check the current state】

  • Stop the OSD service 【skip if it is already stopped; "already stopped" here means the OSD had already failed】

[root@stor07 ceph-50]# ceph osd stop osd.50
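Before moving on, it is worth confirming that the daemon really is stopped. A quick sketch, using osd.50, the example ID from this post:

ceph osd tree | grep -w osd.50       # the state column should now show "down"
ps -ef | grep '[c]eph-osd.*-i 50'    # should print nothing once the daemon has exited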

2. On any mon node 【remove the OSD from the cluster】


If you don't know what a mon node is, see this post: how to find the mon nodes in an OpenStack deployment.

  • Kick the failed osd.50 out of the cluster

[root@stor02 ~]# ceph osd out osd.50

[root@stor02 ~]# ceph osd crush remove osd.50

[root@stor02 ~]# ceph auth del osd.50

[root@stor02 ~]# ceph osd rm osd.50

【even with the OSD down, the operations above still trigger data migration】
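While the OSD is being removed, you can watch the resulting data movement. A couple of standard commands, shown here as a sketch:

ceph -w              # stream cluster events and recovery/backfill progress
ceph health detail   # list any degraded or misplaced PGs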

3. On the node that owns the OSD


[root@stor07 ceph-50]# umount /var/lib/ceph/osd/ceph-50 【unmount osd.50】

[root@stor07 ceph-50]# ceph -s 【check】

[root@stor07 ceph-50]# ceph osd tree 【check】


Partitioning the journal disk 【skip this; do not actually run it】


This is only a demonstration of how to recover if the journal partitions were deleted by mistake.

Also note that the journal device name may change; if it does, the OSD will not come up. For details see: https://cuichongxin.blog.csdn.net/article/details/111516678

Run on the node that owns the OSD:

lsblk 【record the journal layout, as in the figure below】

【figure: lsblk output showing the journal partitions on /dev/sdg】

dd if=/dev/zero of=/dev/sdg bs=1M count=10 oflag=sync 【wipes the disk for rebuilding; do not run this if the journal disk still works】

parted -s /dev/sdg mklabel gpt 【write a new GPT partition table】

parted -s /dev/sdg mkpart primary 2048s 20G 【start partitioning; each journal partition is 20G】

parted -s /dev/sdg mkpart primary 20G 40G 【run this once per data disk: one sdg partition serves as one data disk's journal; after creation lsblk will show sdg1, sdg2, … automatically (see the loop sketch at the end of this subsection)】

【figure: lsblk output after partitioning】

Set the partition type label on each of the 10 journal partitions in turn; sdg1 and sdg2 are shown below 【skip】

sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdg 【only the partition number after typecode= and the device at the end change; the rest is fixed】

sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdg 【same pattern, for the second partition】

【afterwards run partprobe <device> to refresh the partition table, or reboot the system】
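The repeated mkpart/typecode steps above can also be scripted. The following loop is only a sketch; it assumes a freshly wiped /dev/sdg and ten 20 GB journal partitions, matching the example. Never run it against a journal disk that is still in use:

parted -s /dev/sdg mklabel gpt
parted -s /dev/sdg mkpart primary 2048s 20GB        # first journal partition
for i in $(seq 2 10); do
    parted -s /dev/sdg mkpart primary $(( (i-1)*20 ))GB $(( i*20 ))GB
done
JOURNAL_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106   # Ceph journal partition type GUID
for i in $(seq 1 10); do
    sgdisk --typecode=$i:$JOURNAL_TYPE /dev/sdg     # tag each partition as a Ceph journal
done
partprobe /dev/sdg                                  # refresh the kernel's view of the disk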

Recovering a deleted journal partition

Normally nobody deletes these, but if a journal partition is removed by mistake, work out its start and end sectors from the neighboring partitions and recreate it with the command below.

For example, if sdg3 was deleted: its start sector is sdg2's end sector + 1, and its end sector is sdg4's start sector - 1.

【if you need to delete a journal partition, first record its start and end sectors with fdisk so it can be recreated with the command below】【run on the storage node that owns the OSD】

parted -s /dev/sdg mkpart primary 206176256s 374865919s 【recreates the journal partition at its original position and size】
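To see where those sector numbers come from, read the neighboring partitions' boundaries first. A sketch, using the figures from the example above:

parted /dev/sdg unit s print   # print every partition with start/end in sectors
# Suppose sdg2 ends at 206176255s and sdg4 starts at 374865920s; the
# recreated sdg3 must then span:
#   start = 206176255 + 1 = 206176256s
#   end   = 374865920 - 1 = 374865919s
parted -s /dev/sdg mkpart primary 206176256s 374865919s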


4. First comment out osd.50's old entry in /etc/fstab


Run on the storage node that owns the OSD:

[root@stor07 ~]# vi /etc/fstab

# /etc/fstab

# Created by anaconda on Wed Jan 10 20:18:28 2018

# Accessible filesystems, by reference, are maintained under '/dev/disk'

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

/dev/mapper/rhel_stor07-root / xfs defaults 0 0

UUID=397e7ef7-6de7-4058-860f-353109622220 /boot xfs defaults 0 0

/dev/mapper/rhel_stor07-home /home xfs defaults 0 0

/dev/mapper/rhel_stor07-swap swap swap defaults 0 0

UUID=203285cc-6edc-4f96-9784-728a6dc701e7 /var/lib/ceph/osd/ceph-48 xfs defaults 0 0

UUID=1afa842a-fca3-42f3-8ad0-f89c4e77a998 /var/lib/ceph/osd/ceph-49 xfs defaults 0 0

#UUID=43be7d9d-7484-4fa9-b15a-cb55211e1222 /var/lib/ceph/osd/ceph-50 xfs defaults 0 0 【commented out】
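If you prefer not to edit the file by hand, a one-liner can comment the entry out. A sketch (the path matches the ceph-50 line shown above):

sed -i '\|/var/lib/ceph/osd/ceph-50|s|^|#|' /etc/fstab   # prefix the ceph-50 line with '#'
grep ceph-50 /etc/fstab                                  # verify it is now commented out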

5. Partition osd.50's disk on the node


Run on the storage node that owns the OSD:

[root@stor07 ~]# lsblk

[root@stor07 ~]# dd if=/dev/zero of=/dev/sda bs=1M count=10 oflag=sync 【destroys the existing partition table; replace /dev/sda with the disk that actually needs rebuilding, and be careful not to wipe the wrong disk】

[root@stor07 ~]# parted /dev/sda mklabel gpt 【new GPT partition table】

[root@stor07 ~]# parted /dev/sda mkpart primary 2048s 100% 【create one partition spanning the whole disk】

[root@stor07 ~]# mkfs.xfs /dev/sda1 【create the xfs filesystem】

[root@stor07 ~]# blkid /dev/sda1 【read the new UUID】

/dev/sda1: UUID="8ab9c12a-363b-4d98-9202-cfd64d52abc8" TYPE="xfs" PARTLABEL="primary" PARTUUID="ebf17b0a-eb90-479b-af0d-f97ac405e4e4"

[root@stor07 ~]# vi /etc/fstab 【update the entry commented out earlier】

UUID=8ab9c12a-363b-4d98-9202-cfd64d52abc8 /var/lib/ceph/osd/ceph-50 xfs defaults 0 0 【replace the old UUID in /etc/fstab with the new one】

[root@stor07 ~]# sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sda 【set the partition type; only the partition number after typecode= (1 means the first partition) and the device change, the rest is fixed】

The operation has completed successfully.
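Before handing the disk to ceph-deploy, you can double-check the partition metadata. A sketch:

sgdisk --info=1 /dev/sda   # "Partition GUID code" should read 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Ceph OSD)
blkid /dev/sda1            # the UUID should match the new entry in /etc/fstab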

6. Run on a storage mon node 【re-add to the cluster】


Note: suppose the OSD being rebuilt is osd.50 on stor07. After it rejoins the cluster its name changes: Ceph hands out the lowest unused ID starting from 0, so osd.50 may come back as osd.0 or osd.1. This is normal; an OSD's ID stays fixed for as long as it remains in the cluster.

[root@stor02 ~]# pwd

/root

[root@stor02 ~]# ceph-de 【then press Tab to check whether the command exists】

ceph-debugpack   ceph-dencoder   ceph-deploy

[root@stor02 ~]#

[root@stor02 ~]# ceph-deploy --overwrite-conf osd prepare stor07:/dev/sda1:/dev/sdg3 【re-add to the cluster; the command itself is fixed, only the last argument varies: hostname of the node that owns the OSD (use its IP if name resolution is not configured) : the freshly formatted data partition : the journal partition】

[root@stor02 ~]# ceph-deploy --overwrite-conf osd activate stor07:/dev/sda1:/dev/sdg3 【same argument format as the prepare command above】

[root@stor02 ~]# ceph -s 【check whether backfill has started】
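To find out which ID the rebuilt OSD was given and to follow the resync, a sketch (run the ceph commands on any mon node, df on stor07):

ceph osd tree | grep -A 12 stor07   # the rebuilt disk appears under stor07 with the lowest free ID
df -h | grep ceph-                  # on stor07: the new /var/lib/ceph/osd/ceph-<id> mount shows up
ceph -w                             # watch backfill/recovery until the cluster returns to HEALTH_OK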

7. Unset the cluster flags 【only after data migration has finished】


This can be run on any storage node in the cluster 【including mon nodes】

[root@stor02 ~]# ceph osd unset noout

[root@stor02 ~]# ceph osd unset nodeep-scrub

Full command output of step 6 above


【run on the mon node】【step 6】

[root@controller01 ~]# ssh 【mon node IP】

root@10.'s password:

Last login: Mon Mar 29 21:06:43 2021 from controller01

Authorized users only. All activity may be monitored and reported

[root@stor02 ~]# pwd

/root

[root@stor02 ~]# ceph-deploy --overwrite-conf osd prepare stor07:/dev/sda1:/dev/sdg3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy --overwrite-conf osd prepare stor07:/dev/sda1:/dev/sdg3

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] disk : [('stor07', '/dev/sda1', '/dev/sdg3')]

[ceph_deploy.cli][INFO ] dmcrypt : False

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : True

[ceph_deploy.cli][INFO ] subcommand : prepare

[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x3fff8442e5f0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] fs_type : xfs

[ceph_deploy.cli][INFO ] func : <function osd at 0x3fff84424cf8>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] zap_disk : False

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks stor07:/dev/sda1:/dev/sdg3

Authorized users only. All activity may be monitored and reported

Authorized users only. All activity may be monitored and reported

[stor07][DEBUG ] connected to host: stor07

[stor07][DEBUG ] detect platform information from remote host

[stor07][DEBUG ] detect machine type

[stor07][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.3 Maipo

[ceph_deploy.osd][DEBUG ] Deploying osd to stor07

[stor07][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.osd][DEBUG ] Preparing host stor07 disk /dev/sda1 journal /dev/sdg3 activate False

[stor07][INFO ] Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sda1 /dev/sdg3

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type

[stor07][WARNIN] DEBUG:ceph-disk:Journal /dev/sdg3 is a partition

[stor07][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/blkid -p -o udev /dev/sdg3

[stor07][WARNIN] WARNING:ceph-disk:Journal /dev/sdg3 was not prepared with ceph-disk. Symlinking directly.

[stor07][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition

[stor07][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sda1

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -f -- /dev/sda1

[stor07][DEBUG ] meta-data=/dev/sda1 isize=512 agcount=32, agsize=45780928 blks

[stor07][DEBUG ] = sectsz=4096 attr=2, projid32bit=1

[stor07][DEBUG ] = crc=1 finobt=0, sparse=0

[stor07][DEBUG ] data = bsize=4096 blocks=1464989696, imaxpct=5

[stor07][DEBUG ] = sunit=64 swidth=64 blks

[stor07][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1

[stor07][DEBUG ] log =internal log bsize=4096 blocks=521728, version=2

[stor07][DEBUG ] = sectsz=4096 sunit=1 blks, lazy-count=1

[stor07][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0

[stor07][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.N0i3Ne with options rw,noexec,nodev,noatime,nodiratime,nobarrier

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,nobarrier -- /dev/sda1 /var/lib/ceph/tmp/mnt.N0i3Ne

[stor07][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.N0i3Ne

[stor07][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.N0i3Ne/journal -> /dev/sdg3

[stor07][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.N0i3Ne

[stor07][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.N0i3Ne

[stor07][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sda1

[stor07][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sda1

[stor07][WARNIN] partx: /dev/sda: error adding partition 1

[stor07][INFO ] checking OSD status…

[stor07][INFO ] Running command: ceph --cluster=ceph osd stat --format=json

[stor07][WARNIN] there are 10 OSDs down

[stor07][WARNIN] there are 10 OSDs out

[ceph_deploy.osd][DEBUG ] Host stor07 is now ready for osd use.

[root@stor02 ~]# ceph-deploy --overwrite-conf osd activate stor07:/dev/sda1:/dev/sdg3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy --overwrite-conf osd activate stor07:/dev/sda1:/dev/sdg3

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : True

[ceph_deploy.cli][INFO ] subcommand : activate

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x3fff9c60e5f0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function osd at 0x3fff9c604cf8>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('stor07', '/dev/sda1', '/dev/sdg3')]

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks stor07:/dev/sda1:/dev/sdg3

Authorized users only. All activity may be monitored and reported

Authorized users only. All activity may be monitored and reported

[stor07][DEBUG ] connected to host: stor07

[stor07][DEBUG ] detect platform information from remote host

[stor07][DEBUG ] detect machine type

[stor07][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.3 Maipo

[ceph_deploy.osd][DEBUG ] activating host stor07 disk /dev/sda1

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[stor07][INFO ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sda1

[stor07][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sda1

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[stor07][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.o3FDsc with options rw,noexec,nodev,noatime,nodiratime,nobarrier

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,nobarrier -- /dev/sda1 /var/lib/ceph/tmp/mnt.o3FDsc

[stor07][WARNIN] DEBUG:ceph-disk:Cluster uuid is f5bf95c8-94ee-4a95-8e18-1e7f4a1db07a

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[stor07][WARNIN] DEBUG:ceph-disk:Cluster name is ceph

[stor07][WARNIN] DEBUG:ceph-disk:OSD uuid is 24ba59e8-b124-4769-8365-10b54d9fc559

[stor07][WARNIN] DEBUG:ceph-disk:Allocating OSD id…

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 24ba59e8-b124-4769-8365-10b54d9fc559

[stor07][WARNIN] DEBUG:ceph-disk:OSD id is 0

[stor07][WARNIN] DEBUG:ceph-disk:Initializing OSD…

[stor07][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring

【remaining activate output truncated】