Learning Ceph Together 02: Ceph Pools

ceph pool

Environment

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin

192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
### All nodes must run kernel 4.5 or later
uname -r
5.2.2-1.el7.elrepo.x86_64
[root@ceph-admin ~]# ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 34s)
    mgr: ceph04(active, since 18s), standbys: ceph03
    osd: 8 osds: 8 up (since 20s), 8 in (since 23h)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   8.0 GiB used, 64 GiB / 72 GiB avail
    pgs:     

Create a pool

PG calculation

Total PGs per pool = (Total_number_of_OSDs * 100) / (replica_count * pool_count)
8 * 100 / 3 / 4 = 66.6667

Round the result to the nearest power of 2. For example, with 8 OSDs in total, a replica count of 3, and 4 pools, the formula gives 66.6667; the nearest power of 2 is 64, so each pool is assigned 64 PGs.
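A quick sanity check of this arithmetic in plain bash (a minimal sketch; the OSD, replica, and pool counts are this cluster's values, and the rounding loop is only illustrative):

osds=8; replicas=3; pools=4
raw=$(( osds * 100 / replicas / pools ))        # integer result: 66
# walk up the powers of two and stop at the one closest to raw
pg=1
while [ $(( pg * 2 - raw )) -lt $(( raw - pg )) ]; do pg=$(( pg * 2 )); done
echo "raw=$raw  pg_num=$pg"                     # raw=66  pg_num=64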

[root@ceph-admin ~]# ceph osd pool create pool1 64 64
pool 'pool1' created
[root@ceph-admin ~]# ceph osd pool  ls
pool1

Check the pool's existing PG and PGP counts

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool get pool1 pg_num
pg_num: 64
[cephadm@ceph-admin ceph-cluster]$ ceph osd pool get pool1 pgp_num
pgp_num: 64
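If the cluster grows later, pg_num and pgp_num can be raised on an existing pool (a sketch; 128 is just an example target, and pgp_num should normally be kept equal to pg_num):

ceph osd pool set pool1 pg_num 128
ceph osd pool set pool1 pgp_num 128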

Upload a file with the rados tool

[cephadm@ceph-admin ceph-cluster]$ echo "this a test" > test.txt
[cephadm@ceph-admin ceph-cluster]$ pwd
/home/cephadm/ceph-cluster
[cephadm@ceph-admin ceph-cluster]$ rados put test.txt  /home/cephadm/ceph-cluster/test.txt --pool=pool1

List objects in the pool

[cephadm@ceph-admin ceph-cluster]$ rados ls --pool=pool1
test.txt
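The object can be read back with rados get (the destination path below is arbitrary):

rados get test.txt /tmp/test.txt --pool=pool1
cat /tmp/test.txt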

Check the object's location on the OSDs

[cephadm@ceph-admin ceph-cluster]$ ceph osd map pool1 test.txt
osdmap e43 pool 'pool1' (1) object 'test.txt' -> pg 1.8b0b6108 (1.8) -> up ([5,4,6], p5) acting ([5,4,6], p5)
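The output reads as follows: object test.txt hashes into PG 1.8 of pool 'pool1' (pool id 1), and that PG is currently served by OSDs 5, 4 and 6, with osd.5 as the primary. The same mapping can also be queried by PG id (a sketch; 1.8 is the PG shown above):

ceph pg map 1.8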

Delete the object

[cephadm@ceph-admin ceph-cluster]$ rados rm test.txt -p pool1

Create an RBD image

First, create a pool

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool create pool2 64 64
pool 'pool2' created

Enable the rbd application on the pool

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool application enable pool2 rbd
enabled application 'rbd' on pool 'pool2'

Initialize the RBD pool

[cephadm@ceph-admin ceph-cluster]$ rbd pool  init -p pool2

Create an image

[cephadm@ceph-admin ceph-cluster]$ rbd create --size 2G pool2/img1
[cephadm@ceph-admin ceph-cluster]$ rbd ls -p pool2
img1
[cephadm@ceph-admin ceph-cluster]$ rbd info pool2/img1
rbd image 'img1':
	size 2 GiB in 512 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: acb9b87fa25e
	block_name_prefix: rbd_data.acb9b87fa25e
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Sat Jul 13 14:55:07 2019
	access_timestamp: Sat Jul 13 14:55:07 2019
	modify_timestamp: Sat Jul 13 14:55:07 2019
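To actually use the image, it can be mapped on a client via the rbd kernel module. A minimal sketch, assuming the client has ceph.conf and a keyring; older kernels do not support all of the features listed above, so the extra features are disabled first, and the mapped device is typically /dev/rbd0:

rbd feature disable pool2/img1 object-map fast-diff deep-flatten
sudo rbd map pool2/img1
sudo mkfs.xfs /dev/rbd0
sudo mount /dev/rbd0 /mnt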

Create a radosgw

Create the radosgw daemon on ceph01

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy  rgw create ceph01
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 60m)
    mgr: ceph04(active, since 60m), standbys: ceph03
    osd: 8 osds: 8 up (since 60m), 8 in (since 40h)
    rgw: 1 daemon active (ceph01)
 
  data:
    pools:   6 pools, 160 pgs
    objects: 192 objects, 1.4 KiB
    usage:   8.1 GiB used, 64 GiB / 72 GiB avail
    pgs:     160 active+clean

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool ls
pool1
pool2
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log

Access http://ceph01:7480 to verify the gateway is responding (7480 is the default radosgw port).
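To use the gateway over S3, an RGW user is normally created first; the uid and display name below are example values, and the command prints the generated access and secret keys:

radosgw-admin user create --uid=testuser --display-name="Test User"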

Create CephFS

Create an MDS daemon on ceph02

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mds create ceph02
[cephadm@ceph-admin ceph-cluster]$ ceph mds stat
 1 up:standby

Create the metadata and data pools

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool create pool3  64 64 
pool 'pool3' created

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool create pool4  64 64 
pool 'pool4' created
[cephadm@ceph-admin ceph-cluster]$ ceph fs new cephfs pool3 pool4 
new fs with metadata pool 7 and data pool 8
[cephadm@ceph-admin ceph-cluster]$ ceph fs status cephfs
cephfs - 0 clients
======
+------+--------+--------+---------------+-------+-------+
| Rank | State  |  MDS   |    Activity   |  dns  |  inos |
+------+--------+--------+---------------+-------+-------+
|  0   | active | ceph02 | Reqs:    0 /s |   10  |   13  |
+------+--------+--------+---------------+-------+-------+
+-------+----------+-------+-------+
|  Pool |   type   |  used | avail |
+-------+----------+-------+-------+
| pool3 | metadata | 1536k | 20.1G |
| pool4 |   data   |    0  | 20.1G |
+-------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
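The filesystem can then be mounted on a client with the kernel CephFS driver. A minimal sketch, assuming the admin key is saved on the client (the secret file path is an assumption):

ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.48.11:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret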

Basic Ceph commands

View cluster status

[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     8a83b874-efa4-4655-b070-704e63553839
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 76m)
    mgr: ceph04(active, since 76m), standbys: ceph03
    mds: cephfs:1 {0=ceph02=up:active}
    osd: 8 osds: 8 up (since 76m), 8 in (since 40h)
    rgw: 1 daemon active (ceph01)
 
  data:
    pools:   8 pools, 288 pgs
    objects: 214 objects, 3.6 KiB
    usage:   8.1 GiB used, 64 GiB / 72 GiB avail
    pgs:     288 active+clean

View PG information

[cephadm@ceph-admin ceph-cluster]$ ceph pg stat
288 pgs: 288 active+clean; 3.6 KiB data, 81 MiB used, 64 GiB / 72 GiB avail
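Per-PG details can also be listed, for example for a single pool (pool1 is the pool created earlier):

ceph pg ls-by-pool pool1
ceph pg dump pgs_brief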

View pool information

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool stats
pool pool1 id 1
  nothing is going on

pool pool2 id 2
  nothing is going on

pool .rgw.root id 3
  nothing is going on

pool default.rgw.control id 4
  nothing is going on

pool default.rgw.meta id 5
  nothing is going on

pool default.rgw.log id 6
  nothing is going on

pool pool3 id 7
  nothing is going on

pool pool4 id 8
  nothing is going on

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool ls
pool1
pool2
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
pool3
pool4
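A more detailed per-pool view, including replica size and pg_num, is available as well:

ceph osd pool ls detail
ceph osd pool get pool1 all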

View storage usage

[cephadm@ceph-admin ceph-cluster]$ ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED 
    hdd       72 GiB     64 GiB     81 MiB      8.1 GiB         11.22 
    TOTAL     72 GiB     64 GiB     81 MiB      8.1 GiB         11.22 
 
POOLS:
    POOL                    ID     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    pool1                    1         0 B           0         0 B         0        20 GiB 
    pool2                    2       197 B           5     576 KiB         0        20 GiB 
    .rgw.root                3     1.2 KiB           4     768 KiB         0        20 GiB 
    default.rgw.control      4         0 B           8         0 B         0        20 GiB 
    default.rgw.meta         5         0 B           0         0 B         0        20 GiB 
    default.rgw.log          6         0 B         175         0 B         0        20 GiB 
    pool3                    7     2.2 KiB          22     1.5 MiB         0        20 GiB 
    pool4                    8         0 B           0         0 B         0        20 GiB 

[cephadm@ceph-admin ceph-cluster]$ ceph df detail 
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED 
    hdd       72 GiB     64 GiB     81 MiB      8.1 GiB         11.22 
    TOTAL     72 GiB     64 GiB     81 MiB      8.1 GiB         11.22 
 
POOLS:
    POOL                    ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY     USED COMPR     UNDER COMPR 
    pool1                    1         0 B           0         0 B         0        20 GiB     N/A               N/A                 0            0 B             0 B 
    pool2                    2       197 B           5     576 KiB         0        20 GiB     N/A               N/A                 5            0 B             0 B 
    .rgw.root                3     1.2 KiB           4     768 KiB         0        20 GiB     N/A               N/A                 4            0 B             0 B 
    default.rgw.control      4         0 B           8         0 B         0        20 GiB     N/A               N/A                 8            0 B             0 B 
    default.rgw.meta         5         0 B           0         0 B         0        20 GiB     N/A               N/A                 0            0 B             0 B 
    default.rgw.log          6         0 B         175         0 B         0        20 GiB     N/A               N/A               175            0 B             0 B 
    pool3                    7     2.2 KiB          22     1.5 MiB         0        20 GiB     N/A               N/A                22            0 B             0 B 
    pool4                    8         0 B           0         0 B         0        20 GiB     N/A               N/A                 0            0 B             0 B 

View OSD information

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
8 osds: 8 up (since 83m), 8 in (since 40h); epoch: e77
[cephadm@ceph-admin ceph-cluster]$ ceph osd dump
epoch 77
fsid 8a83b874-efa4-4655-b070-704e63553839
created 2019-07-11 22:13:58.072667
modified 2019-07-13 15:20:43.064716
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 17
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release nautilus
pool 1 'pool1' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 43 flags hashpspool stripe_width 0
pool 2 'pool2' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 60 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
	removed_snaps [1~3]
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 64 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 66 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 68 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 70 flags hashpspool stripe_width 0 application rgw
pool 7 'pool3' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 77 flags hashpspool stripe_width 0 application cephfs
pool 8 'pool4' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 77 flags hashpspool stripe_width 0 application cephfs
max_osd 8
osd.0 up   in  weight 1 up_from 46 up_thru 74 down_at 44 last_clean_interval [36,43) [v2:192.168.48.11:6800/1828,v1:192.168.48.11:6801/1828] [v2:192.168.126.101:6800/1828,v1:192.168.126.101:6801/1828] exists,up d9349281-5ae9-49d7-8c8c-ca3774320fbd
osd.1 up   in  weight 1 up_from 48 up_thru 74 down_at 45 last_clean_interval [37,44) [v2:192.168.48.12:6804/1831,v1:192.168.48.12:6805/1831] [v2:192.168.126.102:6804/1831,v1:192.168.126.102:6805/1831] exists,up bfb7bbd4-0d96-40e7-99c5-ceb47a4a7ec8
osd.2 up   in  weight 1 up_from 50 up_thru 74 down_at 47 last_clean_interval [38,44) [v2:192.168.48.13:6804/1873,v1:192.168.48.13:6805/1873] [v2:192.168.126.103:6804/1873,v1:192.168.126.103:6805/1873] exists,up faf41352-628b-40cf-8132-12623c471e77
osd.3 up   in  weight 1 up_from 52 up_thru 74 down_at 45 last_clean_interval [39,44) [v2:192.168.48.14:6800/1826,v1:192.168.48.14:6801/1826] [v2:192.168.126.104:6800/1826,v1:192.168.126.104:6801/1826] exists,up 3903cfac-b669-49e4-b9d0-8cd4d69109ec
osd.4 up   in  weight 1 up_from 47 up_thru 74 down_at 45 last_clean_interval [36,44) [v2:192.168.48.11:6804/1826,v1:192.168.48.11:6805/1826] [v2:192.168.126.101:6804/1826,v1:192.168.126.101:6805/1826] exists,up e3102a32-dfb3-42c7-8d6f-617c030808f7
osd.5 up   in  weight 1 up_from 48 up_thru 74 down_at 45 last_clean_interval [37,44) [v2:192.168.48.12:6800/1830,v1:192.168.48.12:6801/1830] [v2:192.168.126.102:6800/1830,v1:192.168.126.102:6801/1830] exists,up e3d90a7c-f992-4451-b07e-6c620ec38d09
osd.6 up   in  weight 1 up_from 50 up_thru 74 down_at 48 last_clean_interval [38,44) [v2:192.168.48.13:6800/1872,v1:192.168.48.13:6801/1872] [v2:192.168.126.103:6800/1872,v1:192.168.126.103:6801/1872] exists,up cc6f6ece-34af-4f69-8d17-bf60a643d6f6
osd.7 up   in  weight 1 up_from 52 up_thru 74 down_at 45 last_clean_interval [40,44) [v2:192.168.48.14:6804/1828,v1:192.168.48.14:6805/1828] [v2:192.168.126.104:6804/1828,v1:192.168.126.104:6805/1828] exists,up 65d72a74-b01b-456a-a20c-3ef337a01c31

[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.07031 root default                            
-3       0.01758     host ceph01                         
 0   hdd 0.00879         osd.0       up  1.00000 1.00000 
 4   hdd 0.00879         osd.4       up  1.00000 1.00000 
-5       0.01758     host ceph02                         
 1   hdd 0.00879         osd.1       up  1.00000 1.00000 
 5   hdd 0.00879         osd.5       up  1.00000 1.00000 
-7       0.01758     host ceph03                         
 2   hdd 0.00879         osd.2       up  1.00000 1.00000 
 6   hdd 0.00879         osd.6       up  1.00000 1.00000 
-9       0.01758     host ceph04                         
 3   hdd 0.00879         osd.3       up  1.00000 1.00000 
 7   hdd 0.00879         osd.7       up  1.00000 1.00000 
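Per-OSD utilization can be checked alongside the tree view:

ceph osd df
ceph osd df tree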

View MON information

[cephadm@ceph-admin ceph-cluster]$ ceph mon stat
e1: 3 mons at {ceph01=[v2:192.168.48.11:3300/0,v1:192.168.48.11:6789/0],ceph02=[v2:192.168.48.12:3300/0,v1:192.168.48.12:6789/0],ceph03=[v2:192.168.48.13:3300/0,v1:192.168.48.13:6789/0]}, election epoch 12, leader 0 ceph01, quorum 0,1,2 ceph01,ceph02,ceph03

[cephadm@ceph-admin ceph-cluster]$ ceph mon dump
dumped monmap epoch 1
epoch 1
fsid 8a83b874-efa4-4655-b070-704e63553839
last_changed 2019-07-11 22:13:46.511105
created 2019-07-11 22:13:46.511105
min_mon_release 14 (nautilus)
0: [v2:192.168.48.11:3300/0,v1:192.168.48.11:6789/0] mon.ceph01
1: [v2:192.168.48.12:3300/0,v1:192.168.48.12:6789/0] mon.ceph02
2: [v2:192.168.48.13:3300/0,v1:192.168.48.13:6789/0] mon.ceph03

[cephadm@ceph-admin ceph-cluster]$ ceph quorum_status  --format=json-pretty

{
    "election_epoch": 12,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph01",
        "ceph02",
        "ceph03"
    ],
    "quorum_leader_name": "ceph01",
    "quorum_age": 5409,
    "monmap": {
        "epoch": 1,
        "fsid": "8a83b874-efa4-4655-b070-704e63553839",
        "modified": "2019-07-11 22:13:46.511105",
        "created": "2019-07-11 22:13:46.511105",
        "min_mon_release": 14,
        "min_mon_release_name": "nautilus",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph01",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.48.11:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.48.11:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.48.11:6789/0",
                "public_addr": "192.168.48.11:6789/0"
            },
            {
                "rank": 1,
                "name": "ceph02",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.48.12:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.48.12:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.48.12:6789/0",
                "public_addr": "192.168.48.12:6789/0"
            },
            {
                "rank": 2,
                "name": "ceph03",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.48.13:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.48.13:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.48.13:6789/0",
                "public_addr": "192.168.48.13:6789/0"
            }
        ]
    }
}

Query daemon information via the admin socket

[root@ceph01 ~]# cd /var/run/ceph/
[root@ceph01 ceph]# ls 
ceph-client.rgw.ceph01.4843.94191922840080.asok  ceph-mon.ceph01.asok  ceph-osd.0.asok  ceph-osd.4.asok

[root@ceph01 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
{
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "dump_blacklist": "dump blacklisted clients and times",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
    "dump_historic_slow_ops": "show slowest recent ops",
    "dump_mempools": "get mempool stats",
    "dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
    "dump_op_pq_state": "dump op priority queue state",
    "dump_ops_in_flight": "show the ops currently in flight",
    "dump_pgstate_history": "show recent state history",
    "dump_reservations": "show recovery reservations",
    "dump_scrubs": "print scheduled scrubs",
    "dump_watchers": "show clients which have active watches, and on which objects",
    "flush_journal": "flush the journal to permanent store",
    "flush_store_cache": "Flush bluestore internal cache",
    "get_command_descriptions": "list available commands",
    "get_heap_property": "get malloc extension heap property",
    "get_latest_osdmap": "force osd to update the latest map from the mon",
    "get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
    "getomap": "output entire object map",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectdataerr": "inject data error to an object",
    "injectfull": "Inject a full disk (optional count times)",
    "injectmdataerr": "inject metadata error to an object",
    "list_devices": "list OSD devices.",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rmomapkey": "remove omap key",
    "send_beacon": "send OSD beacon to mon immediately",
    "set_heap_property": "update malloc extension heap property",
    "set_recovery_delay": "Delay osd recovery by specified seconds",
    "setomapheader": "set omap header",
    "setomapval": "set omap key",
    "smart": "probe OSD devices for SMART data.",
    "status": "high-level status of OSD",
    "trigger_deep_scrub": "Trigger a scheduled deep scrub ",
    "trigger_scrub": "Trigger a scheduled scrub ",
    "truncobj": "truncate object to length",
    "version": "get ceph version"
}

[root@ceph01 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version
{"version":"14.2.1","release":"nautilus","release_type":"stable"}

[root@ceph01 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status
{
    "cluster_fsid": "8a83b874-efa4-4655-b070-704e63553839",
    "osd_fsid": "d9349281-5ae9-49d7-8c8c-ca3774320fbd",
    "whoami": 0,
    "state": "active",
    "oldest_map": 1,
    "newest_map": 77,
    "num_pgs": 100
}
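The admin socket is also handy for inspecting a single daemon's runtime configuration and counters; the subcommands below come from the help listing above, and osd_max_backfills is just an example option:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_backfills
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump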

Pool snapshots

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool mksnap  pool1 pool1-snap
created pool pool1 snap pool1-snap
[cephadm@ceph-admin ceph-cluster]$ rados -p pool1 lssnap
1	pool1-snap	2019.07.13 19:18:25
1 snaps
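A pool snapshot can later be used to roll an object back, or removed once it is no longer needed (a sketch; obj1 is a hypothetical object name, since test.txt was deleted earlier):

rados -p pool1 rollback obj1 pool1-snap
ceph osd pool rmsnap pool1 pool1-snap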
