A Build Log: Setting Up a MongoDB 3.4 Cluster

All of my data lives in MongoDB. A single machine had always been fine, but on a newly provisioned box I forgot to specify MongoDB's data directory; it quickly filled a roughly 50 GB disk, took MongoDB down, and cost me a lot of data. After switching the storage directory I spent considerable effort trying to export what had already been written, without success, and eventually had to give up. With data volume growing, one disk can no longer hold everything anyway, and with that painful lesson behind me, a complete and reliable cluster became an urgent need. After a day of work the cluster is up, and I am recording the build process here for discussion and correction.

A MongoDB cluster is the most complex database cluster I have ever set up. Unlike Redis or Cassandra, which work after some simple configuration, MongoDB requires you to understand a few building blocks before you start: mongos, config server, shard, and replica set.

mongos is the entry point for all requests to the cluster; every request is coordinated through it. You do not need to add a router to your application: mongos is itself a request-dispatch center that forwards each data request to the appropriate shard server. Production deployments usually run several mongos instances as entry points, so that one of them dying does not leave every MongoDB request with nowhere to go.

config server: as the name says, the configuration servers store all of the cluster's metadata (routing and shard configuration). mongos itself does not persist the shard and routing information; it only caches it in memory, while the config servers hold the actual data. When mongos starts for the first time, or is restarted, it loads its configuration from the config servers, and whenever the configuration changes the config servers notify every mongos to update its state, so routing stays accurate. Production deployments run several config servers, because they hold the sharding metadata and must not lose it!

shard: sharding means splitting a database and spreading it across machines. By distributing the data, you can store more of it and handle heavier load without needing one enormously powerful server. The basic idea is to cut a collection into small chunks and spread those chunks across several shards, so that each shard is responsible for only part of the total data; a balancer then keeps the shards even by migrating data between them.

Arbiter: an arbiter is a MongoDB instance in a replica set that stores no data. It uses minimal resources and has no special hardware requirements. Do not deploy an arbiter on a node that already holds the set's data; put it on an application server, a monitoring host, or a separate VM. To keep an odd number of voting members (including the primary) in a replica set, add an arbiter as a voter; otherwise, when the primary fails, no new primary will be elected automatically.
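
Arbiters are not used in this build, but for reference one is added from the primary's shell like this (the host and port here are hypothetical):

rs.addArb("192.168.22.16:27007")   // joins the set as a voting, non-data-bearing member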

replica set: a replica set is in effect the backup of a shard, guarding against data loss when a shard member dies. Replication provides redundancy by storing copies of the data on multiple servers, which improves availability and keeps the data safe.

Startup order for the MongoDB cluster (summarized in the sketch after this list):

  1. config server
  2. shard
  3. mongos
  4. enable sharding
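
A bird's-eye sketch of the whole procedure; each step is detailed in the sections below, and the config file names are the ones this post creates later:

mongod -f conf/config.conf     # 1. config servers, on every machine
mongod -f conf/shard01.conf    # 2. shard servers, shard01..shard06, on every machine
mongos -f conf/mongos.conf     # 3. mongos routers
mongo --port 20000             # 4. enable sharding via sh.addShard(...)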

Environment:

Six CentOS 6.2 machines:

  • 192.168.22.12
  • 192.168.22.13
  • 192.168.22.14
  • 192.168.22.15
  • 192.168.25.14
  • 192.168.25.15

Port assignments:

  • mongos:20000
  • config:21000
  • shard01:27001
  • shard02:27002
  • shard03:27003
  • shard04:27004
  • shard05:27005
  • shard06:27006

Download the release tarball

wget http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.4.8.tgz

Unpack it

tar -zxvf mongodb-linux-x86_64-rhel62-3.4.8.tgz

Move it into place

mv mongodb-linux-x86_64-rhel62-3.4.8 mongodb
mv mongodb /home/usr/
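
One optional convenience (my own addition, assuming the layout above) is putting the binaries on PATH so that the mongod, mongos, and mongo commands used below resolve from any directory:

# assumes MongoDB was unpacked to /home/usr/mongodb as above
echo 'export PATH=$PATH:/home/usr/mongodb/bin' >> /etc/profile
source /etc/profile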

Create the required data and log directories. There are too many to create by hand on every machine, so I wrote a shell script:

vim mkfile.sh

#!/bin/bash

# wipe and recreate the whole directory tree
rm -rf /home/data
mkdir -p /home/data/mongodb/config/{data,log}
mkdir -p /home/data/mongodb/mongos/log
for s in shard01 shard02 shard03 shard04 shard05 shard06; do
    mkdir -p /home/data/mongodb/$s/{data,log}
done

Sync it to the other machines, then run it on each (this one included):

scp mkfile.sh root@192.168.22.13:/home
scp mkfile.sh root@192.168.22.14:/home
scp mkfile.sh root@192.168.22.15:/home
scp mkfile.sh root@192.168.25.14:/home
scp mkfile.sh root@192.168.25.15:/home

bash mkfile.sh
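
A quick sanity check that the tree exists on each machine (optional):

find /home/data/mongodb -maxdepth 2 -type d | sort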

Configure the config servers

From MongoDB 3.4 onwards the config servers must also run as a replica set, or the cluster will not come up. There are two ways to start them: (1) from the command line; (2) from a config file.

1. Command-line start. On anything but the first launch you need to add the --logappend option, and --pidfilepath must be specified or you will run into errors later when configuring mongos:

mongod --configsvr --replSet andy --port 21000 --dbpath /home/data/mongodb/config/data/ --logpath /home/data/mongodb/config/log/config.log --pidfilepath /home/data/mongodb/config/log/configsrv.pid --fork

2. Config-file start. Create the config file:

mkdir /home/usr/mongodb/conf
vim /home/usr/mongodb/conf/config.conf

The contents of config.conf are as follows:

pidfilepath = /home/data/mongodb/config/log/configsrv.pid
dbpath = /home/data/mongodb/config/data/
logpath = /home/data/mongodb/config/log/config.log

logappend = true
bind_ip = 0.0.0.0
port = 21000
fork = true

configsvr = true

# replica set name
replSet=andy

# maximum number of connections
maxConns=20000

Then sync the file to the other machines and start the config server on each:

[root@andy3 mongodb]# mongod -f conf/config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 14295
child process started successfully, parent exiting
[root@andy3 mongodb]#

Whichever way you start them, this output means the config server came up successfully. Next, open a mongo shell on any one of the machines (mongo --port 21000) and initialize the config server replica set:

> config = {
    _id: "andy", 
    configsvr: true, 
    members:[
        {_id:1, host:"192.168.22.12:21000"}, 
        {_id:2, host:"192.168.22.13:21000"},
        {_id:3, host:"192.168.22.14:21000"}
    ]
}
> rs.initiate(config)
{ "ok" : 1 }
andy:SECONDARY> exit

Seeing { "ok" : 1 } means the initialization succeeded.
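
To double-check that the set has elected a primary, you can query its status from any machine (a quick optional check):

mongo --port 21000 --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'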

Start the shard replica sets
Start the planned shards shard01 - shard06 on each machine. As with the config servers you can start them from a config file or from the command line; both are shown below, but I recommend the config file.

1. Config-file start, taking shard01 as the example. The config file is as follows:

vim /home/usr/mongodb/conf/shard01.conf

pidfilepath = /home/data/mongodb/shard01/log/shard01.pid
dbpath = /home/data/mongodb/shard01/data
logpath = /home/data/mongodb/shard01/log/shard01.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true

# enable the HTTP status interface
httpinterface=true
rest=true

# replica set name
replSet=shard1

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Then sync this file to the same location on the other machines and start shard01:

[root@andy3 mongodb]# mongod -f  conf/shard01.conf
about to fork child process, waiting until server is ready for connections.
forked process: 14817
child process started successfully, parent exiting
[root@andy3 mongodb]#
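
On each machine you can confirm the processes are up and listening (an optional check):

ps -ef | grep mongod | grep -v grep
netstat -lntp | grep 2700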

This output means the shard started. shard02 through shard06 follow exactly the same steps as shard01; only the file paths, the port, and the replica set name change. For example, shard02's config file is:

vim /home/usr/mongodb/conf/shard02.conf

pidfilepath = /home/data/mongodb/shard02/log/shard02.pid
dbpath = /home/data/mongodb/shard02/data
logpath = /home/data/mongodb/shard02/log/shard02.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

# enable the HTTP status interface
httpinterface=true
rest=true

# replica set name
replSet=shard2

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000
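
Since the six files differ only in their numbering, here is a sketch that generates all of them in one go (assuming the paths used throughout this post):

# generate shard01.conf .. shard06.conf; only the shard number varies
for i in 1 2 3 4 5 6; do
  n=$(printf "%02d" $i)
  cat > /home/usr/mongodb/conf/shard$n.conf <<EOF
pidfilepath = /home/data/mongodb/shard$n/log/shard$n.pid
dbpath = /home/data/mongodb/shard$n/data
logpath = /home/data/mongodb/shard$n/log/shard$n.log
logappend = true

bind_ip = 0.0.0.0
port = 2700$i
fork = true

httpinterface=true
rest=true

replSet=shard$i
shardsvr = true
maxConns=20000
EOF
done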

2. Command-line start

shard1

mongod --shardsvr --replSet shard1 --port 27001 --dbpath /home/data/mongodb/shard01/data --logpath /home/data/mongodb/shard01/log/shard01.log --pidfilepath /home/data/mongodb/shard01/log/shard01.pid --fork

shard2

mongod --shardsvr --replSet shard2 --port 27002 --dbpath /home/data/mongodb/shard02/data --logpath /home/data/mongodb/shard02/log/shard02.log --pidfilepath /home/data/mongodb/shard02/log/shard02.pid --fork

shard3

mongod --shardsvr --replSet shard3 --port 27003 --dbpath /home/data/mongodb/shard03/data --logpath /home/data/mongodb/shard03/log/shard03.log --pidfilepath /home/data/mongodb/shard03/log/shard03.pid --fork

shard4

mongod --shardsvr --replSet shard4 --port 27004 --dbpath /home/data/mongodb/shard04/data --logpath /home/data/mongodb/shard04/log/shard04.log --pidfilepath /home/data/mongodb/shard04/log/shard04.pid --fork

shard5

mongod --shardsvr --replSet shard5 --port 27005 --dbpath /home/data/mongodb/shard05/data --logpath /home/data/mongodb/shard05/log/shard05.log --pidfilepath /home/data/mongodb/shard05/log/shard05.pid --fork

shard6

mongod --shardsvr --replSet shard6 --port 27006 --dbpath /home/data/mongodb/shard06/data --logpath /home/data/mongodb/shard06/log/shard06.log --pidfilepath /home/data/mongodb/shard06/log/shard06.pid --fork

All six commands must be run on every machine. On anything but the first launch, add the --logappend flag.

Initialize the shard replica sets
The following can be done from any one machine in the cluster.

Initialize shard1

mongo --port 27001


# switch to the admin database
> use admin

# define the replica set configuration
> config = { _id:"shard1", members:[
    {_id:0,host:"192.168.22.12:27001"},
    {_id:1,host:"192.168.25.15:27001"},
    {_id:2,host:"192.168.22.15:27001"}]
}

# initialize the replica set
> rs.initiate(config)
{ "ok" : 1 }
shard1:OTHER> 

Initialize shard2

mongo --port 27002


> use admin
> config = { _id:"shard2", members:[
    {_id:0,host:"192.168.22.13:27002"},
    {_id:1,host:"192.168.22.14:27002"},
    {_id:2,host:"192.168.25.14:27002"}]
}
> rs.initiate(config)
{ "ok" : 1 }
shard2:OTHER> 

Initialize shard3

mongo --port 27003


> use admin
> config = { _id:"shard3", members:[
    {_id:0,host:"192.168.22.14:27003"},
    {_id:1,host:"192.168.25.14:27003"},
    {_id:2,host:"192.168.22.12:27003"}]
}
> rs.initiate(config)
{ "ok" : 1 }
shard3:OTHER> 

Initialize shard4

mongo --port 27004


> use admin
> config = { _id:"shard4", members:[
    {_id:0,host:"192.168.22.15:27004"},
    {_id:1,host:"192.168.22.13:27004"},
    {_id:2,host:"192.168.25.15:27004"}]
}
> rs.initiate(config)
{ "ok" : 1 }
shard4:OTHER> 

Initialize shard5

mongo --port 27005


> use admin
> config = { _id:"shard5", members:[
    {_id:0,host:"192.168.25.14:27005"},
    {_id:1,host:"192.168.22.12:27005"},
    {_id:2,host:"192.168.22.15:27005"}]
}
> rs.initiate(config)
{ "ok" : 1 }
shard5:OTHER> 

Initialize shard6

mongo --port 27006


> use admin
> config = { _id:"shard6", members:[
    {_id:0,host:"192.168.25.15:27006"},
    {_id:1,host:"192.168.22.13:27006"},
    {_id:2,host:"192.168.22.14:27006"}]
}
> rs.initiate(config)
{ "ok" : 1 }
shard6:OTHER> 
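
Before wiring up mongos, a loop like this verifies that each set has come up and elected a primary (optional; instances that were started but never added to a set will simply report that they have no config):

for p in 27001 27002 27003 27004 27005 27006; do
  echo "== port $p =="
  mongo --port $p --quiet --eval 'var s = rs.status(); print(s.set ? s.set + ": " + s.members.map(function(m){ return m.name + " (" + m.stateStr + ")" }).join(", ") : s.errmsg)'
done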

Configure and start the mongos routers

The mongos config file:

vim /home/usr/mongodb/conf/mongos.conf 



pidfilepath = /home/data/mongodb/mongos/log/mongos.pid
logpath = /home/data/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

# config servers to connect to (there must be 1 or 3 of them); 'andy' is the config server replica set name
configdb = andy/192.168.22.12:21000,192.168.22.13:21000,192.168.22.14:21000

# maximum number of connections
maxConns=20000

scp this config file to the other machines and start the mongos server on each:

[root@andy3 mongodb]# mongos -f conf/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 15246
child process started successfully, parent exiting
[root@andy3 mongodb]#

It can also be started from the command line:

mongos --configdb andy/192.168.22.12:21000,192.168.22.13:21000,192.168.22.14:21000 --port 20000 --logpath /home/data/mongodb/mongos/log/mongos.log --pidfilepath /home/data/mongodb/mongos/log/mongos.pid --fork

Note that there must be no spaces between the IPs after the configdb parameter, otherwise it errors out (complaining about a redundant argument).

Enable sharding

We now have MongoDB's config servers, mongos servers, and shards in place, but an application connecting to a mongos router cannot use the sharding machinery yet: the shards still have to be registered for sharding to take effect.

[root@andy3 mongodb]#  mongo --port 20000
mongos> use  admin
switched to db admin
mongos> sh.addShard("shard1/192.168.22.12:27001,192.168.25.15:27001,192.168.22.15:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/192.168.22.13:27002,192.168.22.14:27002,192.168.25.14:27002")
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/192.168.22.14:27003,192.168.25.14:27003,192.168.22.12:27003")
{ "shardAdded" : "shard3", "ok" : 1 }
mongos> sh.addShard("shard4/192.168.22.15:27004,192.168.22.13:27004,192.168.25.15:27004")
{ "shardAdded" : "shard4", "ok" : 1 }
mongos> sh.addShard("shard5/192.168.25.14:27005,192.168.22.12:27005,192.168.22.15:27005")
{ "shardAdded" : "shard5", "ok" : 1 }
mongos> sh.addShard("shard6/192.168.25.15:27006,192.168.22.13:27006,192.168.22.14:27006")
{ "shardAdded" : "shard6", "ok" : 1 }

The IPs and ports above must match the ones used when the shard replica sets were initialized.
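
From the same mongos shell, sh.status() should now list all six shards and their members:

mongos> sh.status()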

Test the cluster

[root@andy3 conf]# mongo --port 20000
MongoDB shell version v3.4.8
connecting to: mongodb://127.0.0.1:20000/
MongoDB server version: 3.4.8
Server has startup warnings: 
2017-09-21T14:47:11.702+0800 I CONTROL  [main] 
2017-09-21T14:47:11.709+0800 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2017-09-21T14:47:11.709+0800 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2017-09-21T14:47:11.709+0800 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2017-09-21T14:47:11.709+0800 I CONTROL  [main] 
mongos> use admin
switched to db admin
mongos> db.runCommand( { enablesharding :"andytest"});
{ "ok" : 1 }
mongos> db.runCommand( { shardcollection : "andytest.larry",key : {id: 1} } )
{ "collectionsharded" : "andytest.larry", "ok" : 1 }
mongos> use andytest
switched to db andytest
mongos> db.larry.stats();
{
    "sharded" : true,
    "capped" : false,
    "ns" : "andytest.larry",
    "count" : 0,
    "size" : 0,
    "storageSize" : 4096,
    "totalIndexSize" : 8192,
    "indexSizes" : {
        "_id_" : 4096,
        "id_1" : 4096
    },
    "avgObjSize" : 0,
    "nindexes" : 2,
    "nchunks" : 1,
    "shards" : {
        "shard1" : {
            "ns" : "andytest.larry",
            "size" : 0,
            "count" : 0,
            "storageSize" : 4096,
            "capped" : false,
            "wiredTiger" : {
                "metadata" : {
                    "formatVersion" : 1
                },
                "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
                "type" : "file",
                "uri" : "statistics:table:collection-17--4211418449623313846",
                "LSM" : {
                    "bloom filter false positives" : 0,
                    "bloom filter hits" : 0,
                    "bloom filter misses" : 0,
                    "bloom filter pages evicted from cache" : 0,
                    "bloom filter pages read into cache" : 0,
                    "bloom filters in the LSM tree" : 0,
                    "chunks in the LSM tree" : 0,
                    "highest merge generation in the LSM tree" : 0,
                    "queries that could have benefited from a Bloom filter that did not exist" : 0,
                    "sleep for LSM checkpoint throttle" : 0,
                    "sleep for LSM merge throttle" : 0,
                    "total size of bloom filters" : 0
                },
                "block-manager" : {
                    "allocations requiring file extension" : 0,
                    "blocks allocated" : 0,
                    "blocks freed" : 0,
                    "checkpoint size" : 0,
                    "file allocation unit size" : 4096,
                    "file bytes available for reuse" : 0,
                    "file magic number" : 120897,
                    "file major version number" : 1,
                    "file size in bytes" : 4096,
                    "minor version number" : 0
                },
                "btree" : {
                    "btree checkpoint generation" : 401,
                    "column-store fixed-size leaf pages" : 0,
                    "column-store internal pages" : 0,
                    "column-store variable-size RLE encoded values" : 0,
                    "column-store variable-size deleted values" : 0,
                    "column-store variable-size leaf pages" : 0,
                    "fixed-record size" : 0,
                    "maximum internal page key size" : 368,
                    "maximum internal page size" : 4096,
                    "maximum leaf page key size" : 2867,
                    "maximum leaf page size" : 32768,
                    "maximum leaf page value size" : 67108864,
                    "maximum tree depth" : 0,
                    "number of key/value pairs" : 0,
                    "overflow pages" : 0,
                    "pages rewritten by compaction" : 0,
                    "row-store internal pages" : 0,
                    "row-store leaf pages" : 0
                },
                "cache" : {
                    "bytes currently in the cache" : 165,
                    "bytes read into cache" : 0,
                    "bytes written from cache" : 0,
                    "checkpoint blocked page eviction" : 0,
                    "data source pages selected for eviction unable to be evicted" : 0,
                    "hazard pointer blocked page eviction" : 0,
                    "in-memory page passed criteria to be split" : 0,
                    "in-memory page splits" : 0,
                    "internal pages evicted" : 0,
                    "internal pages split during eviction" : 0,
                    "leaf pages split during eviction" : 0,
                    "modified pages evicted" : 0,
                    "overflow pages read into cache" : 0,
                    "overflow values cached in memory" : 0,
                    "page split during eviction deepened the tree" : 0,
                    "page written requiring lookaside records" : 0,
                    "pages read into cache" : 0,
                    "pages read into cache requiring lookaside entries" : 0,
                    "pages requested from the cache" : 0,
                    "pages written from cache" : 0,
                    "pages written requiring in-memory restoration" : 0,
                    "tracked dirty bytes in the cache" : 0,
                    "unmodified pages evicted" : 0
                },
                "cache_walk" : {
                    "Average difference between current eviction generation when the page was last considered" : 0,
                    "Average on-disk page image size seen" : 0,
                    "Clean pages currently in cache" : 0,
                    "Current eviction generation" : 0,
                    "Dirty pages currently in cache" : 0,
                    "Entries in the root page" : 0,
                    "Internal pages currently in cache" : 0,
                    "Leaf pages currently in cache" : 0,
                    "Maximum difference between current eviction generation when the page was last considered" : 0,
                    "Maximum page size seen" : 0,
                    "Minimum on-disk page image size seen" : 0,
                    "On-disk page image sizes smaller than a single allocation unit" : 0,
                    "Pages created in memory and never written" : 0,
                    "Pages currently queued for eviction" : 0,
                    "Pages that could not be queued for eviction" : 0,
                    "Refs skipped during cache traversal" : 0,
                    "Size of the root page" : 0,
                    "Total number of pages currently in cache" : 0
                },
                "compression" : {
                    "compressed pages read" : 0,
                    "compressed pages written" : 0,
                    "page written failed to compress" : 0,
                    "page written was too small to compress" : 0,
                    "raw compression call failed, additional data available" : 0,
                    "raw compression call failed, no additional data available" : 0,
                    "raw compression call succeeded" : 0
                },
                "cursor" : {
                    "bulk-loaded cursor-insert calls" : 0,
                    "create calls" : 1,
                    "cursor-insert key and value bytes inserted" : 0,
                    "cursor-remove key bytes removed" : 0,
                    "cursor-update value bytes updated" : 0,
                    "insert calls" : 0,
                    "next calls" : 1,
                    "prev calls" : 1,
                    "remove calls" : 0,
                    "reset calls" : 2,
                    "restarted searches" : 0,
                    "search calls" : 0,
                    "search near calls" : 0,
                    "truncate calls" : 0,
                    "update calls" : 0
                },
                "reconciliation" : {
                    "dictionary matches" : 0,
                    "fast-path pages deleted" : 0,
                    "internal page key bytes discarded using suffix compression" : 0,
                    "internal page multi-block writes" : 0,
                    "internal-page overflow keys" : 0,
                    "leaf page key bytes discarded using prefix compression" : 0,
                    "leaf page multi-block writes" : 0,
                    "leaf-page overflow keys" : 0,
                    "maximum blocks required for a page" : 0,
                    "overflow values written" : 0,
                    "page checksum matches" : 0,
                    "page reconciliation calls" : 0,
                    "page reconciliation calls for eviction" : 0,
                    "pages deleted" : 0
                },
                "session" : {
                    "object compaction" : 0,
                    "open cursor count" : 1
                },
                "transaction" : {
                    "update conflicts" : 0
                }
            },
            "nindexes" : 2,
            "totalIndexSize" : 8192,
            "indexSizes" : {
                "_id_" : 4096,
                "id_1" : 4096
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
mongos> 
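
As a final exercise you can pump in some test data and watch how it spreads across shards; the document shape below is made up, but the id field matches the shard key declared above:

mongos> use andytest
mongos> for (var i = 0; i < 100000; i++) { db.larry.insert({ id: i, name: "larry" + i }) }
mongos> db.larry.getShardDistribution()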

At this point the cluster is fully installed and tested. Try inserting data into sharded and unsharded collections and compare the behavior. If you have questions, join the QQ group to discuss:

QQ group: 526855734
