Storage + Tuning: Storage - IP-SAN Extension

File-system lock markers
GFS (lock table space)

            ----------         ----------         ----------
Cluster     |  node1 |         |  node2 |         |  node3 |
nodes       ----------         ----------         ----------
                  \                |                 /
                   \               |                /
                    \              |               /
                   ---------- switch -------------
                    /              |               \
                   /               |                \
                  /                |                 \
            ----------         ----------         ----------
Storage     |  node4 |         |  node5 |         |  node6 |
nodes       ----------         ----------         ----------

Preparation

IP:    node1    172.16.1.1/24
       node2    172.16.1.2/24
       node3    172.16.1.3/24
       node4    172.16.1.4/24
       node5    172.16.1.5/24
       node6    172.16.1.6/24

Prepare on every node (see the /etc/hosts sketch below):
hostname
/etc/hosts
iptables
selinux
yum
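
A minimal /etc/hosts sketch, assuming the node*.uplooking.com names that cluster.conf uses below:

172.16.1.1    node1.uplooking.com    node1
172.16.1.2    node2.uplooking.com    node2
172.16.1.3    node3.uplooking.com    node3
172.16.1.4    node4.uplooking.com    node4
172.16.1.5    node5.uplooking.com    node5
172.16.1.6    node6.uplooking.com    node6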

1. Configure cluster nodes node1 and node2

    Install the cluster packages
[root@node1 ~]# yum install cman openais
[root@node1 ~]# yum install system-config-cluster

    Configure the cluster with system-config-cluster
[root@node1 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0" ?>
<cluster config_version="2" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
[root@node1 ~]# scp /etc/cluster/cluster.conf node2:/etc/cluster/
[root@node1 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]

[root@node1 ~]# cman_tool status
Version: 6.2.0
Config Version: 2
Cluster Name: iscsi_cluster
Cluster Id: 26292
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1  
Active subsystems: 7
Flags: 2node Dirty 
Ports Bound: 0  
Node name: node1.uplooking.com
Node ID: 1
Multicast addresses: 239.192.102.27 
Node addresses: 172.16.1.1 
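
Membership can be cross-checked from either node; cman_tool nodes lists each member and its state:

# Both node1 and node2 should show up with status M (member)
cman_tool nodes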

2. Configure storage nodes node4 and node5 (node4 is shown; node5 is analogous, see the sketch after the tgt-admin output)
[root@node4 ~]# mkdir /iscsi
[root@node4 ~]# dd if=/dev/zero of=/iscsi/disk-node4 bs=1M count=500
[root@node4 ~]# yum install scsi-target-utils

[root@node4 ~]# vim /etc/tgt/targets.conf 

default-driver iscsi


# Continue if tgtadm exits with non-zero code (equivalent of
# --ignore-errors command line option)
#ignore-errors yes


# Sample target with one LUN only. Defaults to allow access for all initiators:

<target iqn.2012-02.com.uplooking:node4.target1>
    backing-store /iscsi/disk-node4
    write-cache off
    vendor_id node4
    product_id storage4
    initiator-address 172.16.1.1
    initiator-address 172.16.1.2
</target>
 
[root@node4 ~]# service tgtd start
Starting SCSI target daemon: Starting target framework daemon

[root@node4 ~]# tgt-admin --show
Target 1: iqn.2012-02.com.uplooking:node4.target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 524 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /iscsi/disk-node4
    Account information:
    ACL information:
        172.16.1.1
        172.16.1.2
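
node5 is configured the same way; a sketch of its targets.conf entry, assuming the same layout as node4 (the IQN and vendor/product strings match what discovery and the udev rules below report):

<target iqn.2012-02.com.uplooking:node5.target1>
    backing-store /iscsi/disk-node5
    write-cache off
    vendor_id node5
    product_id storage5
    initiator-address 172.16.1.1
    initiator-address 172.16.1.2
</target>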
         

3. Cluster nodes node1 and node2 discover and log in to the node4 and node5 targets

[root@node1 ~]# yum install iscsi-initiator-utils
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.4:3260
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Could not scan /sys/class/iscsi_transport.
iscsiadm: Could not scan /sys/class/iscsi_transport.
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Cannot perform discovery. Initiatorname required.
iscsiadm: Discovery process to 172.16.1.4:3260 failed to create a discovery session.
iscsiadm: Could not perform SendTargets discovery.

Discovery fails because the iscsid daemon is not running yet; start the iscsi service first:
[root@node1 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.4:3260
172.16.1.4:3260,1 iqn.2012-02.com.uplooking:node4.target1
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.5:3260
172.16.1.5:3260,1 iqn.2012-02.com.uplooking:node5.target1
[root@node1 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node4.target1 -l
Logging in to [iface: default, target: iqn.2012-02.com.uplooking:node4.target1, portal: 172.16.1.4,3260]
Login to [iface: default, target: iqn.2012-02.com.uplooking:node4.target1, portal: 172.16.1.4,3260]: successful
[root@node1 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node5.target1 -l
Logging in to [iface: default, target: iqn.2012-02.com.uplooking:node5.target1, portal: 172.16.1.5,3260]
Login to [iface: default, target: iqn.2012-02.com.uplooking:node5.target1, portal: 172.16.1.5,3260]: successful
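
To have these sessions restored after a reboot, the node records can be marked automatic and the service enabled (a sketch; the RHEL 5 initiator defaults to automatic, so this only makes it explicit):

# Log back in to the node4 target at boot (repeat for the node5 target)
iscsiadm -m node -T iqn.2012-02.com.uplooking:node4.target1 -p 172.16.1.4 \
        --op update -n node.startup -v automatic
chkconfig iscsi on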
[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 524 MB, 524288000 bytes
17 heads, 59 sectors/track, 1020 cylinders
Units = cylinders of 1003 * 512 = 513536 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 524 MB, 524288000 bytes
17 heads, 59 sectors/track, 1020 cylinders
Units = cylinders of 1003 * 512 = 513536 bytes

Disk /dev/sdd doesn't contain a valid partition table

(/dev/sdc and /dev/sdd are the two new 500 MB iSCSI LUNs, from node4 and node5 respectively.)


4. Cluster nodes node1 and node2 create udev aliases for the devices
[root@node1 ~]# udevinfo -a -p /sys/block/sdc
[root@node1 ~]# udevinfo -a -p /sys/block/sdd
[root@node1 ~]# vim /etc/udev/rules.d/80-iscsi.rules
[root@node1 ~]# cat /etc/udev/rules.d/80-iscsi.rules
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage4", SYSFS{vendor}=="node4", SYMLINK="iscsi/node4"
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage5", SYSFS{vendor}=="node5", SYMLINK="iscsi/node5"
[root@node1 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@node1 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node4 -> ../sdc
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node5 -> ../sdd
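
If a symlink does not appear, the match keys can be compared with what the device actually reports (RHEL 5's udev uses the SYSFS{} keys; size is in 512-byte sectors, so 1024000 x 512 = 500 MB):

# Show the vendor/model/size attributes the rules match against
udevinfo -a -p /sys/block/sdc | egrep 'vendor|model|size'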


5. On cluster nodes node1 and node2, build LVM on the shared storage, create a GFS2 file system, and mount it on /iscsi
[root@node1 ~]# pvcreate /dev/iscsi/node4 
[root@node1 ~]# pvcreate /dev/iscsi/node5
[root@node1 ~]# vgcreate vg-iscsi /dev/iscsi/node5 /dev/iscsi/node4
[root@node1 ~]# lvcreate -l 125 -n lv-iscsi vg-iscsi    # 125 extents x 4 MiB = 500 MiB

[root@node1 ~]# yum install gfs2-utils kmod-gfs
[root@node1 ~]# modprobe gfs2
[root@node1 ~]# lsmod | grep gfs2
gfs2                  349833  1 lock_dlm

[root@node1 ~]# mkfs.gfs2 -t iscsi_cluster:table1 -p lock_dlm -j 2 /dev/vg-iscsi/lv-iscsi 
This will destroy any data on /dev/vg-iscsi/lv-iscsi.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/vg-iscsi/lv-iscsi
Blocksize:                 4096
Device Size                0.49 GB (128000 blocks)
Filesystem Size:           0.49 GB (127997 blocks)
Journals:                  2
Resource Groups:           2
Locking Protocol:          "lock_dlm"
Lock Table:                "iscsi_cluster:table1"
UUID:                      E010CF07-13CF-F783-0A9A-8DB10E6D3444
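
The cluster-name half of -t must match the name in cluster.conf (iscsi_cluster), and -j 2 creates one journal per mounting node, which is why node3 needs an extra journal later. What was written to the superblock can be read back with gfs2_tool (part of gfs2-utils):

# Verify the lock protocol and lock table recorded in the superblock
gfs2_tool sb /dev/vg-iscsi/lv-iscsi proto
gfs2_tool sb /dev/vg-iscsi/lv-iscsi table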

[root@node1 ~]# mkdir /iscsi
[root@node1 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi
[root@node1 ~]# echo "iscsi test" > /iscsi/file1

[root@node2 ~]# pvscan 
  Couldn't find device with uuid 'fOykMs-ByjL-X0Zh-oKOW-D8Yc-ZenO-fQ6AHJ'.
  PV /dev/sdd         VG vg-iscsi     lvm2 [496.00 MB / 0    free]
  PV /dev/sdc         VG vg-iscsi     lvm2 [496.00 MB / 492.00 MB free]
  PV /dev/sda2        VG VolGroup00   lvm2 [19.88 GB / 0    free]
  Total: 5 [60.84 GB] / in use: 5 [60.84 GB] / in no VG: 0 [0   ]

[root@node2 ~]# vgchange -ay vg-iscsi
  1 logical volume(s) in volume group "vg-iscsi" now active
[root@node2 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
[root@node2 ~]# cat /iscsi/file1 
iscsi test
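
Note that node2 had to activate the volume group by hand: plain LVM does not propagate metadata changes between nodes. In production the usual fix is clustered locking via clvmd, sketched here assuming the lvm2-cluster package is in the repository:

# Enable cluster-wide LVM locking so every node sees metadata changes
yum install lvm2-cluster         # provides clvmd
lvmconf --enable-cluster         # sets locking_type = 3 in lvm.conf
service clvmd start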


6. Add storage node node6: cluster nodes node1 and node2 discover and log in to the node6 target, create a udev alias for it, and grow lv-iscsi online with the new space
[root@node6 ~]# yum install scsi-target-utils
[root@node6 ~]# mkdir /iscsi
[root@node6 ~]# dd if=/dev/zero of=/iscsi/disk-node6 bs=1M count=5000
[root@node6 ~]# vim /etc/tgt/targets.conf 
# Set the driver. If not specified, defaults to "iscsi".

default-driver iscsi


# Continue if tgtadm exits with non-zero code (equivalent of
# --ignore-errors command line option)
#ignore-errors yes


# Sample target with one LUN only. Defaults to allow access for all initiators:

<target iqn.2012-02.com.uplooking:node6.target1>
    backing-store /iscsi/disk-node6
    write-cache off
    vendor_id node6
    product_id storage6
    initiator-address 172.16.1.1
    initiator-address 172.16.1.2
</target>

[root@node6 ~]# service tgtd start
Starting SCSI target daemon: Starting target framework daemon

[root@node6 ~]# tgt-admin --show


[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.6:3260
[root@node1 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node6.target1 -l
[root@node1 ~]# udevinfo -a -p /sys/block/sde
[root@node1 ~]# cat /etc/udev/rules.d/80-iscsi.rules
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage4", SYSFS{vendor}=="node4", SYMLINK="iscsi/node4"
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage5", SYSFS{vendor}=="node5", SYMLINK="iscsi/node5"
SUBSYSTEM=="block", SYSFS{size}=="2048000", SYSFS{model}=="storage6", SYSFS{vendor}=="node6", SYMLINK="iscsi/node6"
[root@node1 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@node1 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node4 -> ../sdc
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node5 -> ../sdd
lrwxrwxrwx 1 root root 6 Feb 29 01:43 node6 -> ../sde


[root@node1 ~]# pvcreate /dev/iscsi/node6 
  Physical volume "/dev/iscsi/node6" successfully created
[root@node1 ~]# vgextend vg-iscsi /dev/iscsi/node6
  /dev/cdrom: open failed: Read-only file system
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Volume group "vg-iscsi" successfully extended
[root@node1 ~]# lvextend -l 1246 /dev/vg-iscsi/lv-iscsi 
  /dev/cdrom: open failed: Read-only file system
  Extending logical volume lv-iscsi to 4.87 GB
  Logical volume lv-iscsi successfully resized
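
Before growing the file system, the new capacity can be confirmed with the standard LVM reporting tools:

# The VG should now include the node6 PV, and the LV should show its new size
vgs vg-iscsi
lvs /dev/vg-iscsi/lv-iscsi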

[root@node1 ~]# df -h /iscsi
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg--iscsi-lv--iscsi
                      500M  259M  242M  52% /iscsi

gfs2_grow runs against the mounted file system and expands GFS2 to fill the enlarged device:
[root@node1 ~]# gfs2_grow -v /iscsi

[root@node1 ~]# df -h /iscsi/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg--iscsi-lv--iscsi
                      4.4G  259M  4.2G   6% /iscsi


7. Add cluster node node3
    Update the target configuration on storage nodes node4, node5 and node6, then discover and log in from node3 and set up the udev storage aliases
[root@node4 ~]# vim /etc/tgt/targets.conf    (add node3's address to each target's ACL)

    initiator-address 172.16.1.1
    initiator-address 172.16.1.2
    initiator-address 172.16.1.3

Reload the target configuration (repeat the edit and reload on node5 and node6 as well):
[root@node4 ~]# tgt-admin --update ALL --force
[root@node4 ~]# tgt-admin --show

[root@node3 ~]# yum install iscsi-initiator-utils
[root@node3 ~]# service iscsi start
[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.4:3260
[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.5:3260
[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.6:3260

[root@node3 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node4.target1 -l
[root@node3 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node5.target1 -l
[root@node3 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node6.target1 -l


[root@node3 ~]# scp node1:/etc/udev/rules.d/80-iscsi.rules /etc/udev/rules.d/
[root@node3 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@node3 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Feb 29 02:26 node4 -> ../sdb
lrwxrwxrwx 1 root root 6 Feb 29 02:25 node5 -> ../sdc
lrwxrwxrwx 1 root root 6 Feb 29 02:25 node6 -> ../sdd

    Join node3 to the cluster and mount the storage
[root@node3 ~]# pvscan 

[root@node3 ~]# vgchange -ay vg-iscsi

[root@node3 ~]# yum install gfs2-utils kmod-gfs
[root@node3 ~]# mkdir /iscsi
[root@node3 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: gfs_controld not running
/sbin/mount.gfs2: error mounting lockproto lock_dlm

===================================================================
The mount fails because node3 is not yet a cluster member, so gfs_controld (started by cman) is not running. Add node3 to the cluster configuration on node1:
[root@node1 ~]# vim /etc/cluster/cluster.conf 
[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster config_version="2" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node3.uplooking.com" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
(Note: config_version above is still 2, so the running cluster will not pick up the change.)
[root@node1 ~]# scp /etc/cluster/cluster.conf node3:/etc/cluster/

[root@node3 ~]# yum install cman openais
[root@node3 ~]# ls /etc/cluster/
cluster.conf
[root@node3 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... failed
cman not started: Can't find local node name in cluster.conf /usr/sbin/cman_tool: aisexec daemon didn't start
                                                           [FAILED]
[root@node3 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster config_version="2" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>


===================================================================================

As the cat on node3 shows, its cluster.conf is still the old two-node version, and cman cannot find node3 in it. Edit the file on node1 again, this time bumping config_version, and push it to the running cluster with ccs_tool:

[root@node1 ~]# vim /etc/cluster/cluster.conf

[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node3.uplooking.com" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>

[root@node1 ~]# ccs_tool update /etc/cluster/cluster.conf 
Config file updated from version 2 to 3

Update complete.

[root@node3 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... failed
cman not started: two_node set but there are more than 2 nodes /usr/sbin/cman_tool: aisexec daemon didn't start
                                                           [FAILED]
================================================================================================


With three nodes, the two_node="1" special case no longer applies; remove it and bump config_version again:

[root@node1 ~]# vim /etc/cluster/cluster.conf
[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="4" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node3.uplooking.com" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" />
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
[root@node1 ~]# ccs_tool update /etc/cluster/cluster.conf
Config file updated from version 3 to 4

Update complete.
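
Whether the running members picked up the new version can be checked on each node:

# Every member should now report config version 4
cman_tool version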

[root@node3 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@node3 ~]# cman_tool status
Version: 6.2.0
Config Version: 4
Cluster Name: iscsi_cluster
Cluster Id: 26292
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 3
Expected votes: 1
Total votes: 3
Quorum: 2  
Active subsystems: 7
Flags: Dirty 
Ports Bound: 0  
Node name: node3.uplooking.com
Node ID: 3
Multicast addresses: 239.192.102.27 
Node addresses: 172.16.1.3 
==================================================================


[root@node3 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
/sbin/mount.gfs2: error mounting /dev/mapper/vg--iscsi-lv--iscsi on /iscsi: Invalid argument

[root@node3 ~]# cat /var/log/messages 

Feb 29 02:45:54 node3 kernel: GFS2: fsid=: Trying to join cluster "lock_dlm", "iscsi_cluster:table1"
Feb 29 02:45:54 node3 kernel: dlm: Using TCP for communications
Feb 29 02:45:54 node3 kernel: dlm: got connection from 1
Feb 29 02:45:54 node3 kernel: dlm: got connection from 2
Feb 29 02:45:54 node3 kernel: GFS2: fsid=iscsi_cluster:table1.2: Joined cluster. Now mounting FS...
Feb 29 02:45:55 node3 kernel: GFS2: fsid=iscsi_cluster:table1.2: can't mount journal #2
Feb 29 02:45:55 node3 kernel: GFS2: fsid=iscsi_cluster:table1.2: there are only 2 journals (0 - 1)

[root@node1 ~]# gfs2_tool journals /iscsi
journal1 - 128MB
journal0 - 128MB
2 journal(s) found.

Each node that mounts GFS2 needs its own journal, and the file system was created with only two; add a third from a node where it is mounted:
[root@node1 ~]# gfs2_jadd -j 1 /iscsi
Filesystem:            /iscsi
Old Journals           2
New Journals           3
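
Re-running the journal listing from any mounted node should confirm the addition:

# There should now be three journals (journal0 - journal2)
gfs2_tool journals /iscsi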

[root@node3 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
[root@node3 ~]# cat /iscsi/file1 
iscsi test
