MySQL 5.7 Master-Master + Keepalived: Test and Verification

Environment

Software versions:
CentOS 7.4-1708
MySQL 5.7.29
Keepalived 2.0.10

Hostnames and IPs:
node1  192.168.183.102  (mysql1)
node2  192.168.183.103  (mysql2)
VIP    192.168.183.200  (virtual IP)

MySQL Installation

Installation steps:
https://blog.csdn.net/zz_aiytag/article/details/89917832

Install on both nodes; the procedure is the same on each.
MySQL installs to /var/lib/mysql by default.
The configuration file is /etc/my.cnf; the default contents are:

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Setting Up Master-Master Replication

Modify the configuration

node1:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

# enable the MySQL binlog
log-bin=mysql-bin
# binlog logging format
binlog_format=mixed
# unique server id (use 2 on the other node)
server-id=1
# character set
character-set-server=utf8
# starting value for auto-increment columns (use 2 on the other node)
auto_increment_offset=1
# step each auto-increment value advances by
auto_increment_increment=2
# set the following as needed
# logs-slave-updates
# expire_logs_days = 10  (default: 0)
# max_binlog_size = 100M  (default: 1G)
# sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
# log_bin_trust_function_creators=0  (default: 0)
[client]
# character set
default-character-set=utf8

node2's configuration is identical to node1's except for the server-id and auto_increment_offset values.
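For reference, node2's /etc/my.cnf would then contain these two differing lines (everything else exactly as on node1):

```
server-id=2
auto_increment_offset=2
```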

After modifying the configuration, restart the mysql service on both nodes.

[root@worker opt]# systemctl restart mysqld
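The auto_increment_offset/auto_increment_increment pair above exists so the two masters never hand out the same auto-increment id: node1 generates 1, 3, 5, … and node2 generates 2, 4, 6, …. A minimal shell sketch of the resulting sequences (the `gen_ids` helper is illustrative, not part of MySQL):

```shell
# Compute the ids a node would assign: offset + n * increment.
gen_ids() {  # $1=auto_increment_offset  $2=auto_increment_increment  $3=count
  local out="" n
  for ((n = 0; n < $3; n++)); do
    out+="$(($1 + n * $2)) "
  done
  echo "${out% }"
}

echo "node1: $(gen_ids 1 2 3)"   # node1: 1 3 5
echo "node2: $(gen_ids 2 2 3)"   # node2: 2 4 6
```

Because the two sequences interleave without overlapping, inserts on either master replicate to the other without primary-key collisions.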

Granting Replication Privileges

Case 1: node1 as master, node2 as slave
First, on node1:

# the user is created automatically by the grant
mysql> grant replication slave,replication client on *.* to 'sync'@'%' identified by 'bigdata';
Query OK, 0 rows affected, 1 warning (0.00 sec)
# check node1's master status
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      454 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
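When scripting this step, the file and position can be pulled out non-interactively. The sketch below assumes the tab-separated output of `mysql -N -e 'show master status'` piped in; a saved sample line matching the table above is used here instead of a live server:

```shell
# Extract the File and Position columns from tab-separated master-status output.
parse_master_status() {
  awk -F'\t' '{ print $1, $2 }'
}

sample=$(printf 'mysql-bin.000001\t454\t\t\t')
parse_master_status <<<"$sample"   # mysql-bin.000001 454
```

The two values are exactly what the `change master to` statement on the slave needs.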

Then, on node2:

mysql> change master to master_host='192.168.183.102',master_user='sync',master_password='bigdata',master_log_file='mysql-bin.000001',master_log_pos=454;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.183.102
                  Master_User: sync
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 454
               Relay_Log_File: worker-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Slave_IO_Running and Slave_SQL_Running both showing Yes means the setup succeeded.
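That check is easy to automate. The sketch below counts the two Yes lines in `show slave status\G` output; a saved sample is parsed here, while on a live node you would pipe in `mysql -e 'show slave status\G'` instead:

```shell
# Succeed only when both replication threads report Yes on stdin.
slave_ok() {
  [ "$(grep -c -E 'Slave_(IO|SQL)_Running: Yes')" -eq 2 ]
}

sample='             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes'
if printf '%s\n' "$sample" | slave_ok; then echo healthy; else echo broken; fi
```

Such a check could later feed a monitoring system, since TCP_CHECK alone (used below for keepalived) only proves the port answers, not that replication is running.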

Case 2: node2 as master, node1 as slave
First, on node2:

mysql> grant replication slave,replication client on *.* to 'sync'@'%' identified by 'bigdata';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      462 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Then, on node1:

mysql> change master to master_host='192.168.183.103',master_user='sync',master_password='bigdata',master_log_file='mysql-bin.000001',master_log_pos=462;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.183.103
                  Master_User: sync
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 462
               Relay_Log_File: master-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Testing

Create the sync01 database on node1:

mysql> create database sync01;
Query OK, 1 row affected (0.01 sec)

Check the databases on node2:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sync01             |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

(One issue remained here: with firewalld stopped, the two databases stayed in sync, but with firewalld running, did a rule need to be added?)

Resolved:
With the firewall running, the mysql service must be allowed through, using the firewall-cmd tool:

# check the firewall-cmd version
[root@master services]# firewall-cmd --version
0.4.4.4
# list the currently allowed services
[root@master services]# firewall-cmd --list-service
ssh dhcpv6-client
# allow the mysql service permanently
[root@master services]# firewall-cmd --add-service=mysql --permanent
success
# reload to apply; no restart needed
[root@master services]# firewall-cmd --reload
success
# list the allowed services again
[root@master services]# firewall-cmd --list-service
ssh dhcpv6-client mysql
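The verification can also be scripted; the sketch below checks that mysql appears in the allowed list, using a saved sample of the `firewall-cmd --list-service` output above (on a real host, substitute the command itself):

```shell
# One token per line, then match "mysql" exactly.
allowed='ssh dhcpv6-client mysql'
echo "$allowed" | tr ' ' '\n' | grep -qx mysql && echo "mysql allowed"
```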

Keepalived Installation

Download the Keepalived source tarball:
https://www.keepalived.org/software/keepalived-2.0.10.tar.gz
After downloading, upload it to node1 and node2. Taking node1 as the example, put it in /opt (or any directory you choose) and extract it.
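The download-and-extract step can be sketched as below; only the name derivations run here, while the commented wget/tar line is what you would actually execute on each node:

```shell
# Derive the tarball and extracted-directory names from the URL above.
url=https://www.keepalived.org/software/keepalived-2.0.10.tar.gz
tarball=${url##*/}          # strip everything up to the last slash
srcdir=${tarball%.tar.gz}   # strip the .tar.gz suffix
echo "$tarball -> /opt/$srcdir"
# cd /opt && wget "$url" && tar -zxf "$tarball"   # run this on each node
```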

Configuring the yum repository

cd /etc/yum.repos.d/
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bakup        # back up the original repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo              # fetch the Aliyun repo file with wget
yum clean all                                   # clear the cache
yum makecache

Install the build dependencies for Keepalived:

yum install -y openssl-devel popt-devel  curl gcc  libnl3-devel net-snmp-devel  libnfnetlink-devel

Enter the source directory, then configure, compile, and install.

cd /opt/keepalived-2.0.10
[root@master keepalived-2.0.10]# ./configure --prefix=/opt/keepalived
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
......
Keepalived configuration
------------------------
Keepalived version       : 2.0.10
Compiler                 : gcc
Preprocessor flags       :  -I/usr/include/libnl3 
Compiler flags           : -Wall -Wunused -Wstrict-prototypes -Wextra -Winit-self -g -D_GNU_SOURCE -fPIE -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -O2  
Linker flags             :  -pie
Extra Lib                :  -lcrypto  -lssl  -lnl-genl-3 -lnl-3 
Use IPVS Framework       : Yes
IPVS use libnl           : Yes
IPVS syncd attributes    : No
IPVS 64 bit stats        : No
HTTP_GET regex support   : No
fwmark socket support    : Yes
Use VRRP Framework       : Yes
Use VRRP VMAC            : Yes
Use VRRP authentication  : Yes
With ip rules/routes     : Yes
Use BFD Framework        : No
SNMP vrrp support        : No
SNMP checker support     : No
SNMP RFCv2 support       : No
SNMP RFCv3 support       : No
DBUS support             : No
SHA1 support             : No
Use Json output          : No
libnl version            : 3
Use IPv4 devconf         : No
Use libiptc              : No
Use libipset             : No
init type                : systemd
Strict config checks     : No
Build genhash            : Yes
Build documentation      : No
# compile
[root@master keepalived-2.0.10]# make
Making all in lib
make[1]: Entering directory `/opt/keepalived-2.0.10/lib'
make  all-am
make[2]: Entering directory `/opt/keepalived-2.0.10/lib'
  CC       memory.o
  CC       utils.o
utils.c: In function ‘fopen_safe’:
utils.c:792:6: warning: unused variable ‘flags’ [-Wunused-variable]
  int flags = O_NOFOLLOW | O_CREAT | O_CLOEXEC;
      ^
  CC       notify.o
  CC       timer.o
...
make[1]: Leaving directory `/opt/keepalived-2.0.10/genhash'
Making all in bin_install
make[1]: Entering directory `/opt/keepalived-2.0.10/bin_install'
make[1]: Leaving directory `/opt/keepalived-2.0.10/bin_install'
make[1]: Entering directory `/opt/keepalived-2.0.10'
  EDIT     README
make[1]: Leaving directory `/opt/keepalived-2.0.10'

############################################### install
[root@master keepalived-2.0.10]# make install
Making install in lib
make[1]: Entering directory `/opt/keepalived-2.0.10/lib'
make  install-am
make[2]: Entering directory `/opt/keepalived-2.0.10/lib'
make[3]: Entering directory `/opt/keepalived-2.0.10/lib'
make[2]: Nothing to be done for `install-exec-am'.
 /usr/bin/mkdir -p '/opt/keepalived/share/doc/keepalived'
 /usr/bin/install -c -m 644 README '/opt/keepalived/share/doc/keepalived'
make[2]: Leaving directory `/opt/keepalived-2.0.10'
make[1]: Leaving directory `/opt/keepalived-2.0.10'

The installed directory layout:

[root@master keepalived-2.0.10]# cd /opt/keepalived
[root@master keepalived]# ll
total 0
drwxr-xr-x. 2 root root 21 Jul 17 14:48 bin
drwxr-xr-x. 4 root root 41 Jul 17 14:48 etc
drwxr-xr-x. 2 root root 24 Jul 17 14:48 sbin
drwxr-xr-x. 5 root root 40 Jul 17 14:48 share

Install on node2 the same way.

Keepalived Configuration

Preliminary setup

# put the binary on the PATH so it can be run directly
[root@master keepalived]# cp /opt/keepalived/sbin/keepalived /usr/sbin/
# install the init script for boot-time startup
[root@master keepalived]# cp /opt/keepalived-2.0.10/keepalived/etc/init.d/keepalived /etc/init.d/
# copy the daemon options file
[root@master keepalived]# cp /opt/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# create the keepalived config directory and copy the config file
[root@master keepalived]# mkdir -p /etc/keepalived
[root@master keepalived]# cp /opt/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
# make the init script executable
[root@master keepalived]# chmod +x /etc/init.d/keepalived

Create the Keepalived shutdown script

[root@master keepalived]# touch /var/lib/mysql/killkeepalived.sh

#!/bin/sh  
systemctl stop keepalived

# after saving, make the script executable
[root@master mysql]# cd /var/lib/mysql
[root@master mysql]# chmod +x killkeepalived.sh

Perform the above on both node1 and node2; the script contents are identical on each.

Edit the keepalived.conf file, starting with node1:

[root@master mysql]# cd /etc/keepalived/
[root@master keepalived]# vim keepalived.conf

Contents:

! Configuration File for keepalived

global_defs {
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.183.200
    }
}

virtual_server 192.168.183.200 3306 {
    delay_loop 6
#    lb_algo wrr
#    lb_kind DR
    persistence_timeout 50
    protocol TCP
    nat_mask 255.255.255.0

    real_server 192.168.183.102 3306 {
        weight 3
        notify_down /var/lib/mysql/killkeepalived.sh
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

Then edit node2's, with the following contents:

! Configuration File for keepalived

global_defs {
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.183.200
    }
}

virtual_server 192.168.183.200 3306 {
    delay_loop 6
#    lb_algo wrr
#    lb_kind DR
    persistence_timeout 50
    protocol TCP
    nat_mask 255.255.255.0

    real_server 192.168.183.103 3306 {
        weight 3
        notify_down /var/lib/mysql/killkeepalived.sh
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

The two files differ only in the priority value and the real_server address; everything else is identical.
MySQL health is checked here via TCP_CHECK rather than a custom script.
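For comparison, a hedged sketch of what TCP_CHECK effectively does, done by hand with bash's /dev/tcp redirection (the `tcp_check` helper and its arguments are illustrative; keepalived performs this probe internally):

```shell
# Print "up" if a TCP connection to $1:$2 opens within 2 seconds, else "down".
tcp_check() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
    && echo up || echo down
}

tcp_check 127.0.0.1 1   # port 1 is almost certainly closed here
```

Note this only proves the port answers; it says nothing about replication health, which is why some setups use a script check instead.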

Starting Keepalived

Start the keepalived service on both nodes and check its status:

[root@master keepalived]# systemctl start keepalived
[root@master keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-07-18 15:45:14 CST; 18min ago
  Process: 10644 ExecStart=/opt/keepalived/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 10645 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─10645 /opt/keepalived/sbin/keepalived -D
           ├─10646 /opt/keepalived/sbin/keepalived -D
           └─10647 /opt/keepalived/sbin/keepalived -D

Jul 18 15:46:16 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:16 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:16 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:16 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master.learn.bigdata Keepalived_vrrp[10647]: (VI_1) Sending/queueing gratuitous ARPs on eth0 for 192.168.183.200
Jul 18 15:46:21 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master.learn.bigdata Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200

Check whether node1 holds the virtual IP:

[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:88:a5:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.183.102/24 brd 192.168.183.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.183.200/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe88:a535/64 scope link 
       valid_lft forever preferred_lft forever

Enable Keepalived at boot

[root@worker mysql]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Verifying VIP Failover

Tail the log /var/log/messages on both nodes at the same time:

[root@master keepalived]# tail -1000f /var/log/messages
Jul 18 15:45:20 master Keepalived_healthcheckers[10646]: TCP connection to [192.168.183.102]:tcp:3306 success.
Jul 18 15:46:15 master Keepalived_vrrp[10647]: (VI_1) Backup received priority 0 advertisement
Jul 18 15:46:16 master Keepalived_vrrp[10647]: (VI_1) Receive advertisement timeout
Jul 18 15:46:16 master Keepalived_vrrp[10647]: (VI_1) Entering MASTER STATE
Jul 18 15:46:16 master Keepalived_vrrp[10647]: (VI_1) setting VIPs.
Jul 18 15:46:16 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:16 master Keepalived_vrrp[10647]: (VI_1) Sending/queueing gratuitous ARPs on eth0 for 192.168.183.200
Jul 18 15:46:16 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:16 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:16 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:16 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master Keepalived_vrrp[10647]: (VI_1) Sending/queueing gratuitous ARPs on eth0 for 192.168.183.200
Jul 18 15:46:21 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 15:46:21 master Keepalived_vrrp[10647]: Sending gratuitous ARP on eth0 for 192.168.183.200
Jul 18 16:01:01 master systemd: Started Session 8 of user root.
Jul 18 16:01:01 master systemd: Starting Session 8 of user root.
Jul 18 16:06:45 master systemd: Reloading.

Stop the mysql service on node1, then check the virtual IP and the logs on both nodes again.
The details are not repeated here.
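That verification can be scripted: decide whether a node currently holds the VIP by parsing `ip a`-style output. A saved sample resembling node1's output above is used here; on a live node, pipe in `ip a` itself:

```shell
# Print holds-vip if the given address appears as an inet entry on stdin.
has_vip() {  # $1=vip
  grep -q "inet $1/" && echo holds-vip || echo no-vip
}

sample='    inet 192.168.183.102/24 brd 192.168.183.255 scope global eth0
    inet 192.168.183.200/32 scope global eth0'
printf '%s\n' "$sample" | has_vip 192.168.183.200   # holds-vip
```

After stopping mysqld on node1, this should report no-vip on node1 and holds-vip on node2.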

Split-Brain

In another environment, after starting the keepalived service on both nodes, checking with ip a surprisingly showed the virtual IP present on both of them.
Node 1:

[root@master services]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:48:ed:f3:76:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.45/24 brd 192.168.3.255 scope global noprefixroute em1
       valid_lft forever preferred_lft forever
    inet 192.168.3.43/32 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::b55e:98fd:5b2c:30c3/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Node 2:

[root@slave keepalived]# vim keepalived.conf 
[root@slave keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:48:ed:f3:61:38 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.46/24 brd 192.168.3.255 scope global noprefixroute em1
       valid_lft forever preferred_lft forever
    inet 192.168.3.43/32 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::f764:b214:bfa1:da7d/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Node 1's log:

Aug  1 17:31:27 localhost Keepalived_vrrp[200826]: (VI_1) Receive advertisement timeout
Aug  1 17:31:27 localhost Keepalived_vrrp[200826]: (VI_1) Entering MASTER STATE
Aug  1 17:31:27 localhost Keepalived_vrrp[200826]: (VI_1) setting VIPs.
Aug  1 17:31:27 localhost Keepalived_vrrp[200826]: Sending gratuitous ARP on em1 for 192.168.3.43

Node 2's log; in other words, both nodes claim to be master:

Aug  1 17:33:00 localhost Keepalived_vrrp[97161]: VRRP sockpool: [ifindex(2), family(IPv4), proto(112), unicast(0), fd(10,11)]
Aug  1 17:33:04 localhost Keepalived_vrrp[97161]: (VI_1) Receive advertisement timeout
Aug  1 17:33:04 localhost Keepalived_vrrp[97161]: (VI_1) Entering MASTER STATE
Aug  1 17:33:04 localhost Keepalived_vrrp[97161]: (VI_1) setting VIPs.
Aug  1 17:33:04 localhost Keepalived_vrrp[97161]: Sending gratuitous ARP on em1 for 192.168.3.43

Solution:
The firewall is most likely filtering the VRRP advertisements (multicast to 224.0.0.18), so the backup never hears the master and claims the VIP for itself. Either stop the firewall, or add a rule. Run the following on both nodes:

# change em1 to match your own interface
[root@master services]# firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface em1 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
success
[root@master services]# firewall-cmd --reload
success

Check ip a on both nodes again: the problem is gone.
