Building a dual-master, high-availability Nginx load-balancing cluster with Keepalived

Lab environment:

Two Nginx proxies (the dual masters; each needs two NICs: ens224 on the internal network, ens192 on the external network) and two web servers (the load-balanced back ends).

node1 proxy_server  external ens192: 192.168.170.8/24   internal ens224: 192.168.70.8/24
node2 proxy_server  external ens192: 192.168.170.9/24   internal ens224: 192.168.70.9/24
node3 Realserver1   internal ens224: 192.168.70.10/24
node4 Realserver2   internal ens224: 192.168.70.11/24

Lab topology: (diagram not reproduced here)

Note: to avoid skewing the results, stop iptables and disable SELinux on all nodes before starting, and synchronize their clocks with NTP.
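The prep note above can be sketched as a small script. This is a hedged sketch for CentOS 7: every command is guarded so it degrades to a no-op on other systems, and pool.ntp.org is only a placeholder for whatever NTP server the lab actually uses.

```shell
#!/bin/sh
# Lab prep sketch (assumes CentOS 7; pool.ntp.org is a placeholder server).
# Each step is guarded with "|| true" so it degrades to a no-op elsewhere.

# Stop the packet filter for the duration of the lab
{ command -v systemctl >/dev/null 2>&1 && systemctl stop firewalld; } 2>/dev/null || true
{ command -v iptables  >/dev/null 2>&1 && iptables -F; }              2>/dev/null || true

# Put SELinux into permissive mode (needs root; harmless if it fails)
{ command -v setenforce >/dev/null 2>&1 && setenforce 0; }            2>/dev/null || true

# One-shot clock sync; substitute the real NTP server for pool.ntp.org
{ command -v ntpdate >/dev/null 2>&1 && ntpdate -t 2 pool.ntp.org; }  2>/dev/null || true

prep_done=1
echo "prep finished"
```

For a persistent setup you would also disable the firewalld unit and set SELINUX=disabled in /etc/selinux/config; the sketch only covers the one-off lab prep.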

Procedure:

I. Configure the IP addresses of the internal and external NICs

1. Configure node1's IPs
[root@node1 network-scripts]# vi ifcfg-ens192 
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens192"
UUID="11910310-b000-4a81-bd33-ca6e2079f9c7"
DEVICE="ens192"
ONBOOT="yes"
IPADDR="192.168.170.8"
PREFIX="24"
GATEWAY="192.168.170.254"
DNS1="8.8.8.8"
IPV6_PRIVACY="no"
[root@node1 ~]#  ip addr add dev ens224 192.168.70.8/24

2. Configure node2's IPs
[root@node2 network-scripts]# vi ifcfg-ens192 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=e1bc4f4e-1515-4cb5-ae4e-a5b0670c96f7
DEVICE=ens192
ONBOOT=yes
IPADDR=192.168.170.9
PREFIX=24
GATEWAY=192.168.170.254
DNS1=8.8.8.8
IPV6_PRIVACY=no
[root@node2 ~]# ip addr add dev ens224 192.168.70.9/24

3. Configure node3's IP
[root@node3 ~]#  ip addr add dev ens224 192.168.70.10/24

4. Configure node4's IP
[root@node4 ~]#  ip addr add dev ens224 192.168.70.11/24

Note: addresses added with "ip addr add" do not survive a reboot; create matching ifcfg-ens224 files if the internal addresses need to persist.

II. Configure the web service (node3 and node4 are configured identically; only the default page content differs, so their responses can be told apart)

node3 192.168.70.10

1. Install Apache
[root@node3 ~]# yum -y install httpd

2. Create the default page
[root@node3 ~]# vim /var/www/html/index.html
<h1>Realserver1</h1>

3. Start Apache
[root@node3 ~]# systemctl restart httpd

Client test succeeds:
[root@node6 ~]# curl http://192.168.70.10
<h1>Realserver1</h1>
[root@node6 ~]# 

node4 192.168.70.11

1. Install Apache
[root@node4 ~]# yum -y install httpd

2. Create the default page
[root@node4 ~]# vim /var/www/html/index.html
<h1>Realserver2</h1>

3. Start the httpd service
[root@node4 ~]# systemctl start httpd.service

Client test succeeds:
[root@node6 ~]# curl http://192.168.70.11
<h1>Realserver2</h1>
[root@node6 ~]# 

III. Configure the sorry_server (this runs on the Nginx proxy hosts themselves; both proxies get the same configuration, changing only the default page content to tell them apart)

1. Install Apache
[root@node1 ~]# yum -y install httpd

2. Create the default page
[root@node1 ~]# vim /var/www/html/index.html
<h1>sorry_server1</h1>

3. Change the listening port to 8080 so it does not conflict with the port Nginx uses, then restart httpd
[root@node1 ~]# vim /etc/httpd/conf/httpd.conf
Listen 8080
[root@node1 ~]# systemctl restart httpd

IV. Configure the proxy (both Nginx proxies get the same configuration)

Only node1 proxy_server (192.168.170.8/24) is shown here; node2 is configured the same way.

1. Install Nginx
[root@node1 ~]# yum -y install nginx

2. Define an upstream server group inside the http{} block:
[root@node1 ~]# vim /etc/nginx/nginx.conf
http {
    upstream websrvs {
    server 192.168.70.10:80;
    server 192.168.70.11:80;
    server 127.0.0.1:8080 backup;
    }
}

3. Reference the group from the location{} block of a server{} block:
[root@node1 ~]# vim /etc/nginx/conf.d/default.conf
server {
    location / {
    proxy_pass http://websrvs;
    index index.html;
    }
}
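What this upstream does at request time can be modeled in a few lines of shell: requests rotate round-robin over the healthy primaries, and the backup (sorry) server is chosen only when no primary is up. pick_server below is a toy sketch, not nginx's actual code.

```shell
# Toy model of the upstream's selection rule: round-robin over the healthy
# primaries; the backup (sorry server) is used only when none is healthy.
backup="127.0.0.1:8080"

pick_server() {      # $1 = request number; remaining args = healthy primaries
    n=$1; shift
    if [ "$#" -eq 0 ]; then
        REPLY_SERVER=$backup          # every primary is down -> sorry server
        return 0
    fi
    i=$(( n % $# ))                   # round-robin index
    while [ "$i" -gt 0 ]; do shift; i=$((i - 1)); done
    REPLY_SERVER=$1
}

pick_server 0 192.168.70.10:80 192.168.70.11:80; echo "$REPLY_SERVER"  # → 192.168.70.10:80
pick_server 1 192.168.70.10:80 192.168.70.11:80; echo "$REPLY_SERVER"  # → 192.168.70.11:80
pick_server 2; echo "$REPLY_SERVER"                                    # → 127.0.0.1:8080
```

This is why the sorry page only ever appears after both real servers have failed, matching the behaviour verified at the end of the article.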
4. Check the configuration syntax, start the service, and verify it is listening
[root@node1 nginx]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node1 nginx]# systemctl start nginx
[root@node1 nginx]# ss -tunlp | grep 80
tcp    LISTEN     0      128       *:80                    *:*                   users:(("nginx",pid=31466,fd=6),("nginx",pid=31465,fd=6),("nginx",pid=31464,fd=6),("nginx",pid=31463,fd=6))
tcp    LISTEN     0      128      :::80                   :::*                   users:(("nginx",pid=31466,fd=7),("nginx",pid=31465,fd=7),("nginx",pid=31464,fd=7),("nginx",pid=31463,fd=7))
[root@node1 nginx]# 

Client test (from node6, against node1's external address; the VIP is not configured until the Keepalived section below):
[root@node6 nginx]# for i in {1..20}; do curl http://192.168.170.8; done
<h1>Realserver1</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
[root@node6 nginx]# 

V. Configure Keepalived

Install and configure on node1

1. Install Keepalived
[root@node1 ~]# yum -y install keepalived

2. Edit node1's configuration file /etc/keepalived/keepalived.conf as follows:
[root@node1 keepalived]# vi keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from root@localhost
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30
   router_id node1
   vrrp_mcast_group4 224.0.100.80
}
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -10
    fall 1
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.170.88/24 dev ens192 label ens192:0
    }
    track_script {
        chk_down
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
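Two details of this file deserve a closer look. First, chk_down is just a file test: while /etc/keepalived/down exists the script exits 1, and after one failed run (fall 1) keepalived subtracts the weight, 10, from this node's priority; removing the file restores it after one success (rise 1). Second, that weight must exceed the 100 − 98 priority gap, or a failed check would never trigger failover. Both can be checked with a scratch sketch (chk_down here is a local stand-in using a temp path, not the real config):

```shell
# 1) Exercise the chk_down test against a scratch path instead of
#    /etc/keepalived/down (no root needed).
flag=$(mktemp -u)                 # a path that does not exist yet
chk_down() { [ -f "$flag" ] && return 1 || return 0; }

if chk_down; then echo "no flag file: check passes, full priority kept"; fi
touch "$flag"
if ! chk_down; then echo "flag file present: check fails"; fi
rm -f "$flag"

# 2) Failover arithmetic: node1 starts at 100, node2 at 98, and a failing
#    chk_down subtracts the weight, 10.
node1_prio=100; node2_prio=98; weight=10
effective=$((node1_prio - weight))
echo "node1 effective priority while the down file exists: $effective"
if [ "$effective" -lt "$node2_prio" ]; then
    echo "90 < 98, so node2 preempts and takes the VIP"
fi
```

Touching /etc/keepalived/down on the master is therefore a convenient way to force a failover by hand during maintenance.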
Create the Keepalived state-change notification script:
[root@node1 keepalived]# vi notify.sh 
#!/bin/bash
#
contact='root@localhost'

notify() {
        local mailsubject="$(hostname) to be $1, vip floating"
        local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
        echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
        notify master
        systemctl start nginx
        ;;
backup)
        notify backup
        systemctl stop nginx
        ;;
fault)
        notify fault
        systemctl stop nginx
        ;;
*)
        echo "Usage: $(basename $0) {master|backup|fault}"
        exit 1
        ;;
esac
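The script's branching can be dry-run without root or a mail setup by stubbing out the side effects; notify_dispatch below is a hypothetical stand-in that only echoes what notify.sh would do in each state.

```shell
# Dry-run stand-in for notify.sh: the mail and systemctl side effects are
# replaced by echoes so the case dispatch can be checked anywhere.
notify_dispatch() {
    case $1 in
    master) echo "mail 'to be master'; systemctl start nginx" ;;
    backup) echo "mail 'to be backup'; systemctl stop nginx" ;;
    fault)  echo "mail 'fault'; systemctl stop nginx" ;;
    *)      echo "Usage: notify.sh {master|backup|fault}"; return 1 ;;
    esac
}

notify_dispatch master            # → mail 'to be master'; systemctl start nginx
notify_dispatch bogus || true     # → Usage: notify.sh {master|backup|fault}
```

Note the design: nginx is started only on the node that currently holds MASTER state, which is what makes the later health check (section VII) necessary on a freshly promoted node.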

node2 gets the same configuration, with its own router_id, BACKUP state, and a lower priority:
[root@node2 keepalived]# vi keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from root@localhost
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30
   router_id node2
   vrrp_mcast_group4 224.0.100.80
}
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -10
    fall 1
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.170.88/24 dev ens192 label ens192:0
    }
    track_script {
        chk_down
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

Create the same notification script on node2:
[root@node2 keepalived]# vi notify.sh 
#!/bin/bash
#
contact='root@localhost'

notify() {
        local mailsubject="$(hostname) to be $1, vip floating"
        local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
        echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
        notify master
        systemctl start nginx
        ;;
backup)
        notify backup
        systemctl stop nginx
        ;;
fault)
        notify fault
        systemctl stop nginx
        ;;
*)
        echo "Usage: $(basename $0) {master|backup|fault}"
        exit 1
        ;;
esac

VI. Simulate a failure and verify the result

Stop the keepalived service on the master node:
[root@node1 keepalived]# systemctl stop keepalived

On the backup node, stop nginx first, then keepalived:
[root@node2 keepalived]# systemctl stop nginx
[root@node2 keepalived]# ss -tunlp |grep 80
[root@node2 keepalived]# systemctl stop keepalived
Start keepalived on the backup node again and check its status:
[root@node2 keepalived]# systemctl start keepalived
[root@node2 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-27 10:02:13 CST; 10s ago
  Process: 807 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 808 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─808 /usr/sbin/keepalived -D
           ├─809 /usr/sbin/keepalived -D
           └─810 /usr/sbin/keepalived -D

Apr 27 10:02:17 node2 Keepalived_vrrp[810]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 10:02:17 node2 Keepalived_vrrp[810]: Opening script file /etc/keepalived/notify.sh
Apr 27 10:02:18 node2 Keepalived_vrrp[810]: VRRP_Script(chk_nginx) succeeded
Apr 27 10:02:19 node2 Keepalived_vrrp[810]: VRRP_Instance(VI_1) Changing effective priority from 93 to 98
Apr 27 10:02:22 node2 Keepalived_vrrp[810]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 10:02:22 node2 Keepalived_vrrp[810]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens192 for 192.168.170.88
Apr 27 10:02:22 node2 Keepalived_vrrp[810]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 10:02:22 node2 Keepalived_vrrp[810]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 10:02:22 node2 Keepalived_vrrp[810]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 10:02:22 node2 Keepalived_vrrp[810]: Sending gratuitous ARP on ens192 for 192.168.170.88
Check the virtual IP address:
[root@node2 keepalived]# ifconfig ens192:0
ens192:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.88  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:ba:d9:07  txqueuelen 1000  (Ethernet)
The nginx processes have been started (by notify.sh when this node became master):
[root@node2 keepalived]# ps -aux | grep nginx
root       861  0.0  0.0 125116  2268 ?        Ss   10:02   0:00 nginx: master process /usr/sbin/nginx
nginx      862  0.0  0.0 125496  3176 ?        S    10:02   0:00 nginx: worker process
nginx      863  0.0  0.0 125496  3176 ?        S    10:02   0:00 nginx: worker process
nginx      864  0.0  0.0 125496  3176 ?        S    10:02   0:00 nginx: worker process
root       972  0.0  0.0 112708   984 pts/1    S+   10:02   0:00 grep --color=auto nginx

Client test:
[root@node5 ~]# for i in {1..20}; do curl http://192.168.170.88; done
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
[root@node5 ~]# 

VII. Have Keepalived health-check Nginx

Add this on both the master (node1) and the backup (node2); node1 is shown here. Define a chk_nginx script and reference it from the existing track_script block of vrrp_instance VI_1, alongside chk_down:
[root@node1 keepalived]# vi keepalived.conf
vrrp_script chk_nginx {
    script "killall -0 nginx && exit 0 || exit 1"
    interval 1
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    track_script {
        chk_nginx
    }
}
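killall -0 nginx delivers no signal at all: signal 0 only asks the kernel whether a matching process exists, so the script exits 0 while nginx runs and 1 once it dies, at which point the weight of -5 drops this node below its peer (e.g. 100 − 5 = 95 < 98). The same probe, demonstrated with kill -0 on a throwaway process:

```shell
# Signal-0 liveness probe demonstrated on a throwaway background process.
sleep 30 &                        # stand-in for the nginx master process
pid=$!

if kill -0 "$pid" 2>/dev/null; then echo "alive"; fi   # process exists

kill "$pid" 2>/dev/null || true
wait "$pid" 2>/dev/null || true   # reap it so the PID is truly gone

if ! kill -0 "$pid" 2>/dev/null; then echo "gone"; fi  # probe now fails
```

This is why stopping nginx by hand (as in the failover test below) is enough to move the VIP even though keepalived itself keeps running.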

Verify that keepalived still behaves correctly

Stop keepalived on both Nginx proxies:
[root@node1 keepalived]# systemctl stop keepalived
[root@node2 keepalived]# systemctl stop keepalived

Start keepalived on both Nginx proxies again:
[root@node2 keepalived]# systemctl start keepalived.service
[root@node2 keepalived]# ifconfig ens192:0
ens192:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.88  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:ba:d9:07  txqueuelen 1000  (Ethernet)

[root@node2 keepalived]# 

[root@node1 keepalived]# systemctl start keepalived
[root@node1 keepalived]# ifconfig ens192:0
ens192:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.88  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:50:56:a3:50:cf  txqueuelen 1000  (Ethernet)

Client test:
[root@node5 ~]# for i in {1..20}; do curl http://192.168.170.88; done
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>

Test failover:

On the master, stop the nginx service and, at the same time, start httpd:
[root@node1 keepalived]# systemctl stop nginx && systemctl start httpd

The VIP moves from the master to the backup node:
[root@node2 keepalived]# ifconfig ens192:0
ens192:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.88  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:ba:d9:07  txqueuelen 1000  (Ethernet)

[root@node2 keepalived]# 

Client test — the service still responds:
[root@node5 ~]# for i in {1..20}; do curl http://192.168.170.88; done
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>

Keepalived dual-master high availability for Nginx

Add a second VRRP instance on both keepalived nodes; node1 is BACKUP for VI_2 while node2 is its MASTER:
[root@node1 keepalived]# vi keepalived.conf
vrrp_instance VI_2 {
    state BACKUP
    interface ens192
    virtual_router_id 52
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.170.99/24 dev ens192 label ens192:1
    }
    track_script {
        chk_down
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}    
[root@node2 keepalived]# vi keepalived.conf
vrrp_instance VI_2 {
    state MASTER
    interface ens192
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.170.99/24 dev ens192 label ens192:1
    }
    track_script {
        chk_down
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
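With both instances in place, node1 normally holds 192.168.170.88 (master of VI_1) and node2 holds 192.168.170.99 (master of VI_2); if either node dies, the survivor claims both VIPs. To actually use both masters, clients must be spread across the two addresses, typically by publishing two DNS A records for the same name. A toy sketch of that alternation (vip_for_request is hypothetical, standing in for the DNS resolver's rotation):

```shell
# Toy model of spreading clients across the dual masters, as two DNS A
# records for one name would do: even/odd requests alternate VIPs.
VIP1="192.168.170.88"   # normally held by node1 (master of VI_1)
VIP2="192.168.170.99"   # normally held by node2 (master of VI_2)

vip_for_request() {     # $1 = request number; prints the VIP it lands on
    if [ $(( $1 % 2 )) -eq 0 ]; then echo "$VIP1"; else echo "$VIP2"; fi
}

for i in 0 1 2 3; do
    echo "request $i -> $(vip_for_request "$i")"
done
```

Without some such client-side spreading, the second master carries no traffic and the setup behaves like plain active/standby.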

VIII. Simulate failures and verify the dual-master results

1. Restart keepalived on both Nginx proxies

Restart keepalived on node1 and check its status:
[root@node1 keepalived]# systemctl restart keepalived
[root@node1 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-27 12:33:30 CST; 2s ago
  Process: 5837 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5838 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─5838 /usr/sbin/keepalived -D
           ├─5839 /usr/sbin/keepalived -D
           └─5840 /usr/sbin/keepalived -D

Apr 27 12:33:30 node1 Keepalived_vrrp[5840]: VRRP_Instance(VI_1) Transition to MASTER STATE
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: VRRP_Instance(VI_1) Entering MASTER STATE
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: VRRP_Instance(VI_1) setting protocol VIPs.
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens192 for 192.168.170.88
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:31 node1 Keepalived_vrrp[5840]: Opening script file /etc/keepalived/notify.sh
[root@node1 keepalived]# ifconfig ens192:0
ens192:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.88  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:50:56:a3:50:cf  txqueuelen 1000  (Ethernet)

[root@node1 keepalived]# 

Restart keepalived on node2 and check its status:
[root@node2 keepalived]#  systemctl restart keepalived
[root@node2 keepalived]#  systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-27 12:33:22 CST; 18s ago
  Process: 3045 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 3046 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─3046 /usr/sbin/keepalived -D
           ├─3047 /usr/sbin/keepalived -D
           └─3048 /usr/sbin/keepalived -D

Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens192 for 192.168.170.88
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: Sending gratuitous ARP on ens192 for 192.168.170.88
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: Opening script file /etc/keepalived/notify.sh
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 98
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: VRRP_Instance(VI_1) Entering BACKUP STATE
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: VRRP_Instance(VI_1) removing protocol VIPs.
Apr 27 12:33:30 node2 Keepalived_vrrp[3048]: Opening script file /etc/keepalived/notify.sh


[root@node2 keepalived]# ifconfig
ens192:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.99  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:ba:d9:07  txqueuelen 1000  (Ethernet)



2. Requests to 192.168.170.88 should be answered by the back-end web servers in turn:
[root@node5 ~]# for i in {1..20}; do curl http://192.168.170.88; done
<h1>Realserver1</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver2</h1>
3. Requests to 192.168.170.99 should likewise be answered by the back-end web servers in turn:
[root@node5 ~]# for i in {1..20}; do curl http://192.168.170.99; done
<h1>Realserver1</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
<h1>Realserver2</h1>
<h1>Realserver1</h1>
4. Shut down one of the back-end web servers; requests to 192.168.170.88 or 192.168.170.99 are then answered only by the remaining web server.

5. Shut down both back-end web servers; requests to 192.168.170.88 or 192.168.170.99 are then answered by the sorry_server defined as the backup server in the Nginx upstream.

6. Stop the nginx service on one Nginx proxy; the other node claims its VIP and keeps serving, so clients of 192.168.170.88 or 192.168.170.99 notice nothing.

 
