Redis Cluster, Part 3: Cluster Mode

1. Introduction

Sentinel mode satisfies the needs of most ordinary production setups and provides high availability. But once the data volume grows beyond what a single server can hold, master-slave or sentinel mode is no longer enough.
At that point the data must be sharded and stored across multiple Redis instances. Cluster mode exists precisely to solve this capacity problem.
Cluster can be seen as a combination of sentinel and master-slave mode: it provides both replication and automatic master re-election. So with three shards and two copies of each (one master plus one slave per shard), you need 6 Redis instances.
Cluster requires at least 3 masters to form a cluster, and each master needs at least one slave node.
In Cluster mode, both reads and writes are handled by the masters. Slave nodes only keep backup copies of the data; only when a master goes down is its slave promoted to master and put into service.
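The sharding itself is done with hash slots: Cluster assigns every key to one of 16384 slots via CRC16(key) mod 16384 (the CRC-16/XMODEM variant), and each master owns a range of slots. A minimal bash sketch of that computation (hash tags like `{user}` are not handled here; a real client extracts the tag first):

```shell
#!/bin/bash
# Hash-slot sketch: Redis Cluster maps a key to slot CRC16(key) % 16384,
# using the CRC-16/XMODEM polynomial (0x1021, initial value 0).
crc16() {
	local key=$1 crc=0 c i j
	for ((i = 0; i < ${#key}; i++)); do
		printf -v c '%d' "'${key:i:1}"        # ASCII code of the character
		crc=$(( (crc ^ (c << 8)) & 0xFFFF ))
		for ((j = 0; j < 8; j++)); do
			if (( crc & 0x8000 )); then
				crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
			else
				crc=$(( (crc << 1) & 0xFFFF ))
			fi
		done
	done
	echo "$crc"
}

keyslot() { echo $(( $(crc16 "$1") % 16384 )); }

keyslot "123456789"   # CRC-16/XMODEM check value is 0x31C3 -> slot 12739
```

On a live cluster the equivalent is `redis-cli -p 7001 cluster keyslot <key>`; the sketch above is only for illustrating the mapping.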
A few configuration options deserve extra explanation:

Bind address:        bind 10.2.33.*    Do not bind 127.0.0.1 or localhost, or clients will get "Connection refused" errors when redirected
Enable Cluster:      cluster-enabled yes
Cluster config file: cluster-config-file nodes_7001.conf    Redis maintains this file itself at runtime; do not edit it by hand
Node timeout:        cluster-node-timeout 15000    How long until an unresponsive node is considered down
Full slot coverage:  cluster-require-full-coverage no    Defaults to yes, which stops the entire cluster as soon as a node failure leaves any of the 16384 slots uncovered, so be sure to set it to no
Run in background:   daemonize yes
Log file:            logfile "/data/redis/7001/redis_7001.log"
Listening port:      port 7001
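Putting the directives above together, one node's file might look like the following minimal sketch (the IP, port, and paths follow this article's layout and should be adapted to your environment):

```conf
# /data/redis/7001/redis_7001.conf (sketch assembled from the directives above)
bind 10.2.33.99
port 7001
daemonize yes
logfile "/data/redis/7001/redis_7001.log"
appendonly yes
cluster-enabled yes
cluster-config-file /data/redis/7001/nodes-7001.conf
cluster-node-timeout 15000
cluster-require-full-coverage no
```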

2. Servers

10.2.33.99 7001-7012
10.2.33.98 7001-7012
10.2.33.97 7001-7012

3. Configuration Steps

#### Downloading and installing Redis is skipped here; see the earlier articles in this series if needed

cd /data/redis
cat creat_redis.sh    ### create each instance's directory & config file
#!/bin/bash
nodes="12"
port="7001"
path="/data/redis"
cd $path
for((n=0;n<$nodes;n++));
do
	dirname=$(($port+$n))
	mkdir -p $dirname
#	rm -rf  $path/$dirname
	filename="redis_"$dirname".conf"
	logname="redis_"$dirname".log"
	cp redis.conf $dirname/$filename
	ip_n=`ip addr show  |grep eth0 |grep inet |cut -c 10-19`  ### grab the local IP; adjust this for your own environment
	sed -i "s/127.0.0.1/$ip_n/g" $dirname/$filename	## listen on the host's own IP; 127.0.0.1 or localhost fails during cluster creation with "Could not connect to Redis at ip:7001: Connection refused"
	sed -i 's/daemonize no/daemonize yes/g' $dirname/$filename
	sed -i "s/port 6379/port $dirname/g" $dirname/$filename
	sed -i 's/appendonly no/appendonly yes/g' $dirname/$filename
	sed -i 's/logfile/#logfile/g' $dirname/$filename
	echo "logfile $path/$dirname/$logname" >> $dirname/$filename
	echo 'cluster-enabled yes' >> $dirname/$filename
	echo "cluster-config-file $path/$dirname/nodes-$dirname.conf" >> $dirname/$filename
	echo 'cluster-node-timeout 15000' >> $dirname/$filename
done

cd /data/
wget https://cache.ruby-lang.org/pub/ruby/2.7/ruby-2.7.0.tar.gz
tar -zxvf ruby-2.7.0.tar.gz
cd ruby-2.7.0/
./configure 
make && make install 
ln -s /usr/local/bin/ruby /usr/bin/ruby
cd /data/
wget https://rubygems.org/rubygems/rubygems-3.1.2.tgz
tar -zxvf rubygems-3.1.2.tgz 
cd rubygems-3.1.2/
ruby setup.rb
ln -s /usr/local/bin/gem /usr/bin/gem
gem install redis
#If installation fails, switch the gem source: gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/

cd /data/redis
cat start_all.sh
#!/bin/bash
nodes="12"
port="7001"
path="/data/redis"
cd $path
for((n=0;n<$nodes;n++));
do
	dirname=$(($port+$n))
	filename="redis_"$dirname".conf"
	redis-server $path/$dirname/$filename &
done

ps -ef|grep redis 
root     30133     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7001 [cluster]
root     30134     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7003 [cluster]
root     30135     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7004 [cluster]
root     30136     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7002 [cluster]
root     30137     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7005 [cluster]
root     30138     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7006 [cluster]
root     30139     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7007 [cluster]
root     30140     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7008 [cluster]
root     30141     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7011 [cluster]
root     30142     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7012 [cluster]
root     30144     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7009 [cluster]
root     30162     1  0 14:55 ?        00:00:00 redis-server 10.2.33.99:7010 [cluster]

Apply the same configuration on the other two servers, 10.2.33.98 and 10.2.33.97. Once all instances are running, create the cluster; this part is simple.

Versions before Redis 5 used the Ruby tool, which is why the two packages above were downloaded and installed:

redis-trib.rb create --replicas 1 10.2.33.99:7001 10.2.33.99:7002 10.2.33.99:7003 10.2.33.99:7004 10.2.33.99:7005 10.2.33.99:7006 10.2.33.99:7007 10.2.33.99:7008 10.2.33.99:7009 10.2.33.99:7010 10.2.33.99:7011 10.2.33.99:7012 10.2.33.98:7001 10.2.33.98:7002 10.2.33.98:7003 10.2.33.98:7004 10.2.33.98:7005 10.2.33.98:7006 10.2.33.98:7007 10.2.33.98:7008 10.2.33.98:7009 10.2.33.98:7010 10.2.33.98:7011 10.2.33.98:7012 10.2.33.97:7001 10.2.33.97:7002 10.2.33.97:7003 10.2.33.97:7004 10.2.33.97:7005 10.2.33.97:7006 10.2.33.97:7007 10.2.33.97:7008 10.2.33.97:7009 10.2.33.97:7010 10.2.33.97:7011 10.2.33.97:7012

This article uses redis_version:6.2.6, so the cluster can be created directly with redis-cli:

redis-cli --cluster create  10.2.33.99:7001 10.2.33.99:7002 10.2.33.99:7003 10.2.33.99:7004 10.2.33.99:7005 10.2.33.99:7006 10.2.33.99:7007 10.2.33.99:7008 10.2.33.99:7009 10.2.33.99:7010 10.2.33.99:7011 10.2.33.99:7012 10.2.33.98:7001 10.2.33.98:7002 10.2.33.98:7003 10.2.33.98:7004 10.2.33.98:7005 10.2.33.98:7006 10.2.33.98:7007 10.2.33.98:7008 10.2.33.98:7009 10.2.33.98:7010 10.2.33.98:7011 10.2.33.98:7012 10.2.33.97:7001 10.2.33.97:7002 10.2.33.97:7003 10.2.33.97:7004 10.2.33.97:7005 10.2.33.97:7006 10.2.33.97:7007 10.2.33.97:7008 10.2.33.97:7009 10.2.33.97:7010 10.2.33.97:7011 10.2.33.97:7012  --cluster-replicas 1

Note: --cluster-replicas 1 means one replica per master node, i.e. one master, one slave.
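As a sanity check on the numbers: redis-cli divides the node list into masters and replicas, so 36 instances with --cluster-replicas 1 yield 18 masters, each owning roughly 16384/18 slots. A small arithmetic sketch, with node counts taken from this article's setup:

```shell
#!/bin/bash
# How node count and --cluster-replicas translate into masters and
# slots per master. 16384 does not divide evenly by 18, so a few
# masters get one extra slot (hence the 910/911-slot ranges).
nodes=36        # 3 servers x 12 instances
replicas=1      # --cluster-replicas 1
masters=$(( nodes / (replicas + 1) ))
slots_per_master=$(( 16384 / masters ))
remainder=$(( 16384 % masters ))
echo "$masters masters, $slots_per_master or $((slots_per_master + 1)) slots each ($remainder masters get one extra)"
```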

Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..............
>>> Performing Cluster Check (using node 10.2.33.99:7001)
M: 16199e4522ec3e3917e922106ef952e625101108 10.2.33.99:7001
   slots:[0-909] (910 slots) master
   1 additional replica(s)
S: 08837dac086a372c81f08ac0502182903d024268 10.2.33.99:7007
   slots: (0 slots) slave
   replicates 449c981a6bcf2b56e544376a0fb0e8087a7172f7
M: ff3f65e20d28268f04a3ffb03d27d69292230ee4 10.2.33.98:7004
   slots:[9102-10011] (910 slots) master
   1 additional replica(s)
M: c5537d8d2f98e52bcb4e490198d5f3c547f38453 10.2.33.98:7003
   slots:[6372-7281] (910 slots) master
   1 additional replica(s)
S: d6c31f4cba9b83a067d275afb6c347f0408e3e4f 10.2.33.98:7011
   slots: (0 slots) slave
   replicates 02ee3438f7b6dba90218f19d6e333b57284b617a
M: 589578cc5d7ac2a3855104fc04a50dfab218e34c 10.2.33.98:7001
   slots:[910-1819] (910 slots) master
   1 additional replica(s)
S: 035b557eba9a7e1651a7573d65941d8cb8bf0c81 10.2.33.99:7008
   slots: (0 slots) slave
   replicates ee14b2481dbb8cc5cc14ec018b13506514ec9ab0
M: 449c981a6bcf2b56e544376a0fb0e8087a7172f7 10.2.33.97:7006
   slots:[15474-16383] (910 slots) master
   1 additional replica(s)
M: aac971872eb32bebd35c67712d13a12a457fdf46 10.2.33.99:7003
   slots:[5461-6371] (911 slots) master
   1 additional replica(s)
M: e007b6251e39dda9826131823e26c9e6c7e39076 10.2.33.98:7002
   slots:[3641-4550] (910 slots) master
   1 additional replica(s)
M: c9ffbaf72cf3b7c527f346f486c2cd04f6aab5c7 10.2.33.99:7002
   slots:[2731-3640] (910 slots) master
   1 additional replica(s)
S: aaa638883904338e934f5f17366ace654e4f290c 10.2.33.98:7012
   slots: (0 slots) slave
   replicates 6704e73a119afac449423b21bccb27eb51b89dc3
M: 36638350b7f02cff50256b7022c6449389ee3f7c 10.2.33.98:7006
   slots:[14564-15473] (910 slots) master
   1 additional replica(s)
S: 74bc6a0d4f852e51d9e1e469050eaa6a791c5091 10.2.33.97:7009
   slots: (0 slots) slave
   replicates c5537d8d2f98e52bcb4e490198d5f3c547f38453
M: ee14b2481dbb8cc5cc14ec018b13506514ec9ab0 10.2.33.97:7001
   slots:[1820-2730] (911 slots) master
   1 additional replica(s)
M: ccdb1fa7b62d05bc13eef95079595a8bf35adbf2 10.2.33.98:7005
   slots:[11833-12742] (910 slots) master
   1 additional replica(s)
S: 01e76676422bf795772fefa429fae245a1059586 10.2.33.98:7008
   slots: (0 slots) slave
   replicates c9ffbaf72cf3b7c527f346f486c2cd04f6aab5c7
M: 48f2496e9b796510e102004f45a10b695469dc03 10.2.33.97:7004
   slots:[10012-10922] (911 slots) master
   1 additional replica(s)
S: c484dd79a9a589fef3c8ae56abc34c72c1e88ca8 10.2.33.99:7010
   slots: (0 slots) slave
   replicates 55b2f51633338e2e5b53a143949a5ed73dc9bf2a
M: 55b2f51633338e2e5b53a143949a5ed73dc9bf2a 10.2.33.97:7003
   slots:[7282-8191] (910 slots) master
   1 additional replica(s)
S: 6ca01a7a10dfcba441981e408e5a86a8163dca9e 10.2.33.98:7010
   slots: (0 slots) slave
   replicates c71c180dd2616d8cb778434ed4bbf898249471aa
S: 68dc65a324759a383e260f4d0e6ca2203a1a5415 10.2.33.97:7010
   slots: (0 slots) slave
   replicates ff3f65e20d28268f04a3ffb03d27d69292230ee4
M: ee6dba34f20d7e0983f6e8ae272b5840ceaaed60 10.2.33.97:7005
   slots:[12743-13652] (910 slots) master
   1 additional replica(s)
S: 20785e8285089d52e9c3dc8dcf9225b8d647ec1f 10.2.33.98:7009
   slots: (0 slots) slave
   replicates aac971872eb32bebd35c67712d13a12a457fdf46
S: e60e7584f3803b5215257979b7b64bbaf9bea670 10.2.33.97:7008
   slots: (0 slots) slave
   replicates e007b6251e39dda9826131823e26c9e6c7e39076
S: 4731491c19f88399437d420a074dde9372075cfd 10.2.33.98:7007
   slots: (0 slots) slave
   replicates 16199e4522ec3e3917e922106ef952e625101108
M: c71c180dd2616d8cb778434ed4bbf898249471aa 10.2.33.99:7004
   slots:[8192-9101] (910 slots) master
   1 additional replica(s)
S: 255891184b7c76b1ebc7e5a0b14b8e569e52462e 10.2.33.99:7011
   slots: (0 slots) slave
   replicates 48f2496e9b796510e102004f45a10b695469dc03
M: 02ee3438f7b6dba90218f19d6e333b57284b617a 10.2.33.99:7005
   slots:[10923-11832] (910 slots) master
   1 additional replica(s)
S: 8e132a2b94c27ed68e1bd606f3427f88b2e51978 10.2.33.97:7012
   slots: (0 slots) slave
   replicates 36638350b7f02cff50256b7022c6449389ee3f7c
M: 6704e73a119afac449423b21bccb27eb51b89dc3 10.2.33.99:7006
   slots:[13653-14563] (911 slots) master
   1 additional replica(s)
S: 1cb7ac47f51482fd4f5a8ff01932db49a17c77e0 10.2.33.99:7012
   slots: (0 slots) slave
   replicates ee6dba34f20d7e0983f6e8ae272b5840ceaaed60
S: f550bb10934d5e730a744c9569753b21de82059c 10.2.33.97:7011
   slots: (0 slots) slave
   replicates ccdb1fa7b62d05bc13eef95079595a8bf35adbf2
S: b95bf68de296985ba4d63f2fd5f3a897c0bc0e9e 10.2.33.99:7009
   slots: (0 slots) slave
   replicates b845440f9eba55225d15c7bb96bfebbb96b7e35b
S: 6723094f3a3d1a5a8d7ed7b2f3862da8a22a0d9d 10.2.33.97:7007
   slots: (0 slots) slave
   replicates 589578cc5d7ac2a3855104fc04a50dfab218e34c
M: b845440f9eba55225d15c7bb96bfebbb96b7e35b 10.2.33.97:7002
   slots:[4551-5460] (910 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

After creation, check the cluster node information:

redis-cli -h 10.2.33.99 -p 7001 cluster nodes 
08837dac086a372c81f08ac0502182903d024268 10.2.33.99:7007@17007 slave 449c981a6bcf2b56e544376a0fb0e8087a7172f7 0 1638169155000 30 connected
ff3f65e20d28268f04a3ffb03d27d69292230ee4 10.2.33.98:7004@17004 master - 0 1638169155000 16 connected 9102-10011
c5537d8d2f98e52bcb4e490198d5f3c547f38453 10.2.33.98:7003@17003 master - 0 1638169156583 15 connected 6372-7281
d6c31f4cba9b83a067d275afb6c347f0408e3e4f 10.2.33.98:7011@17011 slave 02ee3438f7b6dba90218f19d6e333b57284b617a 0 1638169156168 5 connected
589578cc5d7ac2a3855104fc04a50dfab218e34c 10.2.33.98:7001@17001 master - 0 1638169152000 13 connected 910-1819
035b557eba9a7e1651a7573d65941d8cb8bf0c81 10.2.33.99:7008@17008 slave ee14b2481dbb8cc5cc14ec018b13506514ec9ab0 0 1638169158203 25 connected
449c981a6bcf2b56e544376a0fb0e8087a7172f7 10.2.33.97:7006@17006 master - 0 1638169157186 30 connected 15474-16383
aac971872eb32bebd35c67712d13a12a457fdf46 10.2.33.99:7003@17003 master - 0 1638169153000 3 connected 5461-6371
e007b6251e39dda9826131823e26c9e6c7e39076 10.2.33.98:7002@17002 master - 0 1638169154000 14 connected 3641-4550
c9ffbaf72cf3b7c527f346f486c2cd04f6aab5c7 10.2.33.99:7002@17002 master - 0 1638169156000 2 connected 2731-3640
aaa638883904338e934f5f17366ace654e4f290c 10.2.33.98:7012@17012 slave 6704e73a119afac449423b21bccb27eb51b89dc3 0 1638169152000 6 connected
36638350b7f02cff50256b7022c6449389ee3f7c 10.2.33.98:7006@17006 master - 0 1638169152514 18 connected 14564-15473
74bc6a0d4f852e51d9e1e469050eaa6a791c5091 10.2.33.97:7009@17009 slave c5537d8d2f98e52bcb4e490198d5f3c547f38453 0 1638169153000 15 connected
ee14b2481dbb8cc5cc14ec018b13506514ec9ab0 10.2.33.97:7001@17001 master - 0 1638169154000 25 connected 1820-2730
ccdb1fa7b62d05bc13eef95079595a8bf35adbf2 10.2.33.98:7005@17005 master - 0 1638169153000 17 connected 11833-12742
01e76676422bf795772fefa429fae245a1059586 10.2.33.98:7008@17008 slave c9ffbaf72cf3b7c527f346f486c2cd04f6aab5c7 0 1638169153632 2 connected
48f2496e9b796510e102004f45a10b695469dc03 10.2.33.97:7004@17004 master - 0 1638169153000 28 connected 10012-10922
c484dd79a9a589fef3c8ae56abc34c72c1e88ca8 10.2.33.99:7010@17010 slave 55b2f51633338e2e5b53a143949a5ed73dc9bf2a 0 1638169157000 27 connected
55b2f51633338e2e5b53a143949a5ed73dc9bf2a 10.2.33.97:7003@17003 master - 0 1638169151082 27 connected 7282-8191
6ca01a7a10dfcba441981e408e5a86a8163dca9e 10.2.33.98:7010@17010 slave c71c180dd2616d8cb778434ed4bbf898249471aa 0 1638169155000 4 connected
68dc65a324759a383e260f4d0e6ca2203a1a5415 10.2.33.97:7010@17010 slave ff3f65e20d28268f04a3ffb03d27d69292230ee4 0 1638169155566 16 connected
ee6dba34f20d7e0983f6e8ae272b5840ceaaed60 10.2.33.97:7005@17005 master - 0 1638169153000 29 connected 12743-13652
20785e8285089d52e9c3dc8dcf9225b8d647ec1f 10.2.33.98:7009@17009 slave aac971872eb32bebd35c67712d13a12a457fdf46 0 1638169154000 3 connected
e60e7584f3803b5215257979b7b64bbaf9bea670 10.2.33.97:7008@17008 slave e007b6251e39dda9826131823e26c9e6c7e39076 0 1638169154000 14 connected
4731491c19f88399437d420a074dde9372075cfd 10.2.33.98:7007@17007 slave 16199e4522ec3e3917e922106ef952e625101108 0 1638169156068 1 connected
c71c180dd2616d8cb778434ed4bbf898249471aa 10.2.33.99:7004@17004 master - 0 1638169157600 4 connected 8192-9101
255891184b7c76b1ebc7e5a0b14b8e569e52462e 10.2.33.99:7011@17011 slave 48f2496e9b796510e102004f45a10b695469dc03 0 1638169153000 28 connected
02ee3438f7b6dba90218f19d6e333b57284b617a 10.2.33.99:7005@17005 master - 0 1638169155000 5 connected 10923-11832
8e132a2b94c27ed68e1bd606f3427f88b2e51978 10.2.33.97:7012@17012 slave 36638350b7f02cff50256b7022c6449389ee3f7c 0 1638169154000 18 connected
6704e73a119afac449423b21bccb27eb51b89dc3 10.2.33.99:7006@17006 master - 0 1638169155000 6 connected 13653-14563
1cb7ac47f51482fd4f5a8ff01932db49a17c77e0 10.2.33.99:7012@17012 slave ee6dba34f20d7e0983f6e8ae272b5840ceaaed60 0 1638169156000 29 connected
16199e4522ec3e3917e922106ef952e625101108 10.2.33.99:7001@17001 myself,master - 0 1638169153000 1 connected 0-909
f550bb10934d5e730a744c9569753b21de82059c 10.2.33.97:7011@17011 slave ccdb1fa7b62d05bc13eef95079595a8bf35adbf2 0 1638169154000 17 connected
b95bf68de296985ba4d63f2fd5f3a897c0bc0e9e 10.2.33.99:7009@17009 slave b845440f9eba55225d15c7bb96bfebbb96b7e35b 0 1638169154000 26 connected
6723094f3a3d1a5a8d7ed7b2f3862da8a22a0d9d 10.2.33.97:7007@17007 slave 589578cc5d7ac2a3855104fc04a50dfab218e34c 0 1638169155566 13 connected
b845440f9eba55225d15c7bb96bfebbb96b7e35b 10.2.33.97:7002@17002 master - 0 1638169157000 26 connected 4551-5460

That completes the configuration.

4. Failure Handling, Adding and Removing Nodes


1. Add a new node to the cluster, e.g. 10.2.33.96:7008 as a slave (--cluster-slave), specifying the master node's cluster ID with --cluster-master-id:

redis-cli -h 10.2.33.99 -p 7005 --cluster add-node 10.2.33.96:7008 10.2.33.99:7005 --cluster-slave --cluster-master-id 36638350b7f02cff50256b7022c6449389ee3f7c

2. When a node goes down and later comes back up, proceed as follows.

#Note: e.g. if 7002 on 10.2.33.99 crashed and has been restarted, do the following
#Check the node state
redis-cli -h 10.2.33.99 -p 7002 cluster nodes
2b47046801c5096fa7788210288870ffbae820cc 10.2.33.99:7002@17002 myself,master - 0 0 0 connected

#Check the node and cluster status
redis-cli --cluster check 10.2.33.99:7002
10.2.33.99:7002 (2b470468...) -> 0 keys | 0 slots | 0 slaves.
[OK] 0 keys in 1 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.2.33.99:7002)
M: 2b47046801c5096fa7788210288870ffbae820cc 10.2.33.99:7002
   slots: (0 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.

#Fix the node
redis-cli --cluster fix 10.2.33.99:7002
10.2.33.99:7002 (2b470468...) -> 0 keys | 0 slots | 0 slaves.
[OK] 0 keys in 1 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.2.33.99:7002)
M: 2b47046801c5096fa7788210288870ffbae820cc 10.2.33.99:7002
   slots: (0 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.

>>> Fixing slots coverage...
The following uncovered slots have no keys across the cluster:
[0-16383]
Fix these slots by covering with a random node? (type 'yes' to accept): yes
>>> Covering slot 14876 with 10.2.33.99:7002
>>> Covering slot 13993 with 10.2.33.99:7002
>>> Covering slot 10272 with 10.2.33.99:7002
>>> Covering slot 694 with 10.2.33.99:7002


#Check again
redis-cli --cluster check 10.2.33.99:7002
10.2.33.99:7002 (2b470468...) -> 0 keys | 16384 slots | 0 slaves.
[OK] 0 keys in 1 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.2.33.99:7002)
M: 2b47046801c5096fa7788210288870ffbae820cc 10.2.33.99:7002
   slots:[0-16383] (16384 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

#View the cluster
redis-cli -h 10.2.33.99 -p 7002 cluster nodes
2b47046801c5096fa7788210288870ffbae820cc 10.2.33.99:7002@17002 myself,master - 0 0 1 connected 0-16383


3. Shut down a node in the cluster (stop the instance process):

redis-cli -h 10.2.33.97 -p 7007 shutdown

4. Remove a node from the cluster:

redis-cli --cluster del-node 10.2.33.97:7007 6723094f3a3d1a5a8d7ed7b2f3862da8a22a0d9d
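The node ID passed to del-node is the first field of the matching line in the `cluster nodes` output. A small sketch of pulling it out with awk, using a sample line from the output above rather than querying a live server:

```shell
#!/bin/bash
# Extract a node's ID (field 1) from a `cluster nodes` line.
# The sample line is copied from this article's cluster output.
line='6723094f3a3d1a5a8d7ed7b2f3862da8a22a0d9d 10.2.33.97:7007@17007 slave 589578cc5d7ac2a3855104fc04a50dfab218e34c 0 1638169155566 13 connected'
node_id=$(echo "$line" | awk '{print $1}')
echo "$node_id"
```

Against a live cluster you would pipe `redis-cli -h <host> -p <port> cluster nodes` through a `grep` for the node's address before the awk step.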
