K8s Cluster Setup

Cluster environment initialization

Perform the same steps on all nodes.

Prepare three machines:

k8s-master01 10.14.2.150
k8s-node01   10.14.2.151
k8s-node02   10.14.2.152

1. Set the system hostnames (run the matching command on each machine)

[root@localhost ~]# hostnamectl  set-hostname k8s-master01
[root@localhost ~]# hostnamectl set-hostname k8s-node01
[root@localhost ~]# hostnamectl set-hostname k8s-node02

2. Add mutual name resolution to /etc/hosts

[root@localhost ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.14.2.150 k8s-master01
10.14.2.151  k8s-node01
10.14.2.152  k8s-node02
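The same three entries can be pushed onto every machine with a short idempotent loop. This is a sketch that writes to a scratch copy of /etc/hosts so it is safe to try anywhere; set HOSTS_FILE=/etc/hosts when running it for real:

```shell
# Append the cluster entries only if they are not already present,
# so the script can be re-run safely.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"   # scratch copy for this demo
cp /etc/hosts "$HOSTS_FILE"
for entry in "10.14.2.150 k8s-master01" "10.14.2.151 k8s-node01" "10.14.2.152 k8s-node02"; do
    grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
grep 'k8s-' "$HOSTS_FILE"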

3. Test connectivity between the hosts

[root@localhost ~]# ping k8s-node01
PING k8s-node01 (10.14.2.151) 56(84) bytes of data.
64 bytes from k8s-node01 (10.14.2.151): icmp_seq=1 ttl=64 time=0.158 ms
64 bytes from k8s-node01 (10.14.2.151): icmp_seq=2 ttl=64 time=0.254 ms
64 bytes from k8s-node01 (10.14.2.151): icmp_seq=3 ttl=64 time=0.223 ms

[root@localhost ~]# ping k8s-master01
PING k8s-master01 (10.14.2.150) 56(84) bytes of data.
64 bytes from k8s-master01 (10.14.2.150): icmp_seq=1 ttl=64 time=0.151 ms
64 bytes from k8s-master01 (10.14.2.150): icmp_seq=2 ttl=64 time=0.142 ms

4. Install dependency packages

[root@localhost ~]# yum -y install conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

5. Switch the firewall to iptables with an empty rule set

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable  firewalld
[root@localhost ~]# yum -y install iptables-services
[root@localhost ~]# systemctl start iptables
[root@localhost ~]# systemctl enable iptables
[root@localhost ~]#  iptables -F 
[root@localhost ~]# service iptables save

6. Disable the swap partition

[root@localhost ~]# swapoff -a

# Also delete or comment out the swap line in /etc/fstab (e.g. "/mnt/swap swap swap defaults 0 0");
# the sed below comments out any such line:

[root@localhost ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Confirm that swap is off (the Swap line should show all zeros)
[root@localhost ~]# free -m
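To see exactly what that sed expression does, here is a sketch that runs it against a sample fstab in /tmp (nothing real is touched): any line containing " swap " gets a leading `#`.

```shell
# Build a sample fstab with one swap entry and one regular mount.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/mnt/swap               swap swap defaults 0 0
EOF
# Same rule as in step 6: comment out every line containing " swap ".
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line ends up commented; the root filesystem entry is left alone.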

7. Adjust the swappiness parameter

# Takes effect immediately (until reboot)
[root@localhost ~]# echo 0 > /proc/sys/vm/swappiness
# Persist across reboots
[root@localhost ~]# vim /etc/sysctl.conf
vm.swappiness=0

# Apply the configuration
[root@localhost ~]# sysctl -p

8. Disable SELinux

[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

9. Tune kernel parameters for Kubernetes

[root@localhost ~]# cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Forbid the use of swap; allow it only when the system is OOM
vm.swappiness=0
# Do not check whether physical memory is sufficient
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@localhost ~]# cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
[root@localhost ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
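A typo in this file only surfaces when `sysctl -p` runs, so a quick syntax pre-check can help. This is a sketch that validates a trimmed copy of the file, checking that every non-comment line has the key=value shape:

```shell
# Trimmed copy of kubernetes.conf for the demo; point the awk check at the
# real /etc/sysctl.d/kubernetes.conf on an actual node.
cat > /tmp/kubernetes.conf <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
# Every non-comment, non-empty line must split into exactly two '='-fields.
awk -F= '!/^#/ && NF {if (NF != 2) {print "bad line: " $0; bad=1}} END {exit bad}' \
    /tmp/kubernetes.conf && echo "syntax OK"
```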

10. Set the system time zone

# Set the time zone to Asia/Shanghai
[root@localhost ~]# timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
[root@localhost ~]# timedatectl set-local-rtc 0
# Restart services that depend on the system time
[root@localhost ~]# systemctl restart rsyslog
[root@localhost ~]# systemctl restart crond

11. Stop services that are not needed

[root@localhost ~]# systemctl stop postfix
[root@localhost ~]# systemctl disable postfix

12. Set up rsyslogd and systemd journald

# Directory for persistent log storage
[root@localhost ~]# mkdir /var/log/journal
[root@localhost ~]# mkdir /etc/systemd/journald.conf.d
[root@localhost ~]# cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk usage: 10G
SystemMaxUse=10G
# Maximum size of a single log file: 200M
SystemMaxFileSize=200M
# Retain logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
[root@localhost ~]# systemctl restart systemd-journald

13. Upgrade the system kernel

The 3.10.x kernel that ships with CentOS 7.x has known bugs that make Docker and Kubernetes unstable.

[root@localhost ~]# wget http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
[root@localhost ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if not, install again!
[root@localhost ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
# Make the system boot from the new kernel by default
[root@localhost ~]# grub2-set-default 'CentOS Linux (5.4.113-1.el7.elrepo.x86_64) 7 (Core)'

[root@localhost ~]# reboot

# Check the kernel version
[root@k8s-master01 ~]# uname -r
5.4.113-1.el7.elrepo.x86_64

14. Enable IPVS support

Prerequisites for kube-proxy to use IPVS:

[root@k8s-master01 ~]# modprobe br_netfilter
[root@k8s-master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@k8s-master01 ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
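After the next reboot it is worth confirming that every module the script names is actually loaded. This sketch just extracts the module list from a copy of the script; on a real host you would compare each name against `lsmod` output:

```shell
# Re-create the module script (copy for the demo; the real one lives in
# /etc/sysconfig/modules/ipvs.modules).
cat > /tmp/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Pull out the third field of every modprobe line: the module name.
awk '/^modprobe/ {print $3}' /tmp/ipvs.modules
```

On a node, each printed name can be checked with `lsmod | grep -w <name>`.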

15. Install Docker

[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@k8s-master01 ~]# yum update -y && yum install -y docker-ce

After rebooting, check the kernel version; if it is not the new one, set the default again and reboot.
[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# uname -r
3.10.0-1160.24.1.el7.x86_64
[root@k8s-master01 ~]# grub2-set-default 'CentOS Linux (5.4.113-1.el7.elrepo.x86_64) 7 (Core)'
[root@k8s-master01 ~]# uname -r
5.4.113-1.el7.elrepo.x86_64
[root@k8s-master01 ~]# systemctl start docker
[root@k8s-master01 ~]# systemctl enable docker

16. Create the /etc/docker directory and configure the daemon

[root@k8s-master01 ~]# mkdir /etc/docker
# Configure daemon.json
[root@k8s-master01 ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
[root@k8s-master01 ~]# mkdir -p /etc/systemd/system/docker.service.d
# Restart the Docker service
[root@k8s-master01 ~]# systemctl daemon-reload && systemctl restart docker && systemctl enable docker
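dockerd will refuse to start if daemon.json is malformed, so it pays to validate the JSON before restarting. A sketch using Python's json.tool (`python3` here, an assumption; on a stock CentOS 7 box it may be `python -m json.tool`):

```shell
# Scratch copy of daemon.json for the demo; validate /etc/docker/daemon.json
# on a real node before "systemctl restart docker".
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
# json.tool exits non-zero on a parse error, so the echo only runs on valid JSON.
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```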

17. Install kubeadm (master and worker nodes)

[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

[root@k8s-master01 ~]# yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
[root@k8s-master01 ~]# systemctl enable kubelet.service

Install Kubernetes

Upload the kubeadm image package to the server.
Package to download:
kubeadm-basic.images

[root@k8s-master01 ~]# ll
total 235628
-rw-------. 1 root root      1332 Apr 17 14:54 anaconda-ks.cfg
-rw-r--r--. 1 root root      8656 Jul 24  2017 elrepo-release-7.0-3.el7.elrepo.noarch.rpm
-rw-r--r--  1 root root 241260752 Aug  5  2019 kubeadm-basic.images.tar.gz
-rw-r--r--. 1 root root       482 Apr 17 15:07 kubernetes.conf
[root@k8s-master01 ~]# tar -xf kubeadm-basic.images.tar.gz 

Write a script to import the images:

[root@k8s-master01 ~]# vim load-images.sh
#!/bin/bash
# Load every image tarball under /root/kubeadm-basic.images into Docker
ls /root/kubeadm-basic.images > /tmp/images-list.txt
cd /root/kubeadm-basic.images

for i in $( cat /tmp/images-list.txt )
do
    docker load -i $i
done

rm -f /tmp/images-list.txt

Import the images:

[root@k8s-master01 ~]# chmod +x load-images.sh
[root@k8s-master01 ~]# ./load-images.sh 
fe9a8b4f1dcc: Loading layer [==================================================>]  43.87MB/43.87MB
d1e1f61ac9f3: Loading layer [==================================================>]  164.5MB/164.5MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.15.1
fb61a074724d: Loading layer [==================================================>]  479.7kB/479.7kB
c6a5fc8a3f01: Loading layer [==================================================>]  40.05MB/40.05MB
Loaded image: k8s.gcr.io/coredns:1.3.1
8a788232037e: Loading layer [==================================================>]   1.37MB/1.37MB
30796113fb51: Loading layer [==================================================>]    232MB/232MB
6fbfb277289f: Loading layer [==================================================>]  24.98MB/24.98MB
Loaded image: k8s.gcr.io/etcd:3.3.10
aa3154aa4a56: Loading layer [==================================================>]  116.4MB/116.4MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.15.1
e17133b79956: Loading layer [==================================================>]  744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause:3.1
15c9248be8a9: Loading layer [==================================================>]  3.403MB/3.403MB
00bb677df982: Loading layer [==================================================>]  36.99MB/36.99MB
Loaded image: k8s.gcr.io/kube-proxy:v1.15.1
e8d95f5a4f50: Loading layer [==================================================>]  38.79MB/38.79MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.15.1
[root@k8s-master01 ~]# 

Then copy the image package and the loader script to the other nodes:

[root@k8s-node01 ~]# scp  root@10.14.2.150:/root/kubeadm-basic.images.tar.gz ./
[root@k8s-node01 ~]# scp  root@10.14.2.150:/root/load-images.sh ./

[root@k8s-node02 ~]# scp  root@10.14.2.150:/root/kubeadm-basic.images.tar.gz ./
[root@k8s-node02 ~]# scp  root@10.14.2.150:/root/load-images.sh ./

The other nodes must import the images as well:

[root@k8s-node01 ~]# tar -xf kubeadm-basic.images.tar.gz 
[root@k8s-node01 ~]# ./load-images.sh
[root@k8s-node02 ~]# tar -xf kubeadm-basic.images.tar.gz 
[root@k8s-node02 ~]# ./load-images.sh 

Initialize the master node

Print the default init configuration template into kubeadm-config.yaml:

[root@k8s-master01 ~]# kubeadm config print init-defaults > kubeadm-config.yaml

Edit the template:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.14.2.150
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  # Declare the pod network CIDR
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

# Append this block to switch the default kube-proxy mode to IPVS
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Start the initialization, pointing kubeadm at the config file:

[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

Output like the following means the initialization succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.14.2.150:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ba8a610a41766901f4870a337c35dc4ec593d8d5672cec9d4e3b6683fc820678 

Run the three suggested commands:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   NotReady   master   2m9s   v1.15.1

Tidy up the files, keeping the Kubernetes init configuration and log for later reference.

[root@k8s-master01 ~]# mkdir install-k8s
[root@k8s-master01 ~]# mv kubeadm-config.yaml  kubeadm-init.log  install-k8s/
[root@k8s-master01 ~]# cd install-k8s/
[root@k8s-master01 install-k8s]# mkdir core
[root@k8s-master01 install-k8s]# mv kubeadm-* core/
[root@k8s-master01 install-k8s]# mkdir plugin
[root@k8s-master01 install-k8s]# mkdir flannel
[root@k8s-master01 install-k8s]# cd flannel/

Deploy the pod network (flannel)

[root@k8s-master01 flannel]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master01 flannel]# kubectl create -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4hmsx               1/1     Running   0          6m1s
coredns-5c98db65d4-qw8lf               1/1     Running   0          6m1s
etcd-k8s-master01                      1/1     Running   0          4m54s
kube-apiserver-k8s-master01            1/1     Running   0          5m5s
kube-controller-manager-k8s-master01   1/1     Running   0          5m3s
kube-flannel-ds-752lb                  1/1     Running   0          45s
kube-proxy-fcxnl                       1/1     Running   0          6m
kube-scheduler-k8s-master01            1/1     Running   0          4m52s
[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   6m47s   v1.15.1

Checking the network interfaces now shows the new flannel (and cni0) devices:

[root@k8s-master01 flannel]# ifconfig 
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255
        ether 3a:05:ea:fa:f9:f0  txqueuelen 1000  (Ethernet)
        RX packets 435  bytes 31725 (30.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 430  bytes 144366 (140.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:94:d0:2f:70  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.14.2.150  netmask 255.255.255.0  broadcast 10.14.2.255
        ether 00:0c:29:83:4f:89  txqueuelen 1000  (Ethernet)
        RX packets 271324  bytes 351494825 (335.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 163255  bytes 494502420 (471.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 10.244.0.0
        ether 0e:ef:b9:98:8d:f2  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 71179  bytes 14223191 (13.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 71179  bytes 14223191 (13.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1b77f93d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether 56:f9:d5:dc:f3:1b  txqueuelen 0  (Ethernet)
        RX packets 220  bytes 19144 (18.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 221  bytes 72555 (70.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1d6c1672: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether 4e:e9:cc:7d:83:50  txqueuelen 0  (Ethernet)
        RX packets 215  bytes 18671 (18.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 216  bytes 72105 (70.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

If these interfaces are present, the installation succeeded.

Join the worker nodes
The init output above includes a join command; run it on each worker node (node01 and node02):
kubeadm join 10.14.2.150:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ba8a610a41766901f4870a337c35dc4ec593d8d5672cec9d4e3b6683fc820678
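If you are wondering where the --discovery-token-ca-cert-hash value comes from: it is the SHA-256 of the cluster CA's DER-encoded public key. The sketch below demonstrates the derivation on a throwaway self-signed certificate; on the master, point `-in` at /etc/kubernetes/pki/ca.crt instead.

```shell
# Generate a throwaway CA cert purely for the demo (the real cluster CA is
# /etc/kubernetes/pki/ca.crt on the master).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
    -subj "/CN=demo-ca" -days 1 2>/dev/null
# Extract the public key, convert it to DER, hash it, and strip the dgst prefix.
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

When the original token has expired, running `kubeadm token create --print-join-command` on the master prints a fresh, complete join command.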

Output like the following means the node joined successfully:

[root@k8s-node01 ~]# kubeadm join 10.14.2.150:6443 --token ww6iai.v149eb6e6pp9y4lq     --discovery-token-ca-cert-hash sha256:69960927c75b6bb4e11e30493308ac85c679511082e188a49bbab8f144b3f739
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster


[root@k8s-node02 ~]# kubeadm join 10.14.2.150:6443 --token ww6iai.v149eb6e6pp9y4lq     --discovery-token-ca-cert-hash sha256:69960927c75b6bb4e11e30493308ac85c679511082e188a49bbab8f144b3f739
(same output as on node01)

Check the nodes from the master:

[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   14m     v1.15.1
k8s-node01     Ready    <none>   4m10s   v1.15.1
k8s-node02     Ready    <none>   46s     v1.15.1

The cluster is now deployed.
Additional nodes can be joined in exactly the same way.

Author: Rio520