Deploying a Kubernetes Cluster with kubeadm on CentOS 7.6

1.1. CentOS 7.6

Quickly deploy a Kubernetes cluster using kubeadm.

1.2. Master Node

The Master node runs the three most important components of the Kubernetes project: the apiserver, the scheduler, and the controller-manager.
apiserver: provides the API for managing the cluster
scheduler: schedules Pods onto the cluster's worker nodes
controller-manager: a collection of controllers that watch the state of the whole cluster through the apiserver

1.2.1. Confirm the OS Version and Set the Hostname

1. Check the OS version
[root@iZ2ze7ftggknd1fplnxygqZ ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core)
2. Set the hostname
hostnamectl set-hostname kubernetes01
3. Don't forget to update the /etc/hosts file
[root@kubernetes01 ~]# cat /etc/hosts
127.0.0.1       localhost       localhost.localdomain   localhost4      localhost4.localdomain4
::1     localhost       localhost.localdomain   localhost6      localhost6.localdomain6
# kubernetes-cluster
10.5.0.206 kubernetes01
...

4. Disable the swap partition
sudo swapoff -a
To disable it permanently:
sudo vi /etc/fstab
and comment out the /dev/mapper/centos-swap swap line.
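If you prefer a non-interactive edit, a minimal sketch (verify the resulting /etc/fstab afterwards) is:
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab    # comment out any active swap entry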

1.2.2. Stop the Firewall

systemctl stop firewalld && systemctl disable firewalld

1.2.3. Check Whether SELinux Is Disabled

[root@kubernetes01 ~]# setenforce 0
setenforce: SELinux is disabled
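Here SELinux is already disabled. If it were still enabled, setenforce 0 would only switch it to permissive mode for the current boot; a common permanent fix (a sketch, not part of the original steps) is:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config    # takes effect after a reboot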

1.2.4. Set Kernel Routing Parameters in Advance

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1    
vm.swappiness=0
EOF
Then apply the settings:
sysctl --system

1.2.5. Install docker-ce (Mind docker-ce / Kubernetes Version Compatibility!)

Install docker-ce 18.06.1 via yum
[root@kubernetes01 ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@kubernetes01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@kubernetes01 ~]# yum -y install docker-ce-18.06.1.ce   --install Docker
[root@kubernetes01 ~]# systemctl start docker.service    --start Docker
[root@kubernetes01 ~]# systemctl enable docker   --enable Docker at boot
[root@kubernetes01 ~]# docker --version          --check the version
Docker version 18.06.1-ce, build e68fc7a

1.2.6. Install kubelet, kubeadm, and kubectl

1. Configure the Aliyun yum mirror as the repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
2. Import the GPG key
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
3. Install the packages
yum install -y kubelet-1.12.1
yum install -y kubectl-1.12.1
yum install -y kubeadm-1.12.1

systemctl enable kubelet.service

1.2.7. Verify the Installed Versions

[root@kubernetes01 ~]# kubelet --version
Kubernetes v1.12.1
[root@kubernetes01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@kubernetes01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

(The "connection refused" line from kubectl version is expected at this point: the apiserver is not running yet, so only the client version can be reported.)
Docker images of the Kubernetes components required by kubeadm v1.12.1:
k8s.gcr.io/kube-apiserver:v1.12.1
k8s.gcr.io/kube-controller-manager:v1.12.1
k8s.gcr.io/kube-scheduler:v1.12.1
k8s.gcr.io/kube-proxy:v1.12.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

1.2.8. Pull the Docker Images for the Kubernetes Components

Because of the "special" network environment that nodes inside mainland China sit behind, we take a detour: pull the images from a mirror repository and re-tag them as k8s.gcr.io images.
[root@kubernetes01 ~]# cat pull_k8s_images.sh 
#!/bin/bash
# Pull each required image from the anjia0532 mirror, re-tag it as the
# k8s.gcr.io image that kubeadm expects, then remove the mirror tag.
images=(kube-proxy:v1.12.1 kube-scheduler:v1.12.1 kube-controller-manager:v1.12.1 kube-apiserver:v1.12.1 etcd:3.2.24 coredns:1.2.2 pause:3.1)
for imageName in "${images[@]}"; do
    docker pull anjia0532/google-containers.${imageName}
    docker tag  anjia0532/google-containers.${imageName} k8s.gcr.io/${imageName}
    docker rmi  anjia0532/google-containers.${imageName}
done
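Then run it to populate the local image cache, for example:
bash pull_k8s_images.sh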

1.2.9. Check the Image List

Remember the roles of the scheduler, controller-manager, and apiserver described at the beginning? Don't forget them!
[root@kubernetes01 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.12.1             61afff57f010        5 months ago        96.6MB
k8s.gcr.io/kube-apiserver            v1.12.1             dcb029b5e3ad        5 months ago        194MB
k8s.gcr.io/kube-scheduler            v1.12.1             d773ad20fd80        5 months ago        58.3MB
k8s.gcr.io/kube-controller-manager   v1.12.1             aa2dd57c7329        5 months ago        164MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        6 months ago        220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        7 months ago        39.2MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        15 months ago       742kB

1.2.10. Deploy the Kubernetes Master Node with kubeadm

Write kubeadm.yaml:
#
vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.12.1"
#
[root@kubernetes01 ~]# kubeadm init --config kubeadm.yaml
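Note: kubeadm v1.12 expects the newer v1alpha3 config API, so it may reject the older v1alpha1 MasterConfiguration format above. If kubeadm init --config complains about the apiVersion, either use the flag form shown next, or try a roughly equivalent v1alpha3 file (a sketch under that assumption, not verified in the original environment):
cat > kubeadm-v1alpha3.yaml << EOF
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: "v1.12.1"
apiServerExtraArgs:
  runtime-config: "api/all=true"
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
EOF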
Use the yaml file, or skip the config file and run directly:
[root@kubernetes01 ~]# kubeadm init --kubernetes-version=v1.12.1
Once the preflight checks pass and the initialization finishes, output like the following means the Kubernetes Master node has been deployed:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.240:6443 --token tmqcj2.mh9fcuc0sysex45l --discovery-token-ca-cert-hash sha256:41ddec5adf157213def47db08a43fd908f6fd915277f61e1fbb1089d72559543

Note: the token is valid for 24 hours by default.

Before you start using the cluster, you need to run the following commands as a regular user, as the output above points out, because access to a Kubernetes cluster is authenticated and encrypted by default.
So run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.2.11. Health Checks

1. Check the health of the core components
[root@kubernetes01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
2. Check the Master node status (it stays NotReady until a network plugin is deployed)
[root@kubernetes01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
kubernetes01   NotReady   master   4m15s   v1.12.1

1.2.12. Deploy the Weave Network Plugin

Weave is a popular container network solution that is easy to use and quite powerful.

[root@kubernetes01 ~]# kubectl apply -f https://git.io/weave-kube-1.6
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
Wait a moment and check the Master node status again; STATUS has changed because the network plugin is now in effect.
[root@kubernetes01 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
kubernetes-master   Ready    master   21m   v1.12.1

1.2.13. Check the Status of the Weave Pods on the Master Node

[root@kubernetes01 ~]# kubectl get pods -n kube-system -l name=weave-net -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE
weave-net-vhs56   2/2     Running   0          6m59s   10.5.0.206   kubernetes-master   <none>
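If a weave-net Pod gets stuck in a non-Running state, its logs usually explain why. A troubleshooting sketch, reusing the Pod name from the output above (the weave-net DaemonSet runs a "weave" and a "weave-npc" container):
kubectl -n kube-system logs weave-net-vhs56 -c weave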

1.2.14. Deploy the Dashboard (Visualization) Plugin

1. Pull the dashboard docker image and re-tag it (here the v1.10.0 image is re-tagged as v1.10.1 so that it matches the image name referenced in the manifest below)
docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
docker tag  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0   k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 
2. Download the dashboard YAML file and modify its final section (the Service) so that the dashboard can later be reached and logged into with a token: add type: NodePort and nodePort: 30001 as shown below. Note in particular that exposing port 30001 this way would be extremely unsafe in a production environment!
[root@kubernetes01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@kubernetes01 ~]# tail -n 20 kubernetes-dashboard.yaml
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
3. Deploy the dashboard
[root@kubernetes01 ~]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard configured
4. Check the dashboard Pod status
[root@kubernetes01 ~]# kubectl get pods -n kube-system |  grep dash
kubernetes-dashboard-65c76f6c97-f29nm   1/1     Running   0          3m8s
5. Get a login token
[root@kubernetes01 ~]# kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
Name:  namespace-controller-token-6m6t2
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi02bTZ0MiIsI
mt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwODk4Mjc3LTg1ZDktMTFlOS04MmJkLTAwMGMyOTRiNWRkMyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.PxqD49V-lOSgHBaXJ0vIINGuuHbFG8b7nyfu1-nHFrYX1blPK0-wzatuR1-q66XcbfKO5JhEOw5uZKGyC-kmYQ1fB7Pa2gt7ayIV9saJlqAFhuPoKiFhUbcs7UBq8rkskuwZj59_bFUiEFyxaBqVdkpC4uvhdvajqzTCdKvD86NhmaFJKQUFvj0dOmXcDOr6f7ZPfO5AR_MQpifPk8amcxMNB0kD5hMyI5CNY4oq3nnCmxC_Q_vhabVe3o149Yx4oYb_lIe-EWK3Z9eGQXzGxBytAaWmncsJJ4gDpbeZVIz45EnabFk-JW-cGwY_tjlycABSQmcFHT3czPW-hWaBGg
Visit https://192.168.1.240:30001 and log in to the dashboard with the token; note that the protocol is HTTPS!
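The token above comes from the namespace-controller service account. A common alternative (an extra step, not part of the original walkthrough) is to create a dedicated admin account just for dashboard logins:
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret -o name | grep dashboard-admin) | grep token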

1.2.15. Deploy a Container Storage Plugin

Rook is a Ceph-based Kubernetes storage plugin for production-grade persistent storage, and it is well worth exploring.

cd /usr/local/src
yum -y install git
git clone https://github.com/rook/rook.git
cd /usr/local/src/rook/cluster/examples/kubernetes/ceph
kubectl apply -f operator.yaml
kubectl apply -f cluster.yaml 
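To confirm that the Rook operator and Ceph cluster Pods come up, check their namespaces (the namespace names depend on the Rook version you cloned; rook-ceph-system and rook-ceph are the usual defaults, so treat these as an assumption):
kubectl get pods -n rook-ceph-system
kubectl get pods -n rook-ceph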

1.3. Worker Nodes

Similar to the Master node installation, first finish the preparation: set the hostname, stop the firewall, configure the kernel parameters, set up the yum repositories, and so on. Since there are 9 worker nodes, an Ansible playbook combined with a shell script was used to save time (a minimal ad-hoc sketch follows; the full per-node script is shown right after it).
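For example, with the script below saved as worker_install.sh, one way to push it to every worker is an Ansible ad-hoc run (the inventory file and the workers group name are hypothetical, not from the original):
ansible workers -i inventory -m script -a "worker_install.sh" --forks 9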

#!/bin/bash
#pre config
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1    
vm.swappiness=0
EOF
sysctl --system

#install docker-ce
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-18.06.1.ce
systemctl start docker.service 
systemctl enable docker

# install kubeadm
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
yum install -y kubelet-1.12.1
yum install -y kubectl-1.12.1
yum install -y kubeadm-1.12.1

systemctl enable kubelet.service

# install kube-proxy and pause
images=(kube-proxy:v1.12.1 pause:3.1)
for imageName in "${images[@]}"; do
    docker pull anjia0532/google-containers.${imageName}
    docker tag  anjia0532/google-containers.${imageName} k8s.gcr.io/${imageName}
    docker rmi  anjia0532/google-containers.${imageName}
done

# join cluster
kubeadm join 192.168.1.240:6443 --token tmqcj2.mh9fcuc0sysex45l --discovery-token-ca-cert-hash sha256:41ddec5adf157213def47db08a43fd908f6fd915277f61e1fbb1089d72559543

Note: if the token has expired, a new one has to be generated on the Master node.
1. Generate a new token. Even though the node cannot reach dl.k8s.io, kubeadm falls back to the local client version and still generates the token 6wrgu3.bq767up4bko5l6as:
kubeadm token create
I0605 13:45:19.770133  116422 version.go:89] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0605 13:45:19.770310  116422 version.go:94] falling back to the local client version: v1.12.1
6wrgu3.bq767up4bko5l6as
2. Get the sha256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
sha256:41ddec5adf157213def47db08a43fd908f6fd915277f61e1fbb1089d72559543
3. Go back to the worker node and run the join command (note the new token 6wrgu3.bq767up4bko5l6as):
kubeadm join 192.168.1.240:6443 --token 6wrgu3.bq767up4bko5l6as --discovery-token-ca-cert-hash  sha256:41ddec5adf157213def47db08a43fd908f6fd915277f61e1fbb1089d72559543 --ignore-preflight-errors=Swap

Note that the script above does not set the hostname!

1.4. Miscellaneous

Some problems encountered along the way:
kubeadm v1.12.1 failing to install correctly, nodes reporting [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables], and failures pulling images from k8s.gcr.io. All of these are easy to solve; if you get stuck, don't panic, just work through them one at a time.
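For the bridge-nf-call-iptables preflight error, the usual cause is that the br_netfilter kernel module is not loaded, so the sysctl keys from 1.2.4 do not exist yet. A hedged fix sketch:
modprobe br_netfilter                                         # load the bridge netfilter module now
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load it automatically on boot
sysctl --system                                               # re-apply /etc/sysctl.d/k8s.conf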

(Figure: Kubernetes cluster overview)

1.5. Addendum

1.5.1. Setup on Nodes Outside Mainland China

Here I used three Hong Kong nodes for the deployment. Below is the basic procedure on the master node.
cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core)
1. Set the hostname
hostname kubernetes001
vim /etc/hosts
2. Turn off the firewall (as in 1.2.2) and set SELinux to permissive
setenforce 0
3. Adjust the kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1    
vm.swappiness=0
EOF
sysctl --system
4. Install docker-ce via yum
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
/bin/systemctl start docker.service 
[root@kubernetes001 ~]# docker --version
Docker version 18.09.5, build e8ff056
5. Install the kubeadm-related packages via yum
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum install -y kubelet
yum install -y kubectl
yum install -y kubeadm
[root@kubernetes001 ~]# kubelet --version
Kubernetes v1.14.1
[root@kubernetes001 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@kubernetes001 ~]# kubeadm  version 
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
6. Pull the Kubernetes component docker images with docker (these nodes can reach k8s.gcr.io directly; kubeadm init will also pull anything still missing)
docker pull k8s.gcr.io/kube-proxy:v1.14.1
docker pull k8s.gcr.io/kube-apiserver:v1.14.1
docker pull k8s.gcr.io/kube-scheduler:v1.14.1
docker pull k8s.gcr.io/kube-controller-manager:v1.14.1
docker pull k8s.gcr.io/etcd:3.2.24
docker pull k8s.gcr.io/coredns:1.2.2
docker pull k8s.gcr.io/pause
7. Initialize the master node
kubeadm init --kubernetes-version=v1.14.1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
How the worker nodes join the master, and the steps performed on the worker nodes, are omitted here.
8. Install the network plugin
kubectl apply -f https://git.io/weave-kube-1.6
9. Check the status
[root@kubernetes001 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
kubernetes001   Ready    master   165m   v1.14.1
kubernetes002   Ready    <none>   116m   v1.14.1
kubernetes003   Ready    <none>   115m   v1.14.1
[root@kubernetes001 ~]# kubectl get pods -n kube-system -l name=weave-net -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP             NODE            NOMINATED NODE   READINESS GATES
weave-net-48kv8   2/2     Running   0          119m   172.31.5.117   kubernetes002   <none>           <none>
weave-net-pchlk   2/2     Running   0          118m   172.31.5.118   kubernetes003   <none>           <none>
weave-net-wcbr5   2/2     Running   0          167m   172.31.5.116   kubernetes001   <none>           <none>
10. Install the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
vim kubernetes-dashboard.yaml    --edit and modify the last few lines (the Service section), as in 1.2.14
kubectl apply -f kubernetes-dashboard.yaml
kubectl get pods -n kube-system |  grep dash
kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
11. Install the storage plugin
cd /usr/local/src
yum -y install git
git clone https://github.com/rook/rook.git
cd /usr/local/src/rook/cluster/examples/kubernetes/ceph
kubectl apply -f common.yaml
kubectl apply -f operator.yaml
kubectl apply -f cluster.yaml 

1.5.2. What If You Forget the Generated kubeadm join Command?

[root@kubernetes001 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
nvv5wu.7e1v9oniyak5se3a   23h       2019-05-06T11:29:59+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@kubernetes001 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
a90d6683aef10b826041a21de487d5274fc80a5aa6edb67abe638251ce59e3ed
Run the join command on node 2:
[root@kubernetes002 ~]# kubeadm join 172.31.5.116:6443 --token nvv5wu.7e1v9oniyak5se3a --discovery-token-ca-cert-hash sha256:a90d6683aef10b826041a21de487d5274fc80a5aa6edb67abe638251ce59e3ed
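A shortcut worth knowing (supported by the kubeadm versions used here, to the best of my knowledge) is to have the master print a complete join command directly:
kubeadm token create --print-join-command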

 
