Kubernetes study notes: building a k8s cluster with kubeadm on CentOS 7

Preface

1. Ways to deploy a Kubernetes cluster

minikube
Minikube is a tool that quickly runs a single-node Kubernetes cluster locally. It is good for trying out Kubernetes or for day-to-day development, but it cannot be used for production.

kubeadm
Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Binary packages
Download the official release binaries and deploy every component by hand. The process is fairly tedious.

In production, clusters are deployed either with kubeadm or from binary packages. Kubeadm lowers the barrier to entry but hides many details, which makes problems hard to troubleshoot. For real production use it is better to deploy from binary packages: more laborious, but it teaches you how the pieces work and makes later maintenance easier.

This article is a beginner's study note on building a simple k8s cluster with kubeadm. Corrections for any errors or omissions are very welcome.

2. Base environment

2.1 Host roles

IP          Hostname   Role
10.0.0.101  k8smaster  master
10.0.0.102  k8snode01  node

2.2 Host specs

Item      Value
Memory    2 GB
CPU       2 cores
OS        CentOS Linux release 7.4.1708 (Core)
kubelet   1.14.0
docker    docker-ce-18.09.4-3.el7.x86_64

Deployment steps

1. Base configuration for all nodes (run this section on both master and node)

1.1 Set hostnames

On server 10.0.0.101:

[root@k8smaster ~]# hostname
k8smaster

On server 10.0.0.102:

[root@k8snode01 ~]# hostname
k8snode01

1.2 Add the following two lines to /etc/hosts

10.0.0.101 k8smaster
10.0.0.102 k8snode01
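The two entries can also be appended non-interactively so the step is safe to rerun. A minimal sketch, writing to a local hosts.demo stand-in (on a real node, point HOSTS_FILE at /etc/hosts):

```shell
# Append each entry only if it is not already present (idempotent).
HOSTS_FILE=hosts.demo          # stand-in; use /etc/hosts on a real node
touch "$HOSTS_FILE"
for entry in "10.0.0.101 k8smaster" "10.0.0.102 k8snode01"; do
  grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```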

1.3 Disable swap

[root@k8smaster ~]# swapoff -a

To disable swap permanently, comment out the "/dev/mapper/centos-swap" line in /etc/fstab:


[root@k8smaster ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 31 23:03:49 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=ad34d4f1-a758-4924-8ae9-99d0d36939aa /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0    # comment out this line
[root@k8smaster ~]# 
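Instead of editing by hand, the swap line can be commented out with sed. A sketch, demonstrated on a local fstab.demo stand-in (on a real node, run the sed line against /etc/fstab):

```shell
# Build a minimal stand-in fstab for the demo.
cat > fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
# Prefix the swap line with '#' (the '&' re-inserts the matched text).
sed -i 's|^/dev/mapper/centos-swap|#&|' fstab.demo
grep swap fstab.demo
```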

1.4 Stop and disable firewalld

[root@k8smaster ~]# systemctl stop firewalld
[root@k8smaster ~]# systemctl disable firewalld

1.5 Disable SELinux

[root@k8smaster selinux]# cat /etc/selinux/config|grep "^SELINUX="
SELINUX=disabled
[root@k8smaster selinux]# 
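To reach the state shown above, set SELINUX=disabled in /etc/selinux/config (and run `setenforce 0` to switch off enforcement for the current session without a reboot). A sed sketch, demonstrated on a local selinux.demo stand-in:

```shell
# Minimal stand-in for /etc/selinux/config.
cat > selinux.demo <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
# Rewrite only the SELINUX= line; SELINUXTYPE= is left untouched.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux.demo
cat selinux.demo
```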

1.6 Set sysctl kernel parameters

Create a k8s.conf file with the following content:

[root@k8smaster ~]# cat /etc/sysctl.d/k8s.conf       # k8s.conf does not exist by default; create it yourself
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1    
vm.swappiness=0
[root@k8smaster ~]# 

Apply the settings (note: the bridge-nf-call sysctls require the br_netfilter kernel module, loaded with `modprobe br_netfilter`):

sysctl --system

1.7 Configure the Kubernetes yum repository

Create kubernetes.repo with the following content:

[root@k8smaster ~]# cat /etc/yum.repos.d/kubernetes.repo   
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
[root@k8smaster ~]# 

1.8 Install Docker 18

CentOS 7 normally installs Docker 1.13; here we upgrade to the latest version. Steps:

1. Make sure the kernel is 3.10 or newer: uname -a
2. Remove old versions: yum remove -y docker docker-common docker-selinux docker-engine   # best run even on a first install, or the install below may fail
3. Install prerequisites: yum install -y yum-utils device-mapper-persistent-data lvm2
4. Add the Docker yum repo: yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
5. List all available Docker versions: yum list docker-ce --showduplicates | sort -r
6. Install Docker: yum install docker-ce -y   # only the stable repo is enabled by default, so this installs the latest version; for a specific version: yum install docker-ce-18.06.0.ce-3.el7 -y
7. Enable at boot: systemctl enable docker
8. Start: systemctl start docker
9. Check status: systemctl status docker
10. Check the version: docker version

1.9 Install kubeadm, kubelet, and kubectl

[root@k8smaster yum.repos.d]# yum -y install  kubelet-1.14.0 kubeadm-1.14.0  kubectl-1.14.0 kubernetes-cni-0.7.5

Check the installed versions:

[root@k8smaster yum.repos.d]# rpm -qa docker-ce  kubelet kubeadm kubectl  kubernetes-cni
docker-ce-18.09.4-3.el7.x86_64
kubectl-1.14.0-0.x86_64
kubelet-1.14.0-0.x86_64
kubernetes-cni-0.7.5-0.x86_64
kubeadm-1.14.0-0.x86_64
[root@k8smaster yum.repos.d]# 

1.10 Enable docker and kubelet at boot and start docker

systemctl enable docker
systemctl enable kubelet.service
systemctl start docker
# run the next command only after `kubeadm init`:
systemctl start kubelet

1.11 Download the required images

1.11.1 Get the image list

[root@k8smaster ~]# kubeadm config images list
I0403 09:50:32.449434   11098 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0403 09:50:32.449896   11098 version.go:97] falling back to the local client version: v1.14.0
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

1.11.2 Generate a default kubeadm.conf

[root@k8smaster ~]#   kubeadm config print init-defaults > kubeadm.conf
[root@k8smaster ~]# ll
total 8
-rw-------. 1 root root 1502 Dec 14  2017 anaconda-ks.cfg
-rw-r--r--  1 root root  870 Apr  3 09:51 kubeadm.conf

1.11.3 Change the image repository in kubeadm.conf

The default repository is Google's k8s.gcr.io, which is unreachable from mainland China, so switch it to a domestic mirror; here we use Aliyun's.
Edit kubeadm.conf, set imageRepository to "registry.aliyuncs.com/google_containers", and confirm the Kubernetes version is v1.14.0, matching the image list from 1.11.1.

vim kubeadm.conf 
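The edit can also be scripted with sed. A sketch against a minimal stand-in file (on the master, run the sed line against the real kubeadm.conf):

```shell
# Stand-in with just the two fields we care about.
cat > kubeadm.conf.demo <<'EOF'
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.0
EOF
# Point imageRepository at the Aliyun mirror; leave the version alone.
sed -i 's|^imageRepository: .*|imageRepository: registry.aliyuncs.com/google_containers|' kubeadm.conf.demo
grep -E 'imageRepository|kubernetesVersion' kubeadm.conf.demo
```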

 

Pull the images:

[root@k8smaster ~]# kubeadm config images pull --config kubeadm.conf
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.14.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.14.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.14.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.3.1
[root@k8smaster ~]# 

Re-tag the images to the names kubeadm expects:

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.14.0    k8s.gcr.io/kube-apiserver:v1.14.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0    k8s.gcr.io/kube-controller-manager:v1.14.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.14.0   k8s.gcr.io/kube-scheduler:v1.14.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.14.0   k8s.gcr.io/kube-proxy:v1.14.0
docker tag registry.aliyuncs.com/google_containers/pause:3.1    k8s.gcr.io/pause:3.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10    k8s.gcr.io/etcd:3.3.10
docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1    k8s.gcr.io/coredns:1.3.1

Then remove the Aliyun-tagged duplicates:

docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.14.0
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.14.0
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.14.0
docker rmi registry.aliyuncs.com/google_containers/pause:3.1
docker rmi registry.aliyuncs.com/google_containers/etcd:3.3.10
docker rmi registry.aliyuncs.com/google_containers/coredns:1.3.1

Or do all three steps with a script:

[root@k8smaster ~]# cat image.sh 
#!/bin/bash
images=(kube-proxy:v1.14.0 kube-scheduler:v1.14.0 kube-controller-manager:v1.14.0 kube-apiserver:v1.14.0 etcd:3.3.10 coredns:1.3.1 pause:3.1)
for imageName in "${images[@]}"; do
  docker pull registry.aliyuncs.com/google_containers/$imageName
  docker tag  registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi  registry.aliyuncs.com/google_containers/$imageName
done

After this, `docker images` should list only the k8s.gcr.io-tagged images from 1.11.1.

1.12 Ignore the swap error

Kubernetes refuses to run with swap enabled, so we tell the kubelet to ignore this check.
Edit /etc/sysconfig/kubelet and change the "KUBELET_EXTRA_ARGS=" line to: KUBELET_EXTRA_ARGS="--fail-swap-on=false"
The file after the change:

[root@k8smaster ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@k8smaster ~]# 

2. Master node deployment (run this section on the master)

2.1 Initialize the Kubernetes master

To suit the Calico network add-on we will install later, define the pod network CIDR as 192.168.0.0/16; the API server advertise address is the master node's IP:

kubeadm init --kubernetes-version=v1.14.0 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.0.0.101

By default the bootstrap token expires after 24 hours, after which it can no longer be used. To make it permanent, add --token-ttl 0:

kubeadm init --kubernetes-version=v1.14.0 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.0.0.101 --ignore-preflight-errors=all --token-ttl 0

Output:

[root@k8smaster ~]# kubeadm init --kubernetes-version=v1.14.0 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.0.0.101
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [10.0.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [10.0.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.005092 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: iiupdj.krewbkmn884mu5jc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.101:6443 --token iiupdj.krewbkmn884mu5jc \
    --discovery-token-ca-cert-hash sha256:5d982be883d5561abcd5f3aba79993ba024fcacae47090f82929f0b8e01c15d3 
[root@k8smaster ~]# 

After a successful initialization, record the last two lines of the output; this is the command used later to join worker nodes:

kubeadm join 10.0.0.101:6443 --token iiupdj.krewbkmn884mu5jc \
    --discovery-token-ca-cert-hash sha256:5d982be883d5561abcd5f3aba79993ba024fcacae47090f82929f0b8e01c15d3 

Note the preflight warning above: the kubelet's cgroup driver and Docker's must match, otherwise containers may fail to start. Check Docker's current driver with `docker info`:

Cgroup Driver: systemd

There are two ways to align them: change Docker, or change the kubelet.

Change Docker:

Create or edit /etc/docker/daemon.json with the following content, then restart Docker (systemctl daemon-reload && systemctl restart docker):

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Change the kubelet:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

vim /etc/sysconfig/kubelet

KUBELET_CGROUP_ARGS="--cgroup-driver=$DOCKER_CGROUPS"    # set $DOCKER_CGROUPS to the driver reported by docker info

KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"


2.2 Configure kubectl for a regular user to manage the cluster

As the init output says ("To start using your cluster, you need to run the following as a regular user"), run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.3 List the pods

Check status with kubectl get pods --all-namespaces. The coredns pods are stuck in Pending because no pod network has been deployed yet:

2.4 Check cluster health

Check component health with kubectl get cs:

2.5 Deploy a pod network

From the pod network add-on documentation:

You must install a pod network add-on so that your pods can communicate with each other.
The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).

Supported pod networks include JuniperContrail/TungstenFabric, Calico, Canal, Cilium, Flannel, Kube-router, Romana, Weave Net, and others.

Here we deploy Calico. Calico is a pure layer-3 solution that integrates with various cloud-native platforms (Docker, Mesos, OpenStack, etc.); on each Kubernetes node it uses the Linux kernel's existing L3 forwarding capability to provide vRouter functionality.

Per the add-on instructions, installing Calico takes just two steps:

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Both commands should apply without errors.

Check progress with kubectl get pods --all-namespaces. Right after deployment some pods are still starting; wait a few minutes.

After a few minutes all containers reach the Running state, and we can move on.

3. Join the worker node (run this section on the worker)

3.1 On the master, list the cluster's nodes; for now there is only the master

kubectl get nodes

3.2 Join the worker node to the cluster

If you have lost the token or the sha256 hash, recover them on the master:

[root@walker-1 kubernetes]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
aa78f6.8b4cafc8ed26c34f   23h       2017-12-26T16:36:29+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Get the sha256 hash of the CA certificate:

[root@walker-1 kubernetes]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
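The same hash pipeline is sketched below against a throwaway self-signed certificate so it can be exercised anywhere; on the master, point `-in` at /etc/kubernetes/pki/ca.crt instead. (Alternatively, on recent kubeadm releases including 1.14, `kubeadm token create --print-join-command` prints a complete fresh join command in one step.)

```shell
# Generate a throwaway self-signed cert to stand in for ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.demo.key -out ca.demo.crt \
  -subj "/CN=demo" -days 1 2>/dev/null
# Extract the public key, DER-encode it, and take its sha256 hex digest.
HASH=$(openssl x509 -pubkey -in ca.demo.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH" | tee join-hash.demo
```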

Then, on the worker node, run the join command that was generated during master initialization:

[root@k8snode01 ~]#  kubeadm join 10.0.0.101:6443 --token iiupdj.krewbkmn884mu5jc \
>     --discovery-token-ca-cert-hash sha256:5d982be883d5561abcd5f3aba79993ba024fcacae47090f82929f0b8e01c15d3

3.3 Check the result on the master

Back on the master, listing all nodes now shows one more:

kubectl get nodes

Note: right after the join command runs on the worker, some pods may briefly show ErrImagePull; checking again a few minutes later, they are fine.

After a few minutes the pod statuses are all OK, and listing the nodes again shows the worker as Ready.

P.S. Kubernetes does not schedule regular pods onto the master node by default. For a single-node cluster, remove the master taint (kubectl taint nodes --all node-role.kubernetes.io/master-) as described at https://v1-12.docs.kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

4. Deploy the dashboard (v1.10.1)

For an introduction to the dashboard and its deployment options, see: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui
Here we deploy version 1.10.1.

4.1 Download the dashboard yaml and change the image source

The yaml pulls its image from Google's registry, so first download the file locally and point it at Aliyun's mirror instead.

[root@k8smaster ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Change the image on line 112 of the file to the Aliyun address:

image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-

4.2 Deploy the dashboard

kubectl create -f kubernetes-dashboard.yaml

4.3 A pod status of Running means the dashboard deployed successfully

 kubectl get pods --all-namespaces
 kubectl get pod --namespace=kube-system -o wide | grep dashboard

The dashboard also creates its own Deployment and Service in the kube-system namespace:

 kubectl get deployment kubernetes-dashboard --namespace=kube-system
 kubectl get service kubernetes-dashboard --namespace=kube-system

4.4 Access the dashboard via NodePort

There are many ways to reach the dashboard; here we configure a NodePort.

4.4.1 Edit the yaml

Edit kubernetes-dashboard.yaml, adding the Service type and nodePort. Note that NodePorts must fall within the default 30000-32767 range.

[root@k8smaster ~]# vim kubernetes-dashboard.yaml  # add the two lines below to the Service section
 type: NodePort     # set the Service type to NodePort
 nodePort: 30006    # the port exposed on the host; pick any port in 30000-32767
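For orientation, the Service section of the v1.10.1 manifest ends up looking roughly like this after the edit (a sketch; surrounding fields follow the upstream manifest and may differ slightly):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort            # added: expose the service on every node
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30006       # added: must be in the default 30000-32767 range
  selector:
    k8s-app: kubernetes-dashboard
```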


4.4.2 Re-apply the modified configuration

 kubectl apply -f kubernetes-dashboard.yaml

4.4.3 The service port is now 30006

 kubectl get service -n kube-system | grep dashboard

4.4.4 Get the dashboard login token

kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o 
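A commonly used pattern is to look up the dashboard's token secret by name and then describe it. The secret name prefix below is an assumption about the default dashboard deployment; the name-extraction step is exercised here against sample `kubectl get secret` output:

```shell
# Sample line as printed by `kubectl -n kube-system get secret`
# (the secret name suffix is random; this one is made up for the demo).
SAMPLE='kubernetes-dashboard-token-x7k2q   kubernetes.io/service-account-token   3      5m'
NAME=$(echo "$SAMPLE" | awk '/kubernetes-dashboard-token/ {print $1}')
echo "$NAME" | tee secret-name.demo
# On the master, the real pipeline would be:
#   kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/ {print $1}'
#   kubectl -n kube-system describe secret <that-name>
```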


4.4.5 Access the dashboard at https://<node-ip>:<nodePort>

After logging in, the UI works.

Postscript: Chrome reports a certificate error when opening the dashboard page; Firefox opens it normally.

With that, the simple cluster setup is complete.

References:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui
https://v1-12.docs.kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
https://www.kclouder.cn/centos7-kubernetes/
https://www.datayang.com/article/45
https://blog.csdn.net/networken/article/details/85607593
