Kubernetes 1.18.10 High-Availability Cluster Deployment

The previous post covered deploying Kubernetes from binaries. In practice most companies do not need a pure binary deployment; as an ops engineer you should still know how to do it, but because it involves so many steps, a kubeadm-based deployment is usually the better choice.

With that said, let's start the hands-on high-availability cluster deployment.

I. Environment Preparation

1. Server plan

Hostname            Role              IP address              Spec
master1             master            192.168.229.51          2 CPU / 2 GB
master2             master            192.168.229.52          2 CPU / 2 GB
master3             master            192.168.229.53          2 CPU / 2 GB
node1               worker            192.168.229.54          2 CPU / 2 GB

Deployment outline:

1. Install and configure Docker        # all nodes
2. Install the packages                # all nodes
3. Install load balancing and HA       # all master nodes
4. Initialize master1                  # master1 only
5. Configure kubectl                   # every node that needs it
6. Deploy the network plugin           # master1 only
7. Join the other master nodes         # remaining master nodes
8. Join the worker nodes               # all worker nodes

2. Host preparation

To save time, these steps are driven by pre-written scripts and executed on every host at once using Xshell's "send to all sessions" feature.

1. Make sure every node has a working yum repository and that the hosts can ping each other
2. Disable the firewall and SELinux
3. Set the hostnames and configure name resolution and SSH trust between hosts
4. Configure time synchronization
(The steps above are omitted here.)
5. Disable swap    # note: this one is mandatory
[root@master1 ~]# swapoff -a
[root@master1 ~]# sed -i 's/.*swap/#&/' /etc/fstab
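A quick way to confirm swap is really off (a sanity check added here, not part of the original steps):

[root@master1 ~]# free -m | grep -i swap        # the Swap line should show 0 total / 0 used
[root@master1 ~]# grep swap /etc/fstab          # the swap entry should now start with '#'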

6. Configure kernel parameters
[root@master1 ~]# vim /etc/sysctl.d/kubernetes.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
[root@master1 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf      # apply and check
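Note: the two bridge-nf-call settings only exist when the br_netfilter kernel module is loaded. If sysctl -p complains about missing keys, load the module first and re-apply (an extra precaution, not in the original steps):

[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# lsmod | grep br_netfilter                    # confirm the module is loaded
[root@master1 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf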
7. Load the IPVS modules
# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
[root@master1 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master1 ~]# /etc/sysconfig/modules/ipvs.modules
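If the script ran correctly, the modules now show up in lsmod (verification only):

[root@master1 ~]# lsmod | egrep 'ip_vs|nf_conntrack_ipv4'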
8. Upgrade the kernel (skip this step if your kernel is already recent)
[root@master1 ~]# yum -y update kernel 
[root@master1 ~]# shutdown -r now          # reboot

3. Install and Configure Docker

Note: run on all nodes.


1. Configure the Docker yum repository
[root@master1 ~]# cd /etc/yum.repos.d/   
[root@master1 yum.repos.d]# curl -O http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


2. Install Docker
[root@master1 yum.repos.d]# yum -y install docker-ce-18.06.0.ce 

3. Configure Docker
[root@master1 yum.repos.d]#  mkdir /etc/docker
[root@master1 yum.repos.d]#  vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}

4. Start Docker, enable it at boot, and check its status
[root@master1 yum.repos.d]# systemctl start docker     
[root@master1 yum.repos.d]# systemctl enable docker  &&  systemctl status docker
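Because daemon.json switches Docker to the systemd cgroup driver, it is worth confirming the setting took effect (optional check):

[root@master1 yum.repos.d]# docker info | grep -i 'cgroup driver'      # should print: Cgroup Driver: systemd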

All of the steps above are required on every machine. A convenient shortcut is to test on one host first, then scp the files to the others once everything works.

Docker is only considered healthy once its status shows active (running).


4. Install kubeadm, kubelet, kubectl and ipvsadm

Note: install on all nodes.


1. Configure the Kubernetes yum repository
[root@master1 ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

2. Install the packages; list the available versions and pick a recent one
[root@master1 ~]# yum makecache
[root@master1 ~]# yum list --showduplicates | egrep kubeadm     # list all available kubeadm versions
Install version 1.18.10-0:
[root@master1 ~]# yum install -y kubeadm-1.18.10-0 kubelet-1.18.10-0 kubectl-1.18.10-0 ipvsadm

3. Enable kubelet at boot
[root@master1 ~]# systemctl enable kubelet
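Do not worry if kubelet is not running cleanly at this point: as the kubeadm init phases below explain, kubelet cannot start properly until kubeadm init generates /var/lib/kubelet/config.yaml, so a restart loop before the init is expected. Enabled is all we need for now:

[root@master1 ~]# systemctl status kubelet      # 'enabled' is enough; 'activating (auto-restart)' is normal before init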


Run all of the steps above on every node.


5. Install load balancing and VIP failover

Run on all master nodes.

A Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one active instance while the others stand by. kube-apiserver can run as multiple active instances, but the other components need a single access address for it, and that address must be highly available.

This deployment uses keepalived + haproxy to provide a highly available VIP and load balancing for kube-apiserver. keepalived provides the VIP through which kube-apiserver is exposed; haproxy listens on the VIP and proxies to all kube-apiserver instances, adding health checks and load balancing. kube-apiserver listens on port 6443, so to avoid a conflict haproxy must listen on a different port; in this setup it is 6444.
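For reference, the configuration that the wise2c/haproxy-k8s container builds from the environment variables passed in the script below is roughly equivalent to the hand-written haproxy.cfg sketched here (an illustrative approximation, not taken from the image itself):

# haproxy.cfg (sketch): plain TCP proxy in front of the three kube-apiservers
defaults
    mode            tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s-apiserver
    bind *:6444                         # haproxy port, must differ from the apiserver's 6443
    default_backend k8s-masters

backend k8s-masters
    balance roundrobin
    server master1 192.168.229.51:6443 check
    server master2 192.168.229.52:6443 check
    server master3 192.168.229.53:6443 check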

Note: run on every master node.
1. Create a startup script for haproxy and keepalived (the two are combined into one script to save time)
[root@master1 ~]# vim haproxy.sh     
#!/bin/bash
MasterIP1=192.168.229.51
MasterIP2=192.168.229.52
MasterIP3=192.168.229.53
MasterPort=6443                   # apiserver port
docker run -d --restart=always --name haproxy-k8s -p 6444:6444 \
           -e MasterIP1=$MasterIP1 \
           -e MasterIP2=$MasterIP2 \
           -e MasterIP3=$MasterIP3 \
           -e MasterPort=$MasterPort  wise2c/haproxy-k8s

VIRTUAL_IP=192.168.229.100         # VIP
INTERFACE=ens33                   # NIC name
NETMASK_BIT=24
CHECK_PORT=6444                   # haproxy port
RID=10
VRID=160
MCAST_GROUP=224.0.0.18
docker run -itd --restart=always --name=keepalived-k8s \
           --net=host --cap-add=NET_ADMIN \
           -e VIRTUAL_IP=$VIRTUAL_IP \
           -e INTERFACE=$INTERFACE \
           -e NETMASK_BIT=$NETMASK_BIT \
           -e CHECK_PORT=$CHECK_PORT \
           -e RID=$RID -e VRID=$VRID \
           -e MCAST_GROUP=$MCAST_GROUP  wise2c/keepalived-k8s


[root@master2 ~]# sh haproxy.sh    # run the script (on every master)


Tests:
1) On each machine, check that both containers (haproxy, keepalived) are running
2) On each machine, check that port 6444 is listening
3) On the machine that holds the VIP, stop the haproxy or keepalived container and confirm that the VIP fails over

[root@master1 ~]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      7107/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      7422/master         
tcp        0      0 192.168.229.51:10010    0.0.0.0:*               LISTEN      19742/docker-contai 
tcp6       0      0 :::6444                 :::*                    LISTEN      19906/docker-proxy  
tcp6       0      0 :::22                   :::*                    LISTEN      7107/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN
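Two quick commands cover the first and third checks as well (the netstat output above already covers the second):

[root@master1 ~]# docker ps | egrep 'haproxy|keepalived'
[root@master1 ~]# ip a show ens33 | grep 192.168.229.100       # only the host that currently holds the VIP prints a line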

(Screenshot: the container startup flow driven by the script.)
Run ip a to see which host currently holds the VIP:
(Screenshot: ip a output showing the VIP.)

Failover test:

[root@master1 ~]# systemctl stop docker       # stop Docker on master1

After Docker is stopped on master1, the VIP successfully fails over to master3.
At this point the load-balancing / HA layer is in place.

II. Initialize Master1

Run these steps on the master1 node only.


1. Create the init configuration file on master1
[root@master1 ~]# mkdir k8s
[root@master1 ~]# cd k8s/
Dump the default configuration into init.yml:
[root@master1 k8s]# kubeadm config print init-defaults > init.yml
W1123 23:49:59.941018   42462 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@master1 k8s]# ls
init.yml
[root@master1 k8s]# vim init.yml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.229.51         # change this to the local host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.229.100:6444"    # VIP:PORT
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # use a China-local mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.18.10           # version number
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16           # pod subnet; must match the Flannel config
scheduler: {}

---                   # append the extra configuration we need
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

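After the cluster is initialized in the next step, you can confirm that kube-proxy really picked up IPVS mode from this extra section (optional check; the ipvsadm output near the end of this article is the other telltale sign):

[root@master1 k8s]# kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'     # should show: mode: ipvs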



2. Initialize master1. This step takes a while (up to about 20 minutes) and is the one most likely to go wrong; if it fails, troubleshoot it step by step.
# initialize master1 and also write the output to kubeadm-init.log
[root@master1 k8s]# kubeadm init --config=init.yml --upload-certs |tee kubeadm-init.log
W1123 23:59:10.616005   45231 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.229.51 192.168.229.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.229.51 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.229.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W1124 00:05:32.746588   45231 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1124 00:05:32.747845   45231 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.554886 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
6283c77544fbc363924aa80fdc966b7d5a101611c2f765ab8c87206e67b4867c
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.229.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e0202c793d8972d0d42250d042ba031c47f720039f68df994472f377274b9931 \
    --control-plane --certificate-key 6283c77544fbc363924aa80fdc966b7d5a101611c2f765ab8c87206e67b4867c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.229.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e0202c793d8972d0d42250d042ba031c47f720039f68df994472f377274b9931 


-----------------------------------------------------------------
kubeadm init performs the following main steps:
[init]: initialize with the specified version
[preflight]: run pre-flight checks and pull the required Docker images
[kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; kubelet cannot start without it, which is why kubelet does not actually start successfully before the init.
[certificates]: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki.
[kubeconfig]: generate the kubeconfig files, stored under /etc/kubernetes; the components use them to talk to each other.
[control-plane]: install the master components from the YAML files under /etc/kubernetes/manifests.
[etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
[wait-control-plane]: wait for the master components deployed as static pods to start.
[apiclient]: check the health of the master components.
[upload-config]: store the configuration that was used.
[kubelet]: configure the kubelets via a ConfigMap.
[patchnode]: record CNI information on the node via annotations.
[mark-control-plane]: label the current node with the master role and taint it NoSchedule, so that ordinary pods are not scheduled onto master nodes by default.
[bootstrap-token]: generate the token; write it down, it is used later when adding nodes with kubeadm join.
[addons]: install the CoreDNS and kube-proxy add-ons.


### The init output above contains these three follow-up commands; copy them for later
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Configure kubectl
Whether on a master or a worker node, kubectl has to be configured before it can be used. There are two ways to do it.
Option 1: via a kubeconfig file (this is the option we use here)
[root@master1 k8s]#   mkdir -p $HOME/.kube
[root@master1 k8s]#   cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 k8s]#   chown $(id -u):$(id -g) $HOME/.kube/config


Option 2: via an environment variable
# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
# source ~/.bashrc
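Optionally, kubectl shell completion makes the following steps more comfortable (not required for the deployment):

# yum -y install bash-completion
# echo 'source <(kubectl completion bash)' >> ~/.bashrc
# source ~/.bashrc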



4. Edit the following manifests and comment out the "- --port=0" line, then wait a little while.
[root@master1 k8s]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml 
#    - --port=0
[root@master1 k8s]# vim /etc/kubernetes/manifests/kube-scheduler.yaml 
#    - --port=0
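Background on this edit: in 1.18, kubectl get cs probes the insecure ports 10251 (kube-scheduler) and 10252 (kube-controller-manager), and --port=0 disables them, which is why the components would otherwise show as unhealthy. The kubelet reloads static pod manifests automatically, so after a short wait you can verify the ports directly (optional check):

[root@master1 k8s]# curl -s http://127.0.0.1:10251/healthz      # scheduler, should print: ok
[root@master1 k8s]# curl -s http://127.0.0.1:10252/healthz      # controller-manager, should print: ok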


5. With kubectl configured, you can now run kubectl commands
[root@master1 k8s]# kubectl get cs           # check component status; "ok" means the component is healthy
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   


[root@master1 k8s]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-72pm6          0/1     Pending   0          31m
coredns-7ff77c879f-w79bx          0/1     Pending   0          31m
etcd-master1                      1/1     Running   0          31m
kube-apiserver-master1            1/1     Running   0          31m
kube-controller-manager-master1   1/1     Running   0          6m13s
kube-proxy-8lr27                  1/1     Running   0          31m
kube-scheduler-master1            1/1     Running   0          3m54s


[root@master1 k8s]# docker ps     
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS                    NAMES
8276a8a597e1        f23f5042d485                                        "kube-scheduler --au…"   4 minutes ago       Up 4 minutes                                 k8s_kube-scheduler_kube-scheduler-master1_kube-system_4839804a32ba90c5bfcf4f5c9e830f67_0
bf266b98b0b0        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 4 minutes ago       Up 4 minutes                                 k8s_POD_kube-scheduler-master1_kube-system_4839804a32ba90c5bfcf4f5c9e830f67_0
6438b977d528        b52d2697baa9                                        "kube-controller-man…"   6 minutes ago       Up 6 minutes                                 k8s_kube-controller-manager_kube-controller-manager-master1_kube-system_84be0c0da0e38f7b48e450e61357d600_0
aa5edf6393d3        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 6 minutes ago       Up 6 minutes                                 k8s_POD_kube-controller-manager-master1_kube-system_84be0c0da0e38f7b48e450e61357d600_0
21aa8171b7d8        2abfb19fb8ae                                        "/usr/local/bin/kube…"   31 minutes ago      Up 31 minutes                                k8s_kube-proxy_kube-proxy-8lr27_kube-system_9b47c744-6b04-4a2e-b0e1-5f27b2c585b1_0
97211857d2ee        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 31 minutes ago      Up 31 minutes                                k8s_POD_kube-proxy-8lr27_kube-system_9b47c744-6b04-4a2e-b0e1-5f27b2c585b1_0
6e7ea637245a        ab3c7c4901f3                                        "kube-apiserver --ad…"   32 minutes ago      Up 32 minutes                                k8s_kube-apiserver_kube-apiserver-master1_kube-system_3c6cddfd5da07da3c9914a8877f04e2f_0
ffc196554fb5        303ce5db0e90                                        "etcd --advertise-cl…"   32 minutes ago      Up 32 minutes                                k8s_etcd_etcd-master1_kube-system_c2b0c06988e4b224b5c12c39f4397031_0
999fdb9c5187        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 32 minutes ago      Up 32 minutes                                k8s_POD_etcd-master1_kube-system_c2b0c06988e4b224b5c12c39f4397031_0
ed7dc05b183e        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 32 minutes ago      Up 32 minutes                                k8s_POD_kube-apiserver-master1_kube-system_3c6cddfd5da07da3c9914a8877f04e2f_0
92a5f89d31f5        wise2c/keepalived-k8s                               "/usr/bin/keepalived…"   2 hours ago         Up About an hour                             keepalived-k8s
7809f8d701e9        wise2c/haproxy-k8s                                  "/docker-entrypoint.…"   2 hours ago         Up About an hour    0.0.0.0:6444->6444/tcp   haproxy-k8s


[root@master1 k8s]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      48082/kubelet       
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      48310/kube-proxy    
tcp        0      0 192.168.229.51:2379     0.0.0.0:*               LISTEN      47922/etcd          
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      47922/etcd          
tcp        0      0 192.168.229.51:2380     0.0.0.0:*               LISTEN      47922/etcd          
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      47922/etcd          
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      58494/kube-controll 
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      59656/kube-schedule 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      7107/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      7422/master         
tcp        0      0 192.168.229.51:10010    0.0.0.0:*               LISTEN      19742/docker-contai 
tcp        0      0 127.0.0.1:43678         0.0.0.0:*               LISTEN      48082/kubelet       
tcp6       0      0 :::10250                :::*                    LISTEN      48082/kubelet       
tcp6       0      0 :::10251                :::*                    LISTEN      59656/kube-schedule 
tcp6       0      0 :::6443                 :::*                    LISTEN      47929/kube-apiserve 
tcp6       0      0 :::10252                :::*                    LISTEN      58494/kube-controll 
tcp6       0      0 :::6444                 :::*                    LISTEN      19906/docker-proxy  
tcp6       0      0 :::10256                :::*                    LISTEN      48310/kube-proxy    
tcp6       0      0 :::22                   :::*                    LISTEN      7107/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      7422/master  

Output like the above means the initialization succeeded and the configuration is complete.



III. Deploy the Network Plugin

Run on the master1 node only.
Kubernetes supports several network solutions; here we use the common Flannel.
Download the kube-flannel.yml file:

[root@master1 k8s]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The images referenced in kube-flannel.yml are pulled from the coreos repository on quay.io and may fail to download, so replace them.

1. Check the current image references
[root@master1 k8s]# grep -i "flannel:" kube-flannel.yml 
        image: quay.io/coreos/flannel:v0.13.1-rc1
        image: quay.io/coreos/flannel:v0.13.1-rc1

2. Replace them (note: the exact command may differ depending on the file version). If the sed command does not match,
   open kube-flannel.yml in vim and replace quay.io/coreos/flannel:v0.13.1-rc1 by hand.
[root@master k8s]# sed -i 's#quay.io/coreos/flannel:v0.13.1-rc1#registry.cn-shenzhen.aliyuncs.com/leedon/flannel:v0.11.0-amd64#' kube-flannel.yml
[root@master1 k8s]# grep -i "flannel:" kube-flannel.yml    # output like this means the replacement succeeded
        image: registry.cn-shenzhen.aliyuncs.com/leedon/flannel:v0.11.0-amd64
        image: registry.cn-shenzhen.aliyuncs.com/leedon/flannel:v0.11.0-amd64
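While editing the file it is also worth confirming that the Network in net-conf.json matches the podSubnet set in init.yml, i.e. 10.244.0.0/16; a mismatch would leave pods without connectivity (quick check):

[root@master1 k8s]# grep '"Network"' kube-flannel.yml      # should print: "Network": "10.244.0.0/16"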


3. Apply the manifest
[root@master1 k8s]# kubectl apply -f kube-flannel.yml     # apply our edited manifest
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

[root@master1 k8s]# kubectl get ds -n kube-system    # output like this means the DaemonSets are running
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel-ds   1         1         1       1            1           <none>                   44s
kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   49m


4. Check the node and pod status again: everything is OK and all the core components are up.
# list the running kube-system pods
[root@master1 k8s]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-72pm6          1/1     Running   0          50m
coredns-7ff77c879f-w79bx          1/1     Running   0          50m
etcd-master1                      1/1     Running   0          50m
kube-apiserver-master1            1/1     Running   0          50m
kube-controller-manager-master1   1/1     Running   0          25m
kube-flannel-ds-mpjt9             1/1     Running   0          103s
kube-proxy-8lr27                  1/1     Running   0          50m
kube-scheduler-master1            1/1     Running   0          22m


[root@master1 k8s]# kubectl get no
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   51m   v1.18.10
##         Ready   means the node is up and running — success



IV. Join the Master Nodes

Run on the other two master nodes.

[root@master1 k8s]# cat kubeadm-init.log    # review the saved init log for the join commands



# run the join on master2 and master3; this also takes quite a while (roughly 20 minutes)
[root@master2 ~] # kubeadm join 192.168.229.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e0202c793d8972d0d42250d042ba031c47f720039f68df994472f377274b9931 \
    --control-plane --certificate-key 6283c77544fbc363924aa80fdc966b7d5a101611c2f765ab8c87206e67b4867c


[root@master3 ~]# kubeadm join 192.168.229.100:6444 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:e0202c793d8972d0d42250d042ba031c47f720039f68df994472f377274b9931 \
>     --control-plane --certificate-key 6283c77544fbc363924aa80fdc966b7d5a101611c2f765ab8c87206e67b4867c


[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master3 ~]# mkdir -p $HOME/.kube
[root@master3 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master3 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# check on master2 and master3; the output should look the same on both
[root@master2 ~]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-72pm6          1/1     Running   0          75m
coredns-7ff77c879f-w79bx          1/1     Running   0          75m
etcd-master1                      1/1     Running   0          75m
etcd-master2                      1/1     Running   0          11m
etcd-master3                      1/1     Running   0          11m
kube-apiserver-master1            1/1     Running   0          75m
kube-apiserver-master2            1/1     Running   2          12m
kube-apiserver-master3            1/1     Running   2          12m
kube-controller-manager-master1   1/1     Running   1          50m
kube-controller-manager-master2   1/1     Running   0          12m
kube-controller-manager-master3   1/1     Running   0          12m
kube-flannel-ds-7gd9v             1/1     Running   2          12m
kube-flannel-ds-fllfk             1/1     Running   2          12m
kube-flannel-ds-mpjt9             1/1     Running   0          27m
kube-flannel-ds-tqqdr             1/1     Running   2          9m
kube-proxy-4jcbb                  1/1     Running   0          12m
kube-proxy-8brl9                  1/1     Running   0          12m
kube-proxy-8lr27                  1/1     Running   0          75m
kube-proxy-g7fkr                  1/1     Running   0          9m
kube-scheduler-master1            1/1     Running   1          48m
kube-scheduler-master2            1/1     Running   0          12m
kube-scheduler-master3            1/1     Running   0          12m


[root@master2 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   



[root@master3 ~]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-72pm6          1/1     Running   0          67m
coredns-7ff77c879f-w79bx          1/1     Running   0          67m
etcd-master1                      1/1     Running   0          68m
etcd-master2                      1/1     Running   0          4m
etcd-master3                      1/1     Running   0          3m59s
kube-apiserver-master1            1/1     Running   0          68m
kube-apiserver-master2            1/1     Running   2          4m56s
kube-apiserver-master3            1/1     Running   2          4m56s
kube-controller-manager-master1   1/1     Running   1          42m
kube-controller-manager-master2   1/1     Running   0          4m56s
kube-controller-manager-master3   1/1     Running   0          4m57s
kube-flannel-ds-7gd9v             1/1     Running   2          4m59s
kube-flannel-ds-fllfk             1/1     Running   2          4m59s
kube-flannel-ds-mpjt9             1/1     Running   0          19m
kube-flannel-ds-tqqdr             1/1     Running   2          88s
kube-proxy-4jcbb                  1/1     Running   0          4m59s
kube-proxy-8brl9                  1/1     Running   0          4m59s
kube-proxy-8lr27                  1/1     Running   0          67m
kube-proxy-g7fkr                  1/1     Running   0          88s
kube-scheduler-master1            1/1     Running   1          40m
kube-scheduler-master2            1/1     Running   0          4m56s
kube-scheduler-master3            1/1     Running   0          4m56s


[root@master3 ~]# kubectl get cs    # success
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   



## On master2 and master3 the same manifests live under /etc/kubernetes/manifests/ (e.g. kube-controller-manager.yaml), so the same --port=0 edit applies there.

Check the cluster state from master1:

[root@master1 k8s]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   77m   v1.18.10
master2   Ready    master   14m   v1.18.10
master3   Ready    master   14m   v1.18.10
node1     Ready    <none>   10m   v1.18.10


Add the worker node

On the node1 machine, run the join command:
[root@node1 yum.repos.d]# kubeadm join 192.168.229.100:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:e0202c793d8972d0d42250d042ba031c47f720039f68df994472f377274b9931
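If the bootstrap token has expired by the time a node joins (its TTL in init.yml is 24h), generate a fresh join command on master1 (standard kubeadm commands, also referenced in the init output above):

[root@master1 ~]# kubeadm token create --print-join-command
# for joining an additional control-plane node, regenerate the certificate key as well:
[root@master1 ~]# kubeadm init phase upload-certs --upload-certs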


You can check the cluster state from any master node:

[root@master2 ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   91m   v1.18.10
master2   Ready    master   28m   v1.18.10
master3   Ready    master   28m   v1.18.10
node1     Ready    <none>   24m   v1.18.10

Check the pods:

[root@master1 k8s]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-72pm6          1/1     Running   0          89m
coredns-7ff77c879f-w79bx          1/1     Running   0          89m
etcd-master1                      1/1     Running   0          90m
etcd-master2                      1/1     Running   0          25m
etcd-master3                      1/1     Running   0          25m
kube-apiserver-master1            1/1     Running   0          90m
kube-apiserver-master2            1/1     Running   2          26m
kube-apiserver-master3            1/1     Running   2          26m
kube-controller-manager-master1   1/1     Running   1          64m
kube-controller-manager-master2   1/1     Running   0          26m
kube-controller-manager-master3   1/1     Running   0          26m
kube-flannel-ds-7gd9v             1/1     Running   2          26m
kube-flannel-ds-fllfk             1/1     Running   2          26m
kube-flannel-ds-mpjt9             1/1     Running   0          41m
kube-flannel-ds-tqqdr             1/1     Running   2          23m
kube-proxy-4jcbb                  1/1     Running   0          26m
kube-proxy-8brl9                  1/1     Running   0          26m
kube-proxy-8lr27                  1/1     Running   0          89m
kube-proxy-g7fkr                  1/1     Running   0          23m
kube-scheduler-master1            1/1     Running   1          62m
kube-scheduler-master2            1/1     Running   0          26m
kube-scheduler-master3            1/1     Running   0          26m

Check the services:

[root@master3 ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   92m

Check the IPVS proxy rules:

[root@master3 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.229.51:6443          Masq    1      1          0         
  -> 192.168.229.52:6443          Masq    1      0          0         
  -> 192.168.229.53:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0         
  -> 10.244.0.3:9153              Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0       

Check the etcd cluster health:

[root@master1 k8s]# kubectl -n kube-system exec etcd-master1 -- etcdctl \
--endpoints=https://192.168.229.51:2379 \
--ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--key-file=/etc/kubernetes/pki/etcd/server.key cluster-health
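Note that the etcd 3.4 image used by Kubernetes 1.18 defaults to the v3 etcdctl API, where the v2-style flags above (--ca-file, cluster-health) are not recognized. If the command errors out, the v3 equivalent should be (an assumed alternative; adjust if your paths differ):

[root@master1 k8s]# kubectl -n kube-system exec etcd-master1 -- etcdctl \
--endpoints=https://192.168.229.51:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key endpoint health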

At this point the kubeadm high-availability cluster deployment is complete!

