The full process of deploying Calico as the Kubernetes CNI network, with a self-signed CA enabled

Original content; please credit the source when reposting.

Author's blog: https://aronligithub.github.io/


kubernetes v1.11 binary deployment series index


Preface

In the previous chapter, the full binary deployment of kubernetes v1.11 for production, the cluster's nodes were left in the NotReady state because the Calico CNI network plugin had not been installed yet. In this chapter I walk through deploying Calico as the cluster's network forwarding service.



The debate over choosing a network forwarding component for a kubernetes cluster

For over a year in production I used flannel as the network forwarding service between the cluster's docker containers, and honestly it never felt badly wrong. But after weighing the performance differences between flannel and Calico, I decided to switch to Calico.

So why do flannel and Calico differ in performance?

That comes down to my usage scenarios and to how flannel and Calico each work under the hood.

First, my typical production environment

Our production environments are either bare-metal centos7 machines, or centos7 servers bought directly from Alibaba Cloud or Tencent Cloud.

The most troublesome case is a customer-provided private cloud: a few centos7 VMs built on vsphere or similar. Servers like these can suffer fairly frequent network jitter, and there is nothing you can do about it; when it happens you keep your composure under the steady pressure from the front line and quietly trace the cluster anomaly. In my case the etcd heartbeat logs showed severe network latency, which destabilized the etcd cluster and in turn the whole kubernetes cluster.

What flannel does in an environment like this

[Figure: flannel architecture diagram]

In this kind of server environment, flannel is essentially limited to vxlan mode: it encapsulates traffic in layer-2 frames, forwards them through the flannel gateway, then decapsulates on the far side to reach the docker container on the other host.
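
If you want to see the vxlan overlay for yourself on a flannel node, here is a quick sketch (the device name flannel.1, the 8472 port, and the 50-byte vxlan overhead are the usual defaults; your interface name and MTU may differ):

# Show the vxlan device flannel creates and its reduced MTU
ip -d link show flannel.1
# typical output mentions: vxlan id 1 ... dstport 8472
# and mtu 1450 on a 1500-MTU NIC (50 bytes lost to vxlan headers)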

This encapsulate-and-forward cycle is quite CPU-hungry on the hosts, and forwarding efficiency is comparatively low (relatively speaking, of course; I ran it in production for over a year and it was fine for undemanding workloads).

Of course, kubernetes users familiar with flannel will point out that host-gw mode is far more efficient.

But host-gw only works when all the servers sit on the same layer-2 segment, effectively behind one switch.

And if your servers are on Alibaba Cloud, Tencent Cloud, or similar platforms, you are stuck with vxlan: those VMs are generally not behind a single switch, and in my testing host-gw simply does not work there.
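
A quick way to check whether host-gw is even an option between two nodes (a sketch using this article's node IPs; any routed hop rules host-gw out):

# From server81: a direct layer-2 path shows no "via <gateway>" hop
ip route get 172.16.5.86
# e.g. "172.16.5.86 dev eth0 src 172.16.5.81"  -> same L2 segment, host-gw can work
# e.g. "172.16.5.86 via 172.16.5.1 dev eth0"   -> routed, host-gw will not work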
On top of that, flannel has another drawback:

flannel depends heavily on the firewall. If you run the docker-ce build, iptables -L will show docker's forwarding rules defaulting to Drop, and you have to change them to Accept by hand before traffic will forward.

Whereas if you install docker with yum install -y docker, the firewall does not default docker forwarding to Drop.

In short, flannel is heavily dependent on the firewall's forwarding rules, while Calico is not.
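
For reference, the manual fix on a docker-ce host looks like this (a sketch; note it does not persist across docker restarts unless you script it):

# docker-ce sets the FORWARD chain policy to DROP, which silently breaks
# flannel's cross-host pod traffic; open it up:
iptables -P FORWARD ACCEPT

# confirm the policy took effect
iptables -S FORWARD | head -1   # expect "-P FORWARD ACCEPT"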


How Calico works, according to the official docs

[Figure: Calico architecture diagram, from the official docs]

For the official Calico introduction, click here

Calico uses the Linux kernel's native routing and iptables firewall capabilities. All traffic entering or leaving a container, virtual machine, or host traverses these kernel rules before being routed to its destination.
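
Once Calico is running you can see both halves of this on any node (a sketch; the cali- prefix on chain names is Calico's convention):

# Calico's policy lives in ordinary iptables chains prefixed cali-
iptables -S | grep '^-N cali-' | head -5

# and pod reachability is plain kernel routing, no userspace proxying
ip route | head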

calicoctl: lets you implement advanced policy and networking from a simple command-line interface.

orchestrator plugins: provide tight integration and synchronization with a range of popular orchestrators.

key/value store: holds Calico's policy and network configuration state.

calico/node: runs on each host, reads the relevant policy and network configuration from the key/value store, and implements it in the Linux kernel.

Dikastes/Envoy: optional Kubernetes components that secure workload-to-workload communication with mutual TLS authentication and enforce application-layer policy.

How does Calico differ from flannel?

If this is your first look at how Calico works, it probably still feels hazy. Let me spell out a few points plainly:

  • Both Calico and flannel need access to an etcd cluster to store their data
    • flannel requires a virtual network to be created in etcd up front; docker IP subnets are later allocated out of it
    • Calico needs no pre-created virtual network in etcd; you just start it, because the subnets are written directly into the linux routing table
  • Does Calico have any drawback that flannel lacks?
    • Calico stores its automatically assigned pod IPs in etcd. If you need to upgrade kubernetes and stop a node's services, and you do not delete that node's pods first, the leftover pod IPs remain in the etcd data; when you start back up, those stale records can prevent the leftover pods from starting.
    • I have not run into this with flannel so far.
  • Calico and flannel forward traffic with different protocols
    • Calico forwards based on the BGP protocol, while flannel forwards by encapsulating data frames (see the routing-table sketch below).
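
To make the last point concrete, here is what the routing table looks like on a Calico node (a hedged example; the exact CIDRs depend on your pool, and with IPIP enabled, as in the manifest later in this chapter, routes point at the tunl0 device):

# Routes to other nodes' pod subnets are kernel routes learned via bird (BGP)
ip route | grep bird
# e.g. 10.1.0.64/26 via 172.16.5.86 dev tunl0 proto bird onlink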

That is about as much theory as is useful here; for us IT folk, all the theory in the world is empty without practice, so let's look at the official quick start.



Calico quick start on Kubernetes

Click through to Calico's official quick start documentation


In the docs, locate the section on hosted installs for kubernetes

Click through to the Calico hosted install guide


As the official docs show, Calico offers several hosted install options. Since we deployed with binaries, we can rule out kubeadm right away, as it is not suited to production.

What is the difference between the Standard Hosted install and the Kubernetes Datastore install?

  • Standard Hosted: Calico and kubernetes share one etcd cluster for their data
  • Kubernetes Datastore: Calico and kubernetes each use their own separate etcd cluster

The most common choice, and the one I run in production, is the Standard Hosted install: maintaining a single etcd cluster is easier to manage and lowers server maintenance costs.

So let's walk through installing and deploying with the Standard Hosted method.
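
Since Calico will share the cluster's etcd, it is worth confirming etcd is healthy before pointing Calico at it. A sketch using etcdctl's v2-style flags and the TLS paths from the earlier etcd chapter (with ETCDCTL_API=3 the flags become --cacert/--cert/--key and the subcommand is endpoint health):

etcdctl --endpoints=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379 \
    --ca-file=/etc/etcd/etcdSSL/ca.pem \
    --cert-file=/etc/etcd/etcdSSL/etcd.pem \
    --key-file=/etc/etcd/etcdSSL/etcd-key.pem \
    cluster-health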


Standard Hosted install

1. Check the current state of the kubernetes installation

As deployed in the earlier chapters, the kubernetes nodes are NotReady, as shown below:

[root@server81 install_RAS_node]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-2pqfLLUo8vPQbGUyXqtN9AdDvDIymj9UrQynD59AgPA   7m        kubelet-bootstrap   Approved,Issued
node-csr-bYaJfolaFPO5HLXt96A7PHK8aKSGTQEQQdzl9lmHOOM   13m       kubelet-bootstrap   Approved,Issued
node-csr-on8Qaq30OUUTstM5rKb17OeWQOJV9s528yk_VSb-XzM   11m       kubelet-bootstrap   Approved,Issued
[root@server81 install_RAS_node]# 
[root@server81 install_RAS_node]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    7m        v1.11.0
172.16.5.86   NotReady   <none>    7m        v1.11.0
172.16.5.87   NotReady   <none>    7m        v1.11.0
[root@server81 install_RAS_node]# 
[root@server81 install_RAS_node]# 

Why are the nodes NotReady? Let's look at the logs:

[root@server81 install_RAS_node]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    7m        v1.11.0
172.16.5.86   NotReady   <none>    7m        v1.11.0
172.16.5.87   NotReady   <none>    7m        v1.11.0
[root@server81 install_RAS_node]# 
[root@server81 install_RAS_node]# journalctl -f
-- Logs begin at Tue 2018-07-31 18:29:27 HKT. --
Sep 02 20:29:51 server81 kubelet[14005]: W0902 20:29:51.700528   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:29:51 server81 kubelet[14005]: E0902 20:29:51.700754   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:29:56 server81 kubelet[14005]: W0902 20:29:56.705337   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:29:56 server81 kubelet[14005]: E0902 20:29:56.706274   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:01 server81 kubelet[14005]: W0902 20:30:01.710599   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:01 server81 kubelet[14005]: E0902 20:30:01.711297   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:06 server81 kubelet[14005]: W0902 20:30:06.715059   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:06 server81 kubelet[14005]: E0902 20:30:06.715924   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:11 server81 kubelet[14005]: W0902 20:30:11.720491   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:11 server81 kubelet[14005]: E0902 20:30:11.721574   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:16 server81 kubelet[14005]: W0902 20:30:16.726016   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:16 server81 kubelet[14005]: E0902 20:30:16.726467   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
^C
[root@server81 install_RAS_node]# 

Note the warning and error messages:

cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d

kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

So the kubernetes CNI plugin is unusable. Let's install Calico step by step.
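
The kubelet is looking for a CNI config under /etc/cni/net.d, which is still empty; later, Calico's install-cni container will write a 10-calico.conflist there. A quick check (sketch):

# Empty for now -- populated by Calico's install-cni container later
ls -l /etc/cni/net.d/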


2. Download the RBAC file as described in the official docs

Official docs: https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/hosted


RBAC
If you are deploying Calico on an RBAC-enabled cluster, you should first apply the ClusterRole and ClusterRoleBinding specs:

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml

First, review the current version of the RBAC yaml file

yaml file URL: https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml

# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
rules:
  - apiGroups:
    - ""
    - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
      - nodes
    verbs:
      - watch
      - list
  - apiGroups:
    - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

That is the yaml we need to install; let's apply it.


Create the Calico RBAC objects on the server

[root@server81 install_Calico]# wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
--2018-09-02 20:54:35--  https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 159.65.5.64
Connecting to docs.projectcalico.org (docs.projectcalico.org)|159.65.5.64|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1247 (1.2K) [application/x-yaml]
Saving to: ‘rbac.yaml’

100%[==================================================================================================================================================>] 1,247       --.-K/s   in 0s      

2018-09-02 20:54:36 (48.3 MB/s) - ‘rbac.yaml’ saved [1247/1247]

[root@server81 install_Calico]# 
[root@server81 install_Calico]# ls
calico.yaml  config_etcd_https.sh  rbac.yaml  simple
[root@server81 install_Calico]# 
[root@server81 install_Calico]# kubectl apply -f rbac.yaml 
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
[root@server81 install_Calico]# 

Just download the yaml and apply it as-is; no modification is needed.
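
To confirm the RBAC objects landed (sketch):

kubectl get clusterrole,clusterrolebinding | grep calico
# the matching ServiceAccounts are created later by calico.yaml itself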


3. Install Calico


Installing Calico

To install Calico:

  1. Download calico.yaml
  2. Configure etcd_endpoints in the provided ConfigMap to match your etcd cluster.

Then simply apply the manifest:

kubectl apply -f calico.yaml

Note

Before running the command above, make sure the provided ConfigMap points at the location of your etcd cluster.


Run the Calico install on the server

[root@server81 install_Calico]# wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
--2018-09-02 21:18:32--  https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 159.65.5.64
Connecting to docs.projectcalico.org (docs.projectcalico.org)|159.65.5.64|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11829 (12K) [application/x-yaml]
Saving to: ‘calico.yaml.1’

100%[==================================================================================================================================================>] 11,829      32.3KB/s   in 0.4s   

2018-09-02 21:18:34 (32.3 KB/s) - ‘calico.yaml.1’ saved [11829/11829]

[root@server81 install_Calico]# 
[root@server81 install_Calico]# ls
calico.yaml  calico.yaml.1  config_etcd_https.sh  rbac.yaml  simple
[root@server81 install_Calico]# 
[root@server81 install_Calico]# vim calico.yaml.1 
[root@server81 install_Calico]# 
[root@server81 install_Calico]# cat calico.yaml.1 
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
# This manifest includes the following component versions:
#   calico/node:v3.1.3
#   calico/cni:v3.1.3
#   calico/kube-controllers:v3.1.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://127.0.0.1:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "log_level": "info",
          "mtu": 1500,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
        # Make sure calico/node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v3.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "1440"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.1.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: quay.io/calico/kube-controllers:v3.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,profile,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
[root@server81 install_Calico]# 

Edit the etcd endpoints in the calico install yaml

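The edit is a single line in the calico-config ConfigMap; a scriptable equivalent of doing it by hand in vim (a sketch using this article's etcd endpoints):

sed -i 's#etcd_endpoints: "http://127.0.0.1:2379"#etcd_endpoints: "https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379"#' calico.yaml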

Point the images in the calico install yaml at the local registry

[Screenshots: the image fields for calico-node, install-cni, and calico-kube-controllers in calico.yaml]

The yaml file references three images:

image: quay.io/calico/node:v3.1.3
image: quay.io/calico/cni:v3.1.3
image: quay.io/calico/kube-controllers:v3.1.3

Change all three image addresses to the local registry address:

image: 172.16.5.81:5000/calico/node:v3.1.3
image: 172.16.5.81:5000/calico/cni:v3.1.3
image: 172.16.5.81:5000/calico/kube-controllers:v3.1.3

Why switch to local images?
Because most of these upstream images need a proxy to pull and download slowly; it is better to pull them into a local registry first.

Rewrite the image addresses to the local registry address (the vim substitution):

:%s/quay.io/172.16.5.81:5000/g

[Screenshot: batch-replacing the registry address in vim]
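
The same substitution, non-interactively (sketch):

sed -i 's/quay.io/172.16.5.81:5000/g' calico.yaml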

With the edits done, the remaining work is to pull the original images (over a proxy) and push them to my local registry.


4. Set up a local registry

I will not go into the details of building the local registry in this chapter; my script that builds the registry and pushes images runs as follows:

[root@server81 registry]# ./install_docker_registry.sh 
2b0fb280b60d: Loading layer [==================================================>]  5.058MB/5.058MB
05d392f56700: Loading layer [==================================================>]  7.802MB/7.802MB
32f085a1e7bb: Loading layer [==================================================>]  22.79MB/22.79MB
e23ed9242cd7: Loading layer [==================================================>]  3.584kB/3.584kB
2bf5fdee0818: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: registry:2
Error response from daemon: No such container: registry
Error: No such container: registry
a806b2b5d1838918e50dd768d6ed9a8c44e07823f67fd2c0650f91fe550dda81
e17133b79956: Loading layer [==================================================>]  744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause-amd64:3.1
Redirecting to /bin/systemctl restart docker.service
The push refers to repository [172.16.5.81:5000/pause-amd64]
e17133b79956: Pushed 
3.1: digest: sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d size: 527
[root@server81 registry]# 
[root@server81 registry]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
a806b2b5d183        registry:2          "/entrypoint.sh /etc…"   13 seconds ago      Up 10 seconds       0.0.0.0:5000->5000/tcp   registry
[root@server81 registry]# 
[root@server81 registry]# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
172.16.5.81:5000/pause-amd64   3.1                 da86e6ba6ca1        8 months ago        742kB
k8s.gcr.io/pause-amd64         3.1                 da86e6ba6ca1        8 months ago        742kB
registry                       2                   751f286bc25e        13 months ago       33.2MB
[root@server81 registry]# 
[root@server81 registry]# cat restartRegistry.sh 
docker stop registry
docker rm registry
docker run -d -p 5000:5000 --name=registry --restart=always \
  --privileged=true \
  --log-driver=none \
  -v /root/registry/registrydata:/var/lib/registry \
  registry:2
[root@server81 registry]# 

Next, push the calico images into the registry.

[root@server81 registry]# ls
catImage.sh  image  install_docker_registry.sh  networkbox.tar  pause-amd64.tar  registry.tar  restartRegistry.sh
[root@server81 registry]# 
[root@server81 registry]# cd image/
[root@server81 image]# ls
calico  coredns
[root@server81 image]# 
"1.进入我之前下载好的calico镜像文件夹目录:"
[root@server81 image]# cd calico/  
[root@server81 calico]# ls
cni.tar  controllers.tar  node.tar
[root@server81 calico]# 
"2.分别将三个Calico镜像load进来:"
[root@server81 calico]# docker load -i cni.tar 
0314be9edf00: Loading layer [==================================================>]   1.36MB/1.36MB
15db169413e5: Loading layer [==================================================>]  28.05MB/28.05MB
4252efcc5013: Loading layer [==================================================>]  2.818MB/2.818MB
76cf2496cf36: Loading layer [==================================================>]   3.03MB/3.03MB
91d3d3a16862: Loading layer [==================================================>]  2.995MB/2.995MB
18a58488ba3b: Loading layer [==================================================>]  3.474MB/3.474MB
8d8197f49da2: Loading layer [==================================================>]  27.34MB/27.34MB
7520364e0845: Loading layer [==================================================>]  9.216kB/9.216kB
b9d064622bd6: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: 172.16.5.81:5000/calico/cni:v3.1.3
[root@server81 calico]# 
[root@server81 calico]# docker load -i controllers.tar 
cd7100a72410: Loading layer [==================================================>]  4.403MB/4.403MB
2580685bfb60: Loading layer [==================================================>]  50.84MB/50.84MB
Loaded image: 172.16.5.81:5000/calico/kube-controllers:v3.1.3
[root@server81 calico]# 
[root@server81 calico]# docker load -i node.tar 
ddc4cb8dae60: Loading layer [==================================================>]   7.84MB/7.84MB
77087b8943a2: Loading layer [==================================================>]  249.3kB/249.3kB
c7227c83afaf: Loading layer [==================================================>]  4.801MB/4.801MB
2e0e333a66b6: Loading layer [==================================================>]  231.8MB/231.8MB
Loaded image: 172.16.5.81:5000/calico/node:v3.1.3
[root@server81 calico]# 
"3.可以看出我已经将镜像的仓库地址都修改tag了,那么下面直接push就可以了"
[root@server81 calico]# docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
172.16.5.81:5000/calico/node               v3.1.3              7eca10056c8e        3 months ago        248MB
172.16.5.81:5000/calico/kube-controllers   v3.1.3              240a82836573        3 months ago        55MB
172.16.5.81:5000/calico/cni                v3.1.3              9f355e076ea7        3 months ago        68.8MB
172.16.5.81:5000/pause-amd64               3.1                 da86e6ba6ca1        8 months ago        742kB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        8 months ago        742kB
registry                                   2                   751f286bc25e        13 months ago       33.2MB
[root@server81 calico]# 
"4.分开push三个Calico镜像至本地仓库"
[root@server81 calico]# docker push 172.16.5.81:5000/calico/node:v3.1.3
The push refers to repository [172.16.5.81:5000/calico/node]
2e0e333a66b6: Pushed 
c7227c83afaf: Pushed 
77087b8943a2: Pushed 
ddc4cb8dae60: Pushed 
cd7100a72410: Pushed 
v3.1.3: digest: sha256:9871f4dde9eab9fd804b12f3114da36505ff5c220e2323b7434eec24e3b23ac5 size: 1371
[root@server81 calico]# 
[root@server81 calico]# docker push 172.16.5.81:5000/calico/kube-controllers:v3.1.3
The push refers to repository [172.16.5.81:5000/calico/kube-controllers]
2580685bfb60: Pushed 
cd7100a72410: Mounted from calico/node 
v3.1.3: digest: sha256:2553b273c3fc3afbf624804f0a47fca452d53d97c2b3c8867c1fe629855ea91f size: 740
[root@server81 calico]# 
[root@server81 calico]# docker push 172.16.5.81:5000/calico/cni:v3.1.3
The push refers to repository [172.16.5.81:5000/calico/cni]
b9d064622bd6: Pushed 
7520364e0845: Pushed 
8d8197f49da2: Pushed 
18a58488ba3b: Pushed 
91d3d3a16862: Pushed 
76cf2496cf36: Pushed 
4252efcc5013: Pushed 
15db169413e5: Pushed 
0314be9edf00: Pushed 
v3.1.3: digest: sha256:0b4eb34f955f35f8d1b182267f7ae9e2be83ca6fe1b1ade63116125feb8d07b9 size: 2207
[root@server81 calico]# 
"5.查看本地仓库现在有哪些镜像,可以看出calico的镜像都已经push上仓库了"
[root@server81 registry]# ./catImage.sh 
{"repositories":["calico/cni","calico/kube-controllers","calico/node","pause-amd64"]}
[root@server81 registry]# 
[root@server81 registry]# cat catImage.sh 
curl http://localhost:5000/v2/_catalog
[root@server81 registry]# 

Good, all the images are in the local registry; the next step is to configure the TLS certificates Calico uses to reach the etcd cluster.


5. Configure the TLS certificates Calico uses to access the HTTPS etcd cluster

First, let's look at how Calico's TLS certificates need to be configured:

[Screenshots of the four edits:
  • the container path the manifest expects for the etcd TLS files (/calico-secrets)
  • the TLS entries in the Secret section commented out
  • calico-node mounting the TLS files via hostPath
  • calico-kube-controllers mounting the TLS files via hostPath]

With the certificate mounts adjusted in those two workloads, what remains is tuning the other kubernetes-specific parameters.


6. Set the kubernetes cluster's pod IP range

[Screenshot: editing CALICO_IPV4POOL_CIDR] Change CALICO_IPV4POOL_CIDR from the default 192.168.0.0/16 to the cluster's pod network, 10.1.0.0/24 in this deployment; per the comment in the manifest, it must fall within kube-controller-manager's --cluster-cidr.
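
A quick consistency check between the two settings (a sketch; where the --cluster-cidr flag lives depends on how your kube-controller-manager was installed, so the path below is only a guess):

grep -A 2 CALICO_IPV4POOL_CIDR calico.yaml
grep -r -- "--cluster-cidr" /etc/kubernetes/ 2>/dev/null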

7. Copy the etcd TLS files into the directory Calico will use

"1.查看当前etcd集群的TLS证书"
[root@server81 install_Calico]# ls /etc/etcd/etcd
etcd.conf      etcd.conf.bak  etcdSSL/       
[root@server81 install_Calico]# ls /etc/etcd/etcdSSL/
ca-config.json  ca-csr.json  ca.pem    etcd-csr.json  etcd.pem
ca.csr          ca-key.pem   etcd.csr  etcd-key.pem
[root@server81 install_Calico]# 
"2.编写自动配置calico证书脚本"
[root@server81 install_Calico]# cat config_etcd_https.sh 
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
etcdInfo=/opt/ETCD_CLUSER_INFO

kubernetesDir=/etc/kubernetes
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS

etcdTLSDir=/etc/etcd/etcdSSL
etcdCaPem=$etcdTLSDir/ca.pem
etcdCaKeyPem=$etcdTLSDir/ca-key.pem
etcdPem=$etcdTLSDir/etcd.pem
etcdKeyPem=$etcdTLSDir/etcd-key.pem

calicoTLSDir=/etc/calico/calicoTLS

ETCD_ENDPOINT="`cat $etcdInfo | grep ETCD_ENDPOINT_2379 | cut -f 2 -d "="`"

## function 
function copy_etcd_ca(){
  mkdir -p $calicoTLSDir
  cp $etcdCaPem $calicoTLSDir/etcd-ca
  cp $etcdKeyPem $calicoTLSDir/etcd-key
  cp $etcdPem $calicoTLSDir/etcd-cert
  ls $calicoTLSDir
}

copy_etcd_ca
[root@server81 install_Calico]# 
[root@server81 install_Calico]# ./config_etcd_https.sh 
etcd-ca  etcd-cert  etcd-key
[root@server81 install_Calico]# 
"3.可以看到最后calicoTLS的三个证书文件"
[root@server81 install_Calico]# ls /etc/calico/calicoTLS/
etcd-ca  etcd-cert  etcd-key
[root@server81 install_Calico]# 
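
As a sanity check, the copied client certificate should verify against the copied CA (sketch):

openssl verify -CAfile /etc/calico/calicoTLS/etcd-ca /etc/calico/calicoTLS/etcd-cert
# expect: /etc/calico/calicoTLS/etcd-cert: OK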

8. The final calico.yaml after all the edits

[root@server81 install_Calico]# cat calico.yaml
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
# This manifest includes the following component versions:
#   calico/node:v3.1.3
#   calico/cni:v3.1.3
#   calico/kube-controllers:v3.1.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  #etcd_endpoints: "http://127.0.0.1:2379"
  etcd_endpoints: "https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "log_level": "info",
          "mtu": 1500,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
        # Make sure calico/node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: 172.16.5.81:5000/calico/node:v3.1.3
          env:
            ## Bind the detection to a NIC, otherwise some nodes report that no interface can be found
            - name: IP_AUTODETECTION_METHOD
              #value: interface=eno4    ## match a specific interface name
              #value: interface=en.*   ## regex-match interface names across all nodes
              #value: can-reach=172.16.5.87  ## pick the interface that can reach this IP or domain
              value: first-found        ## use the first valid interface found
            - name: IP6_AUTODETECTION_METHOD
              value: first-found
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              #value: "192.168.0.0/16"
              value: "10.1.0.0/24"
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "1440"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: 172.16.5.81:5000/calico/cni:v3.1.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # hostpath certs
        - name: etcd-certs
          hostPath:
            path: /etc/calico/calicoTLS
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        #- name: etcd-certs
        #  secret:
        #    secretName: calico-etcd-secrets
        #    defaultMode: 0400

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: 172.16.5.81:5000/calico/kube-controllers:v3.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,profile,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          hostPath: 
            path: /etc/calico/calicoTLS
        #  secret:
        #    secretName: calico-etcd-secrets
        #    defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
[root@server81 install_Calico]# 

9. Apply the yaml to deploy calico

[root@server81 install_Calico]# kubectl apply -f calico.yaml 
configmap/calico-config created
secret/calico-etcd-secrets created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
[root@server81 install_Calico]# 
[root@server81 install_Calico]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    2h        v1.11.0
172.16.5.86   NotReady   <none>    2h        v1.11.0
172.16.5.87   NotReady   <none>    2h        v1.11.0
[root@server81 install_Calico]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   Ready      <none>    2h        v1.11.0
172.16.5.86   NotReady   <none>    2h        v1.11.0
172.16.5.87   NotReady   <none>    2h        v1.11.0
[root@server81 install_Calico]# 
[root@server81 install_Calico]# kubectl get pod -n kube-system
NAME                                       READY     STATUS              RESTARTS   AGE
calico-kube-controllers-795885ddbd-dr8t7   1/1       Running             0          1m
calico-node-26brb                          0/2       ContainerCreating   0          1m
calico-node-w2ntg                          0/2       ContainerCreating   0          1m
calico-node-zk8ch                          2/2       Running             0          1m
[root@server81 install_Calico]# 

As you can see, server81 is already Ready, while the other two hosts lack the TLS files calico needs to reach etcd, so their pods cannot finish deploying. Let's copy the files to the other two servers.
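
Before copying, you can confirm what the stuck pods are actually waiting on (sketch; the pod name comes from the listing above):

kubectl describe pod calico-node-26brb -n kube-system | tail -n 20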


10. Copy the calico etcd TLS files to the server86/87 hosts

[root@server81 install_Calico]# cd /etc/
[root@server81 etc]# scp -r calico root@server86:/etc/
etcd-ca                                                   100% 1346   340.4KB/s   00:00    
etcd-key                                                  100% 1679   485.3KB/s   00:00    
etcd-cert                                                 100% 1436   433.1KB/s   00:00    
[root@server81 etc]# 
[root@server81 etc]# scp -r calico root@server87:/etc/
etcd-ca                                                   100% 1346   400.0KB/s   00:00    
etcd-key                                                  100% 1679   400.8KB/s   00:00    
etcd-cert                                                 100% 1436   535.7KB/s   00:00    
[root@server81 etc]# 
[root@server81 etc]# kubectl get pod -n kube-system
NAME                                       READY     STATUS              RESTARTS   AGE
calico-kube-controllers-795885ddbd-dr8t7   1/1       Running             0          3m
calico-node-26brb                          0/2       ContainerCreating   0          3m
calico-node-w2ntg                          0/2       ContainerCreating   0          3m
calico-node-zk8ch                          2/2       Running             0          3m
[root@server81 etc]# 

Even with the calico certificates copied over, the other two nodes still have not finished deploying; check the logs:

[Screenshot: the pod events show the image pull from the local registry failing]

The root cause is that the local registry is served over http, with https not enabled, so docker has to be configured to allow insecure access.

11. Configure docker for insecure access to the local registry

[root@server81 etc]# cd /etc/docker/
[root@server81 docker]# ls
daemon.json  key.json
[root@server81 docker]# cat daemon.json 
{"insecure-registries":["172.16.5.81:5000"]}

Just create this daemon.json config file with the insecure registry address, then restart the docker service.

Copy the config file to the server86/87 hosts as well.

[root@server81 docker]# scp daemon.json root@server86:/etc/docker/
daemon.json                                             100%   99    19.9KB/s   00:00    
[root@server81 docker]# scp daemon.json root@server87:/etc/docker/
daemon.json                                             100%   99    40.8KB/s   00:00    
[root@server81 docker]# 
[root@server86 docker]# ls
daemon.json  key.json
[root@server86 docker]# 
[root@server86 docker]# pwd
/etc/docker
[root@server86 docker]# 
[root@server86 docker]# service docker restart
Redirecting to /bin/systemctl restart docker.service
[root@server86 docker]# 
[root@server87 ~]# cd /etc/docker/
[root@server87 docker]# 
[root@server87 docker]# ls
daemon.json  key.json
[root@server87 docker]# 
[root@server87 docker]# cat daemon.json 
{"insecure-registries":["172.16.5.81:5000"]}
[root@server87 docker]# 
[root@server87 docker]# service docker restart
Redirecting to /bin/systemctl restart  docker.service
[root@server87 docker]# 

12. Finally, confirm that every node is Ready

[root@server81 docker]# kubectl get pod -n kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
calico-kube-controllers-795885ddbd-dr8t7   1/1       Running   0          28m
calico-node-26brb                          2/2       Running   0          28m
calico-node-w2ntg                          2/2       Running   0          28m
calico-node-zk8ch                          2/2       Running   0          28m
[root@server81 docker]# 
[root@server81 docker]# kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
172.16.5.81   Ready     <none>    2h        v1.11.0
172.16.5.86   Ready     <none>    2h        v1.11.0
172.16.5.87   Ready     <none>    2h        v1.11.0
[root@server81 docker]# 

With that, all the nodes are Ready, and we can move on to deploying CoreDNS and the related pod services.
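
Before moving on, a quick smoke test that pod networking really works (a sketch; busybox is just an example image):

kubectl run nettest --image=busybox --restart=Never -- sleep 3600
kubectl get pod nettest -o wide    # the pod IP should fall inside 10.1.0.0/24
kubectl delete pod nettest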


Readers who have made it this far will naturally wonder what the next chapter covers.
In the next chapter I will walk through handling DNS resolution for kubernetes and for the physical servers.
Click here to go to the kubernetes CoreDNS deployment chapter.


kubernetes v1.11 binary deployment series index


For the overall index of my article series, see the kubernetes and DevOps article index.

