[kubernetes/k8s deployment] Manual binary deployment of Kubernetes

This article walks through a manual binary deployment of Kubernetes 1.18. All executables live in /opt/k8s/bin.

  etcd: https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz

          into /opt/k8s/bin: etcd etcdctl

  kubectl: https://dl.k8s.io/v1.18.8/kubernetes-client-linux-amd64.tar.gz

  flanneld: https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

         into /opt/k8s/bin: flanneld mk-docker-opts.sh

  kubernetes-server:  https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz

         into /opt/k8s/bin: kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy

 

Tune kernel parameters for k8s (adapt to your environment):

cat > /etc/sysctl.d/kubernetes.conf <<EOF

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0    # avoid using swap; it is only used when the system is near OOM

vm.overcommit_memory=1   # do not check whether physical memory is sufficient

vm.panic_on_oom=0          # do not panic on OOM; let the OOM killer handle it

fs.inotify.max_user_instances=8192

fs.inotify.max_user_watches=1048576

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF

sysctl -p /etc/sysctl.d/kubernetes.conf
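The net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so load it before applying the file; a minimal sketch that also persists it across reboots (the modules-load file name is arbitrary):

modprobe br_netfilter

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF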

 

0. Preparation

   Configure /etc/hosts on all nodes. The benefit: the master can be repointed at will without regenerating anything that references it, including the kubectl config, kube-proxy config, and bootstrap config. A bit of a hack, but handy for a test environment.

 192.168.122.224 master.node.local

    The Kubernetes components encrypt their traffic with TLS certificates. We use cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) and all other certificates. TLS requires a CA that issues the root certificate, server certificates, and signing keys to clients.

    The generated CA certificate and key files:

File            Purpose
ca.pem          CA root certificate
ca-key.pem      CA private key, used to sign and decrypt client requests
ca.csr          CA certificate signing request
admin.pem       kubectl TLS certificate, with admin privileges
admin-key.pem   kubectl TLS private key

   CA certificate management tools:

    • easyrsa ---commonly used with openvpn

    • openssl

    • cfssl ---the most widely used; JSON config files, relatively simple

   Install CFSSL

$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 

$ chmod +x cfssl_linux-amd64 

$ sudo mv cfssl_linux-amd64 /usr/bin/cfssl 

$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 

$ chmod +x cfssljson_linux-amd64 

$ sudo mv cfssljson_linux-amd64 /usr/bin/cfssljson 

$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 

$ chmod +x cfssl-certinfo_linux-amd64 

$ sudo mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo 

 

   Token used for TLS Bootstrapping

Generate it with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="2ab38fcb2b77d7f15ce65db2dd612ab8"

   

   0.1 Create the CA certificate and key

      0.1.1 Create ca-config.json

       The JSON config used to generate the CA file:

  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE
  • server auth: clients can use this CA to verify certificates presented by servers
  • client auth: servers can use this CA to verify certificates presented by clients
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

     0.1.2 Certificate signing request file ca-csr.json

       The JSON config used to generate the CA certificate signing request (CSR):

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

    Generate the CA certificate and private key:

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

   Generated files:

  •    ca.pem
  •    ca-key.pem
  •    ca.csr

           Copy to every node: scp ca* root@{node}:/etc/kubernetes/ssl/
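In practice a small loop saves typing; a sketch assuming the three node IPs used in the etcd section below (the ca* glob also picks up ca-config.json, which the later cfssl commands reference from this directory):

for node in 10.10.15.70 10.10.15.71 10.10.15.72; do
  ssh root@${node} "mkdir -p /etc/kubernetes/ssl"
  scp ca* root@${node}:/etc/kubernetes/ssl/
done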

 

  0.2 Configure the kubectl command line

      kubectl reads the kube-apiserver address, certificate, and user from ~/.kube/config by default. Every client certificate must first be signed by the cluster CA, or the cluster rejects it. With a proper kubeconfig in place, kubectl can be used from any node in the cluster. kubectl runs as admin and may access every Kubernetes API.

File            Purpose
ca.pem          CA root certificate
admin.pem       kubectl TLS certificate, with admin privileges
admin-key.pem   kubectl TLS private key

    0.2.1 Create admin-csr.json, the certificate signing request file

      kubectl talks to kube-apiserver's secure port, which requires a TLS certificate and key for the connection; kubectl communicates with kube-apiserver over https.

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
  • O: system:masters: when kube-apiserver receives a client request carrying this certificate, it adds the group identity system:masters to the request
  • the predefined ClusterRoleBinding cluster-admin binds Group system:masters to ClusterRole cluster-admin, which grants the highest cluster privileges;
  • kubectl only uses this certificate as a client certificate, so the hosts field is empty.

     0.2.2 Generate the admin certificate and private key

# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

     Generated files: admin.pem, admin-key.pem, admin.csr

     Sync admin*.pem to /etc/kubernetes/ssl on every node

     0.2.3 Create the kubectl kubeconfig file

Set the cluster parameters

    # kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://master.node.local:6443

Set the client credentials

    # kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem

Set the context parameters

    # kubectl config set-context kubernetes --cluster=kubernetes --user=admin

Set the default context

    # kubectl config use-context kubernetes

  • --certificate-authority: the root certificate used to verify kube-apiserver
  • --client-certificate: the admin certificate, used for https with kube-apiserver
  • --client-key: the admin private key, used for https with kube-apiserver
  • --embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubeconfig (otherwise only file paths are written, and the certificate files must be copied separately whenever the kubeconfig moves to another machine)
  • server: the kube-apiserver address
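The four commands above write ~/.kube/config by default. To use kubectl from other nodes, copy that file over; a sketch (the node IPs are assumptions):

for node in 10.10.15.71 10.10.15.72; do
  ssh root@${node} "mkdir -p /root/.kube"
  scp /root/.kube/config root@${node}:/root/.kube/config
done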

 

1. Master deployment

   The master consists of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.

  1. kube-scheduler and kube-controller-manager elect a leader; the other instances block. When the leader dies, a new one is elected, keeping the service available;
  2. kube-apiserver is stateless and can sit behind a load balancer, which keeps it available.

1.1 etcd deployment

      Download: wget https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz

      Kubernetes persists all API objects in an etcd cluster.

  1.1.1 Create the etcd TLS key and certificate

  • the hosts field lists the etcd nodes authorized to use this certificate, usually the local IP plus 127.0.0.1; it corresponds to --listen-client-urls
  • this node is 10.10.15.70; change the IP accordingly on the other two nodes
# cat etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.15.70",
    "10.10.15.71",
    "10.10.15.72"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  •   hosts: the etcd node IPs authorized to use this certificate; list every node of the etcd cluster.

  1.1.2 Generate the etcd certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

       This produces etcd.pem and etcd-key.pem; copy etcd*.pem to /etc/etcd/ssl. On the other two nodes, change the IPs accordingly and run the same cfssl command to generate their certificates and keys.

  1.1.3 /etc/systemd/system/etcd.service

        --name differs per node (etcd1, etcd2, etcd3); make sure each node uses its own.

        10.10.15.70 is this node's IP (etcd1); etcd2 and etcd3 are the peers. Change the IPs on the other two nodes.

    etcd's working directory is /var/lib/etcd; create it before starting the service.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
  --name=etcd1 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://10.10.15.70:2380 \
  --listen-peer-urls=https://10.10.15.70:2380 \
  --listen-client-urls=https://10.10.15.70:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.10.15.70:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd1=https://10.10.15.70:2380,etcd2=https://10.10.15.71:2380,etcd3=https://10.10.15.72:2380 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd \
  --wal-dir=/var/lib/etcd \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=8589934592 \
  --heartbeat-interval=500 \
  --election-timeout=4000
   
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • WorkingDirectory, --data-dir: the working and data directory; create it before starting the service
  • --wal-dir: the WAL directory; for performance this is usually an SSD, or a different disk than --data-dir;
  • --name: the node name; when --initial-cluster-state is new, the --name value must appear in the --initial-cluster list;
  • --cert-file, --key-file: certificate and private key etcd uses when talking to clients;
  • --trusted-ca-file: the CA that signed the client certificates, used to verify them;
  • --peer-cert-file, --peer-key-file: certificate and private key etcd uses when talking to peers;
  • --peer-trusted-ca-file: the CA that signed the peer certificates, used to verify them;

 

  1.1.4 Start the etcd service

systemctl daemon-reload

systemctl enable etcd

systemctl start etcd

 

Verify: /opt/k8s/bin/etcdctl -w table --endpoints=https://10.10.15.70:2379  --cacert=/etc/kubernetes/ssl/ca.pem   --cert=/etc/etcd/ssl/etcd.pem   --key=/etc/etcd/ssl/etcd-key.pem  endpoint status

+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.10.15.70:2379 | 4db88f61f3c51bb1 |  3.4.15 |   20 kB |      true |      false |         2 |          5 |                  5 |        |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
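The same command can query all members at once, which is the quicker way to confirm the whole cluster once etcd2 and etcd3 are up; a sketch, assuming the three-node layout above:

/opt/k8s/bin/etcdctl -w table \
  --endpoints=https://10.10.15.70:2379,https://10.10.15.71:2379,https://10.10.15.72:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint status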
 

1.2 kube-master deployment

  1.2.1 Create the kubernetes certificate

      hosts contains the VIP/master address, the kubernetes cluster IP, and this node's address; other masters only need to change the node IP.

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.15.70",
    "k8s-master-url",
    "master-node-local",
    "10.200.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

  1.2.2 Generate the kubernetes certificate and private key

# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

    Sync the generated kubernetes*.pem to /etc/kubernetes/ssl on this master node

  1.2.3 Create the kube-apiserver client token

# cat /etc/kubernetes/token.csv 
2ab38fcb2b77d7f15ce65db2dd612ab8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

    Sync token.csv to /etc/kubernetes on all master nodes
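If BOOTSTRAP_TOKEN is still set from section 0, the file can be generated rather than typed; a minimal sketch:

cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF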

  1.2.4 Create the encryption config file

    Generate the key required by EncryptionConfig:  ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

cat > encryption-config.yaml <<EOF

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}

EOF

    Copy the encryption config file to /etc/kubernetes on the master nodes

   1.2.5 Create the audit policy file

apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses
  # for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
      
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived

   Distribute the audit policy file to /etc/kubernetes on the master nodes

  1.2.6 Create the certificate for accessing metrics-server or kube-prometheus

cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
  • the CN must appear in kube-apiserver's --requestheader-allowed-names parameter, otherwise later metrics requests are rejected as unauthorized

   Generate the certificate and private key

# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client

  Copy the generated certificate and key files to /etc/kubernetes/ssl on all master nodes

  1.2.7 /etc/systemd/system/kube-apiserver.service

    Other masters only need to change the IP address

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
  --advertise-address=10.10.15.70 \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.10.15.70:2379 \
  --bind-address=10.10.15.70 \
  --secure-port=6443 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --insecure-port=8080 \
  --audit-dynamic-configuration \
  --audit-log-maxage=15 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-truncate-enabled \
  --audit-log-path=/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="aggregator" \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-admission-plugins=NodeRestriction \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
  --kubelet-https=true \
  --kubelet-timeout=10s \
  --proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem \
  --service-cluster-ip-range=10.200.0.0/16 \
  --service-node-port-range=30000-60000 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • --advertise-address: the IP the apiserver advertises to the cluster
  • --default-*-toleration-seconds: thresholds for node-failure tolerations;
  • --max-*-requests-inflight: maximum in-flight request limits;
  • --etcd-*: certificates for accessing etcd and the etcd server addresses
  • --bind-address: the https listen IP; must not be 127.0.0.1, or the secure port 6443 is unreachable from outside
  • --secure-port: the https listen port
  • --insecure-port: the http non-secure port (8080 here; set it to 0 to disable the insecure listener);
  • --tls-*-file: the certificate, private key, and CA file the apiserver serves with;
  • --audit-*: audit policy and audit log parameters;
  • --client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and so on);
  • --enable-bootstrap-token-auth: enable token authentication for kubelet bootstrap;
  • --requestheader-*: aggregation layer parameters of kube-apiserver; needed by the proxy client and HPA;
  • --requestheader-client-ca-file: the CA that signed the certificates given by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
  • --requestheader-allowed-names: must not be empty; a comma-separated list of CNs allowed in the --proxy-client-cert-file certificate, here "aggregator";
  • --service-account-key-file: the public key that verifies ServiceAccount tokens; it pairs with the private key given to kube-controller-manager via --service-account-private-key-file;
  • --runtime-config=api/all=true: enable all API versions, e.g. autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC, --anonymous-auth=false: enable Node and RBAC authorization modes and reject unauthorized requests;
  • --enable-admission-plugins: enable plugins that are off by default;
  • --allow-privileged: allow running privileged containers;
  • --apiserver-count=3: the number of apiserver instances;
  • --event-ttl: how long events are kept;
  • --kubelet-*: when set, the apiserver reaches the kubelet APIs over https; the certificate's user (the kubernetes*.pem certificate above has user kubernetes) needs RBAC rules defined, or kubelet API calls fail as unauthorized;
  • --proxy-client-*: the certificate the apiserver uses to reach metrics-server;
  • --service-cluster-ip-range: the Service cluster IP range;
  • --service-node-port-range: the NodePort port range;

   If the kube-apiserver machine does not run kube-proxy, add the --enable-aggregator-routing=true flag.

Note:

  The CA given to --requestheader-client-ca-file must carry both client auth and server auth usages.
    If --requestheader-allowed-names is non-empty and the CN of the --proxy-client-cert-file certificate is not in that list, later node or pod metrics queries fail.

    Copy to /etc/systemd/system on the master nodes; change the IP addresses on the others.

  1.2.8 Start the kube-apiserver service

systemctl daemon-reload

systemctl enable kube-apiserver

systemctl start kube-apiserver

  1.2.9 Check cluster status

# kubectl cluster-info
Kubernetes master is running at https://master.node.local:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

   kube-apiserver listens on the secure port 6443 and also has the non-secure port 8080 open
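Once kube-controller-manager and kube-scheduler (sections 1.3 and 1.4) are running too, a quick overall health check (deprecated in later releases, but still works on 1.18):

# kubectl get componentstatuses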

 

1.3 kube-controller-manager deployment

    kube-controller-manager can be deployed in two ways: talking to kube-apiserver over the non-secure port, or over the secure port

   1.3.1 Using the non-secure port

     /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.200.0.0/16 \
  --cluster-cidr=192.170.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

 

    For secure communication, first generate an x509 certificate and private key. kube-controller-manager uses the certificate to:

  • talk to kube-apiserver's secure port
  • serve prometheus-format metrics on its own secure port (https, 10257)

  1.3.2 Using the secure port

   1.3.2.1 Create the certificate signing request

cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "10.10.15.70,
      "10.10.15.71",
      "10.10.15.72"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}
EOF
  • the hosts list contains all kube-controller-manager node IPs
  • CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

    Generate the certificate and private key

# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

     Distribute the generated kube-controller-manager.pem and kube-controller-manager-key.pem to /etc/kubernetes/ssl on all master nodes

  1.3.2.2 Create the kubeconfig file

   kube-controller-manager uses a kubeconfig file to reach the apiserver; it carries the apiserver address, the embedded CA certificate, the kube-controller-manager certificate, and so on:

   TODO
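Until that section is written up, here is a minimal sketch following the same pattern as 0.2.3, assuming the certificate and key from 1.3.2.1 sit in /etc/kubernetes/ssl:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://master.node.local:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

The resulting file would then be distributed to /etc/kubernetes on the master nodes and passed via --kubeconfig instead of --master.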

1.3.3 Start the kube-controller-manager service

systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl start kube-controller-manager
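With the non-secure deployment from 1.3.1, liveness can be spot-checked on the local http port (10252 is the default insecure port in 1.18):

# curl -s http://127.0.0.1:10252/healthz
ok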

 

1.4 kube-scheduler deployment

     For secure communication, this document first generates an x509 certificate and private key. kube-scheduler uses the certificate in two cases:

  1. talking to kube-apiserver's secure port;
  2. serving prometheus-format metrics on its own secure port (https, 10259)

     There are two deployment options: the non-secure port, or the secure port, which requires certificates

  1.4.1 Deploying with the non-secure port

    /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

  1.4.2 Deploying with the secure port

    TODO
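As a placeholder, a minimal sketch mirroring 1.3.2: create a CSR with CN/O set to system:kube-scheduler (the built-in ClusterRoleBinding of the same name grants the needed permissions), sign it, then build the kubeconfig the same way as in 1.3.2.2:

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": ["127.0.0.1", "10.10.15.70", "10.10.15.71", "10.10.15.72"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "system:kube-scheduler", "OU": "system" }]
}
EOF

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler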

  1.4.3 Start the kube-scheduler service

systemctl daemon-reload

systemctl enable kube-scheduler

systemctl start kube-scheduler

   kube-scheduler listens on ports 10251 and 10259:

  • 10251: http, non-secure, no authentication or authorization required;
  • 10259: https, secure, requires authentication and authorization;

  
2. Node deployment

    Node deployment covers flanneld/calico, docker, kubelet, and kube-proxy

2.1 flanneld deployment

    flanneld runs on every node. It programs iptables and the routes to the other nodes (host-gw mode), and watches etcd for routing changes

    Download: wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

 2.1.1 Create the TLS key and certificate

     etcd enforces mutual TLS, so flanneld needs its own certificate and key for talking to the etcd cluster

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

  2.1.2 Generate the flanneld certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

    Sync flanneld*.pem to /etc/flanneld/ssl on every node

  2.1.3 Register the Pod network in etcd

      Configure flanneld in host-gw mode on network 192.170.0.0/16 with a /24 per node, i.e. 250+ usable IPs per node, which is plenty. Note that flannel v0.10 reads this key over the etcd v2 API; etcd 3.4 disables v2 emulation by default, so the etcd servers need --enable-v2=true and etcdctl must run with ETCDCTL_API=2:

ETCDCTL_API=2 /opt/k8s/bin/etcdctl --endpoints=https://10.10.15.70:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem set /flannel/network/config '{"Network":"192.170.0.0/16", "SubnetLen": 24, "Backend": {"Type": "host-gw"}}'
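To confirm the write (the same v2 API caveat applies):

ETCDCTL_API=2 /opt/k8s/bin/etcdctl --endpoints=https://10.10.15.70:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem get /flannel/network/config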

 

  2.1.4 /etc/systemd/system/flanneld.service

    -ip-masq=false is set because we do not need IP masquerading; drop the flag if you do

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  -etcd-certfile=/etc/flanneld/ssl/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/ssl/flanneld-key.pem \
  -etcd-endpoints=https://10.10.15.70:2379,https://10.10.15.71:2379,https://10.10.15.72:2379 \
  -etcd-prefix=/flannel/network \
  -ip-masq=false
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

   The generated /run/flannel/docker file; ip-masq is disabled for docker here as well:

DOCKER_OPT_BIP="--bip=192.170.35.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1500"
DOCKER_NETWORK_OPTIONS=" --bip=192.170.35.1/24 --ip-masq=false --mtu=1500"

 

  2.1.5 Start the flanneld service

systemctl daemon-reload

systemctl enable flanneld

systemctl start flanneld
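Once flanneld is up on two or more nodes, each peer's /24 should appear as a plain gateway route (that is what host-gw mode does), and this node's own subnet shows up in the generated docker options; a quick check:

ip route | grep 192.170

cat /run/flannel/docker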

    See the later section if you use calico instead

 

2.2 docker deployment

  2.2.1 Enable IP forwarding

Add the following to /etc/sysctl.conf, then run sysctl -p:

net.ipv4.ip_forward=1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

  2.2.2 /etc/systemd/system/docker.service

    The unit sources an environment file: /run/flannel/docker is the config generated by mk-docker-opts.sh. An iptables rule is added because docker 1.13+ sets the default policy of the iptables FORWARD chain to DROP, which makes Pod IPs on other nodes unreachable (ping fails)

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/usr/sbin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/sbin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

  2.2.3 Start the docker service

systemctl daemon-reload

systemctl enable docker

systemctl start docker

   Alternatively, install docker automatically with the official convenience script:

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Or use the domestic daocloud one-click installer:

curl -sSL https://get.daocloud.io/docker | sh

 

   References: https://www.runoob.com/docker/centos-docker-install.html

               https://docs.docker.com/engine/install/centos/

    Later this will move to containerd directly; containerd implements the Kubernetes Container Runtime Interface (CRI) and provides the core container runtime functionality (TODO)

 

2.3 kubelet deployment

      At startup kubelet registers the node with kube-apiserver automatically; the built-in cadvisor collects and reports the node's resource usage.

      At startup kubelet sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role, creating an RBAC rule for node requests. Only the token is written into the kubeconfig; once bootstrapping finishes, kube-controller-manager issues client and server certificates for the kubelet;

  • --user=kubelet-bootstrap: the username from /etc/kubernetes/token.csv, written into /etc/kubernetes/bootstrap.kubeconfig

   token.csv describes a single user, format: Token,username,UID,"groups". kube-apiserver loads this file at startup, which effectively creates that user inside the cluster

  2.3.1 Install dependencies

       yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat

  2.3.2 Create the /etc/kubernetes/bootstrap.kubeconfig file

Set the cluster parameters

    # kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://master.node.local:6443 --kubeconfig=bootstrap.kubeconfig

Set the client credentials

   # kubectl config set-credentials kubelet-bootstrap --token=2ab38fcb2b77d7f15ce65db2dd612ab8 --kubeconfig=bootstrap.kubeconfig

Set the context parameters

    # kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig

Set the default context

    # kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

    Distribute bootstrap.kubeconfig to /etc/kubernetes on all kubelet nodes

  2.3.3 Create the kubelet-config.yaml configuration file

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "192.168.122.224"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "192.168.122.224"
clusterDomain: "cluster.local"
clusterDNS:
  - "10.200.254.254"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "192.170.0.0/16"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]

  2.3.4 /etc/systemd/system/kubelet.service

    On other nodes just change node-name

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/k8s/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --root-dir=/var/lib/kubelet \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet-config.yaml \
  --hostname-override=${node-name} \
  --pod-infra-container-image=zhangzhonglin/pause:3.5 \
  --image-pull-progress-deadline=15m \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
  • if --hostname-override is set, kube-proxy must set it too, or the Node will not be found;
  • --bootstrap-kubeconfig: the bootstrap kubeconfig; kubelet uses its username and token to send the TLS Bootstrapping request to kube-apiserver
  • after K8S approves the kubelet CSR, the certificate and key are created under --cert-dir and written into the --kubeconfig file;
  • --pod-infra-container-image: the pause image

    Distribute kubelet.service to /etc/systemd/system on each node

  2.3.5 Grant kube-apiserver access to the kubelet API

        When kubectl exec, run, or logs are executed, the apiserver forwards the request to kubelet's https port. Define an RBAC rule granting the user of the apiserver's certificate (kubernetes.pem, CN: kubernetes) access to the kubelet API:

        # kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

  2.3.6 Bootstrap Token Auth and granting permissions

      At startup, if the file referenced by kubelet's --kubeconfig flag does not exist, kubelet uses the kubeconfig given by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

      When kube-apiserver receives the CSR, it authenticates the token; on success it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.

    By default that user and group may not create CSRs, so kubelet fails to start.

    Fix: create a clusterrolebinding that binds group system:bootstrappers to clusterrole system:node-bootstrapper

    # kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

  2.3.7 Auto-approve CSRs to issue kubelet client certificates

    After kubelet files its CSR, the request must be approved, in one of two ways:

  • kube-controller-manager approves it automatically
  • manual kubectl certificate approve

   Once the CSR is approved, kubelet asks kube-controller-manager to create the client certificate; kube-controller-manager's csrapproving controller uses the SubjectAccessReview API to check whether the kubelet request (group system:bootstrappers) is authorized.

   Create three ClusterRoleBindings granting group system:bootstrappers and group system:nodes the permissions to approve client, renew client, and renew server certificates. # cat csr-crb.yaml

 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
  • auto-approve-csrs-for-group: automatically approve the first CSR; the requesting Group is system:bootstrappers
  • node-client-cert-renewal: automatically approve renewals of a node's expiring client certificate; the generated certificate's Group is system:nodes
  • node-server-cert-renewal: automatically approve renewals of a node's expiring server certificate; the generated certificate's Group is system:nodes;

   kubectl apply -f csr-crb.yaml

  2.3.8 Start the kubelet service

  • create the working directory before starting the service
  • turn off the swap partition

    At startup kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and key and writes the file referenced by --kubeconfig

   Note: kube-controller-manager must be configured with --cluster-signing-cert-file and --cluster-signing-key-file, or it will not create certificates and keys for TLS Bootstrap.

systemctl daemon-reload

systemctl enable kubelet

systemctl start kubelet

  2.3.9 Manually approve the kubelet TLS certificate

    On first start kubelet sends a certificate signing request to kube-apiserver; it must be approved before the node can join the cluster

kubectl get csr

kubectl certificate approve node-csr-kWKUc83k2DshGM2jFp2lnt3iWy3qaY0QO1USkbWydNM
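When several nodes bootstrap at once, approving CSRs one by one gets tedious; a bulk sketch, followed by checking that the nodes registered:

kubectl get csr --no-headers | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve

kubectl get nodes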

 

2.4 kube-proxy deployment

  2.4.1 Create the kube-proxy certificate signing request

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

  2.4.2 Generate the kube-proxy client certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

    Sync the generated kube-proxy*.pem to /etc/kubernetes/ssl on every node

  2.4.3 Create the kube-proxy.kubeconfig file

Set the cluster parameters

  kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://master.node.local:6443 --kubeconfig=kube-proxy.kubeconfig

Set the client credentials

  kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

Set the context parameters

  kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

Set the default context

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

  Sync the generated kube-proxy.kubeconfig to /etc/kubernetes on all nodes

  2.4.4 /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to distinguish in-cluster from external traffic; with --cluster-cidr or --masquerade-all set,
# kube-proxy SNATs requests to Service IPs, which conflicts with calico's network policy implementation, so neither is set here
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --bind-address=10.10.15.70 \
  --hostname-override=10.10.15.70 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  2.4.5 Start the kube-proxy service

systemctl daemon-reload

systemctl enable kube-proxy

systemctl start kube-proxy
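kube-proxy should now be programming iptables for Services; two quick spot checks (10256 is kube-proxy's default healthz port):

iptables -t nat -L KUBE-SERVICES -n | head

curl -s http://127.0.0.1:10256/healthz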

  

3. coredns installation

     coredns.yaml is shown below

     Three places need adjusting:

     replace __PILLAR__DNS__DOMAIN__ in the kubernetes plugin with: zqdlk8s.local

     kubedns (service cluster IP): 10.254.0.2

     check that the image address is actually pullable

     Note: these values must match your cluster. This manifest uses domain zqdlk8s.local and cluster IP 10.254.0.2, while the kubelet config in 2.3.3 uses cluster.local and 10.200.254.254; align them with your environment.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes zqdlk8s.local 10.254.0.0/16
        forward . /etc/resolv.conf
        cache 30
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

  Verification: start a pod and resolve the kubernetes service

        # kubectl exec -it alpine nslookup kubernetes
          nslookup: can't resolve '(null)': Name does not resolve

          Name:      kubernetes
          Address 1: 10.254.0.1 kubernetes.default.svc.zqdlk8s.local

 

 

-----------------------------------------------------------------------------------------------------------------------------

Deploying the calico network in Kubernetes

 Calico components:

  •      Felix: the Calico agent; runs on every node and programs network state: IPs, routing rules, iptables rules
  •      etcd: calico's backend store
  •      BIRD: the BGP client; advertises the routes Felix programs on each node to the rest of the Calico network (via BGP).
  •      BGP Route Reflector: hierarchical route distribution for large clusters.
  •      calicoctl: the calico command-line management tool

Reference: https://docs.projectcalico.org/v3.7/getting-started/kubernetes/installation/calico

 

1. Installing with the Kubernetes API datastore—50 nodes or less

Download

curl https://docs.projectcalico.org/v3.7/manifests/calico.yaml -O

Set the pod CIDR

POD_CIDR="<your-pod-cidr>"
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
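With the pod network used throughout this article, that becomes:

POD_CIDR="192.170.0.0/16"
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml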

Apply the manifest using the following command.

kubectl apply -f calico.yaml

2. Installing with the etcd datastore

 

Recommendations:

    1. docker configuration

cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native-cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    }
}
EOF

Note: this switches docker to the systemd cgroup driver; if you apply it, also change cgroupDriver to systemd in kubelet-config.yaml (section 2.3.3) so the two agree.
