Kubernetes Binary Deployment

Cluster environment and features
Component versions
  1. Kubernetes 1.14.4
  2. docker-ce-18.06.0.ce-3.el7
  3. etcd-v3.2.12
  4. flannel-v0.11.0
  • Add-ons
    • CoreDNS
    • Dashboard
    • Heapster (influxdb, grafana)
    • Metrics-Server
    • EFK (elasticsearch, fluentd, kibana)
  • Image registries
    • docker registry
    • harbor
Main configuration policies
  • kube-apiserver
  1. 3-node high availability with keepalived and haproxy;
  2. Insecure port 8080 and anonymous access disabled;
  3. https requests served on secure port 6443;
  4. Strict authentication and authorization (x509, token, RBAC);
  5. Bootstrap token authentication enabled, supporting kubelet TLS bootstrapping;
  6. kubelet and etcd accessed over https, encrypting the traffic
  • kube-controller-manager
  1. 3-node high availability
  2. Insecure port disabled; https requests served on the secure port
  3. Accesses the apiserver's secure port via kubeconfig
  4. Automatically approves kubelet certificate signing requests (CSRs); certificates rotate automatically on expiry
  5. Each controller uses its own ServiceAccount to access the apiserver
  • kube-scheduler
  1. 3-node high availability
  2. Accesses the apiserver's secure port via kubeconfig
  • kubelet
  1. Bootstrap tokens are created dynamically with kubeadm rather than configured statically in the apiserver
  2. Client and server certificates are generated via the TLS bootstrap mechanism and rotate automatically on expiry
  3. Main parameters configured in a KubeletConfiguration-type JSON file
  4. Read-only port disabled; https requests served on secure port 10250 with authentication and authorization, rejecting anonymous and unauthorized access
  5. Accesses the apiserver's secure port via kubeconfig
  • kube-proxy
  1. Accesses the apiserver's secure port via kubeconfig
  2. Main parameters configured in a KubeProxyConfiguration-type JSON file
  3. ipvs proxy mode
  • Cluster add-ons
  1. DNS: coredns, with better features and performance
  2. Dashboard: supports login authentication
  3. Metrics: heapster, metrics-server, accessing the kubelet secure port over https
  4. Log: Elasticsearch, Fluentd, Kibana
  5. Registry: docker-registry, harbor
Local hosts configuration
cat >>/etc/hosts <<EOF
192.168.50.110  etcd01
192.168.50.111  etcd02
192.168.50.112  etcd03
192.168.50.110  k8s-master01
192.168.50.111  k8s-master02
192.168.50.112  k8s-master03
192.168.50.113  k8s-node01
192.168.50.114  k8s-node02
EOF
Component deployment layout
  • ca
Install dir: /opt/cfssl
Certificate dir: /opt/cfssl/ssl
Host: k8s-master01
  • etcd
Install dir: /opt/etcd
Certificate dir: /opt/etcd/ssl
Certificates required: ca-key.pem, etcd-key.pem, ca.pem, etcd.pem
Hosts: etcd01, etcd02, etcd03
  • flannel
Install dir: /opt/flannel
Certificate dir: /opt/flannel/ssl
Certificates required: ca.pem, flanneld.pem, flanneld-key.pem
Hosts: all k8s-master nodes, all k8s-node nodes
  • k8s master cluster
  1. kube-apiserver
Install dir: /opt/k8s
Certificate dir: /opt/k8s/ssl/apiserver
Certificates required: ca-key.pem, ca.pem, kubernetes.pem, kubernetes-key.pem
Log dir: /log/k8s/apiserver
Hosts: k8s-master01, k8s-master02, k8s-master03
  2. kube-controller-manager
Install dir: /opt/k8s
Certificate dir: /opt/k8s/ssl/controller-manager
Certificates required: ca.pem, ca-key.pem, kube-controller-manager.pem, kube-controller-manager-key.pem
Log dir: /log/k8s/controller-manager
Hosts: k8s-master01, k8s-master02, k8s-master03
  3. kube-scheduler
Install dir: /opt/k8s
Certificate dir: /opt/k8s/ssl/scheduler
Certificates required:
Log dir: /log/k8s/scheduler
Hosts: k8s-master01, k8s-master02, k8s-master03
  • k8s node
  1. docker
Install method: yum
Data dir: /data/docker
Hosts: k8s-node01, k8s-node02
  2. kubelet
Install dir: /opt/k8s
Certificate dir: /opt/k8s/ssl/kubelet
Certificates required:
Log dir: /log/k8s/kubelet
Hosts: k8s-node01, k8s-node02
  3. kube-proxy
Install dir: /opt/k8s
Certificates required:
Log dir: /log/k8s/proxy
Hosts: k8s-node01, k8s-node02
Deploy the CA
  • k8s certificate types
client certificate: used by a server to authenticate clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client
server certificate: used by a server; clients use it to verify the server's identity, e.g. the docker daemon, kube-apiserver
peer certificate: a dual-purpose certificate used for communication between etcd cluster members
  • Download and install cfssl
#Create directories
mkdir -p /opt/cfssl/{bin,ssl}

#Download the binaries
curl -s -L -o  /opt/cfssl/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o  /opt/cfssl/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o  /opt/cfssl/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

#Make executable
chmod +x /opt/cfssl/bin/cfssl*

#Configure the environment variable
cat >/etc/profile.d/cfssl.sh <<"EOF"
export PATH=$PATH:/opt/cfssl/bin
EOF

#Apply the environment variable
source /etc/profile.d/cfssl.sh
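#A quick check that the tools are installed and on PATH (cfssl prints its version):
cfssl version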

  • Configure the certificate signing policy, which defines which types of certificates the CA may issue
cat >/opt/cfssl/ssl/ca-config.json <<"EOF"
{

  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
  • Create the CA certificate signing request
cat >/opt/cfssl/ssl/ca-csr.json <<"EOF"
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "x",
      "OU": "ops"
    }
  ]
}
EOF
  • Initialize the CA
cd /opt/cfssl/ssl/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
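
#Optionally decode the freshly generated CA certificate to confirm its subject and expiry,
#using the cfssl-certinfo binary installed above:
cfssl-certinfo -cert ca.pem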
Deploy the etcd cluster
  • Topology
etcd01 192.168.50.110
etcd02 192.168.50.111
etcd03 192.168.50.112
  • Generate certificates
#View the default CSR template
#cfssl print-defaults csr 

#Create the signing request file
cd /opt/cfssl/ssl
cat >/opt/cfssl/ssl/etcd-csr.json << "EOF"
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.50.110",
    "192.168.50.111",
    "192.168.50.112"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "guangzhou",
      "L": "guangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the server certificate and private key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
  • Deploy etcd01
mkdir -p /opt/src
cd /opt/src

#Download the binary release
wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz

mkdir -p /opt/etcd/{bin,conf,ssl}
mkdir -p /data/etcd

tar -zxf etcd-v3.2.12-linux-amd64.tar.gz
cp -rfa etcd-*/etcd* /opt/etcd/bin/

#Copy the certificates
cp /opt/cfssl/ssl/etcd*pem  /opt/etcd/ssl/
cp /opt/cfssl/ssl/ca*pem /opt/etcd/ssl

#Create the etcd systemd unit
cat > /etc/systemd/system/etcd.service << "EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
ExecStart=/opt/etcd/bin/etcd \
  --name etcd01 \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/opt/etcd/ssl/etcd.pem \
  --peer-key-file=/opt/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.50.110:2380 \
  --listen-peer-urls https://192.168.50.110:2380 \
  --listen-client-urls https://192.168.50.110:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.50.110:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd01=https://192.168.50.110:2380,etcd02=https://192.168.50.111:2380,etcd03=https://192.168.50.112:2380 \
  --initial-cluster-state new \
  --data-dir=/data/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Enable at boot and start
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

#Copy the etcd tree to the other nodes
scp -r /opt/etcd root@etcd02:/opt
scp -r /opt/etcd root@etcd03:/opt

  • Deploy etcd02
#Create the etcd systemd unit
cat > /etc/systemd/system/etcd.service << "EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
ExecStart=/opt/etcd/bin/etcd \
  --name etcd02 \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/opt/etcd/ssl/etcd.pem \
  --peer-key-file=/opt/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.50.111:2380 \
  --listen-peer-urls https://192.168.50.111:2380 \
  --listen-client-urls https://192.168.50.111:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.50.111:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd01=https://192.168.50.110:2380,etcd02=https://192.168.50.111:2380,etcd03=https://192.168.50.112:2380 \
  --initial-cluster-state new \
  --data-dir=/data/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Enable at boot and start
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
  • Deploy etcd03
#Create the etcd systemd unit
cat > /etc/systemd/system/etcd.service << "EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
ExecStart=/opt/etcd/bin/etcd \
  --name etcd03 \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/opt/etcd/ssl/etcd.pem \
  --peer-key-file=/opt/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.50.112:2380 \
  --listen-peer-urls https://192.168.50.112:2380 \
  --listen-client-urls https://192.168.50.112:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.50.112:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd01=https://192.168.50.110:2380,etcd02=https://192.168.50.111:2380,etcd03=https://192.168.50.112:2380 \
  --initial-cluster-state new \
  --data-dir=/data/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Enable at boot and start
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
  • Check cluster health (run from /opt/etcd/ssl so the relative certificate paths resolve)
/opt/etcd/bin/etcdctl --endpoints=http://127.0.0.1:2379 --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem cluster-health
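
#Listing the members is another quick check; all three members should show as started:
/opt/etcd/bin/etcdctl --endpoints=http://127.0.0.1:2379 --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem member list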
  • etcd operations
Delete a key
etcdctl --endpoints=http://127.0.0.1:2379 --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem rm /kubernetes/network/config

Delete a directory
etcdctl --endpoints=http://127.0.0.1:2379 --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem rmdir /kubernetes/network/

#List keys
curl http://127.0.0.1:2379/v2/keys/?recursive=true
Install the flannel network
  • Generate certificates
#Create the certificate signing request
cd /opt/cfssl/ssl/

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "guangzhou",
      "L": "guangzhou",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

#This certificate is only used by flanneld as a client certificate, so the hosts field is empty

#Generate the certificate and private key
cfssl gencert -ca=ca.pem   -ca-key=ca-key.pem   -config=ca-config.json   -profile=kubernetes  flanneld-csr.json | cfssljson -bare flanneld

#Create the directory on every machine that will run flannel
mkdir -p /opt/flannel/ssl

# Distribute the certificates to the nodes
scp ca.pem flanneld*.pem  root@k8s-master01:/opt/flannel/ssl/
scp ca.pem flanneld*.pem  root@k8s-master02:/opt/flannel/ssl/
scp ca.pem flanneld*.pem  root@k8s-master03:/opt/flannel/ssl/
scp ca.pem flanneld*.pem  root@k8s-node01:/opt/flannel/ssl/
scp ca.pem flanneld*.pem  root@k8s-node02:/opt/flannel/ssl/

  • Install flannel
cd /opt/src
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

mkdir -p /opt/flannel
tar -zxf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel

#Write the network range into etcd (run on any etcd cluster member)
cd /opt/etcd/ssl/
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem mkdir /kubernetes/network
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

#Create the systemd unit file
cat > /etc/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/flannel/flanneld \
  -etcd-cafile=/opt/flannel/ssl/ca.pem \
  -etcd-certfile=/opt/flannel/ssl/flanneld.pem \
  -etcd-keyfile=/opt/flannel/ssl/flanneld-key.pem \
  -etcd-endpoints=https://192.168.50.110:2379,https://192.168.50.111:2379,https://192.168.50.112:2379 \
  -etcd-prefix=/kubernetes/network
ExecStartPost=/opt/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

## The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/docker; when docker starts it uses the parameters in this file to configure the docker0 bridge.
## flanneld communicates with other nodes over the interface carrying the system default route; on machines with multiple interfaces (e.g. internal plus public), use the -iface=enpxx option to pick the interface

#Start flannel and enable at boot
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld

#View the assigned subnet parameters
cat /run/flannel/docker

#Check that the flannel interface is up
ifconfig
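
#For reference, a sketch of what a healthy node shows; the subnet below is illustrative and differs per node:
#/run/flannel/docker holds bridge options derived from this node's lease,
#e.g. DOCKER_NETWORK_OPTIONS=" --bip=172.30.34.1/24 --ip-masq=true --mtu=1450"
#and a flannel.1 vxlan interface with an address inside 172.30.0.0/16 should exist:
ip addr show flannel.1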

Configure the k8s master cluster

Deployment layout

k8s-master01 192.168.50.110
k8s-master02 192.168.50.111
k8s-master03 192.168.50.112

Components on the k8s masters

kube-apiserver
kube-scheduler
kube-controller-manager

#The functions of kube-scheduler, kube-controller-manager and kube-apiserver are tightly related
#Only one kube-scheduler and one kube-controller-manager process may be active at a time; when several run, a leader is produced by election
Deploy kubectl
  • Generate certificates
#Create the certificate signing request file
cd /opt/cfssl/ssl

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "guangzhou",
      "L": "guangzhou",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF

# O is system:masters; kube-apiserver sets the Group of requests bearing this certificate to system:masters;
# the predefined ClusterRoleBinding cluster-admin binds Group system:masters to ClusterRole cluster-admin, which grants access to all APIs;
# this certificate is only used by kubectl as a client certificate, so the hosts field is empty;


#Generate the certificate and private key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

  • Install kubectl
#Download location
https://github.com/kubernetes/kubernetes

#Download the kubernetes server binary tarball
cd /opt/src
wget https://dl.k8s.io/v1.14.4/kubernetes-server-linux-amd64.tar.gz

mkdir -p /opt/k8s/
tar -zxf kubernetes-server-linux-amd64.tar.gz
cp -rf  /opt/src/kubernetes/server/bin/  /opt/k8s/

#Configure the environment variable
cat >/etc/profile.d/k8s.sh << "EOF"
export PATH=$PATH:/opt/k8s/bin
EOF

#Apply the environment variable
source /etc/profile.d/k8s.sh

#Distribute kubectl to the other nodes
scp -r /opt/k8s/  root@k8s-master02:/opt/ 
scp -r /opt/k8s/  root@k8s-master03:/opt/
scp -r /opt/k8s/  root@k8s-node01:/opt/
scp -r /opt/k8s/  root@k8s-node02:/opt/

scp /etc/profile.d/k8s.sh root@k8s-master02:/etc/profile.d/
scp /etc/profile.d/k8s.sh root@k8s-master03:/etc/profile.d/

  • Create ~/.kube/config
cd /opt/cfssl/ssl

kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.50.110:6443 \
  --kubeconfig=kubectl.kubeconfig
  
# Set client credentials
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig
  
# Set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
  
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

#Distribute the ~/.kube/config file
cp kubectl.kubeconfig ~/.kube/config
scp -r ~/.kube/  root@k8s-master02:~/
scp -r ~/.kube/  root@k8s-master03:~/
scp -r ~/.kube/  root@k8s-node01:~/
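
#The generated kubeconfig can be sanity-checked locally before any apiserver is running;
#kubectl config view prints it with certificate data redacted:
kubectl config view --kubeconfig=kubectl.kubeconfig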

Deploy apiserver
  • Generate certificates
#Create the certificate signing request file
cd /opt/cfssl/ssl
cat > kubernetes-csr.json <<EOF
 {
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.50.110",
    "192.168.50.111",
    "192.168.50.112",
    "192.168.50.113",
    "192.168.50.115",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "guangzhou",
      "L": "guangzhou",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

#The hosts field lists the IPs and domain names authorized to use this certificate: the VIP, the apiserver node IPs, and the kubernetes service IP and names;
#A domain name must not end in a dot (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
#If you use a domain other than cluster.local, e.g. bqding.com, change the last two names to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com
#The IPs are the master node addresses plus the load balancer VIP.


#Generate the certificate and private key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
  
#Create the certificate directories on all nodes
sshpass -p 'xxxx' ssh root@k8s-master01 'mkdir -p /opt/k8s/ssl/{apiserver,controller-manager,scheduler}'
sshpass -p 'xxxx' ssh root@k8s-master02 'mkdir -p /opt/k8s/ssl/{apiserver,controller-manager,scheduler}'
sshpass -p 'xxxx' ssh root@k8s-master03 'mkdir -p /opt/k8s/ssl/{apiserver,controller-manager,scheduler}'
sshpass -p 'xxxx' ssh root@k8s-node01 'mkdir -p /opt/k8s/ssl/{kubelet,kube-proxy}'
sshpass -p 'xxxx' ssh root@k8s-node02 'mkdir -p /opt/k8s/ssl/{kubelet,kube-proxy}'


#Copy the generated certificate and key files to the master nodes
scp ca*.pem kubernetes*.pem root@k8s-master01:/opt/k8s/ssl/apiserver/
scp ca*.pem kubernetes*.pem root@k8s-master02:/opt/k8s/ssl/apiserver/
scp ca*.pem kubernetes*.pem root@k8s-master03:/opt/k8s/ssl/apiserver/

  • Create the encryption config file
#Create the encryption config file
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF

#Distribute the encryption config to the master nodes
scp encryption-config.yaml root@k8s-master01:/opt/k8s/ssl/apiserver/
scp encryption-config.yaml root@k8s-master02:/opt/k8s/ssl/apiserver/
scp encryption-config.yaml root@k8s-master03:/opt/k8s/ssl/apiserver/
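
#Once the apiservers are up (next step), encryption at rest can be verified; a sketch, assuming a
#throwaway secret named test-secret and the etcd certificates from earlier. The value stored in etcd
#should be ciphertext prefixed with k8s:enc:aescbc:v1: rather than plain text:
kubectl create secret generic test-secret -n default --from-literal=foo=bar
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --endpoints=https://192.168.50.110:2379 \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/etcd.pem \
  --key=/opt/etcd/ssl/etcd-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head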

  • Deploy apiserver01
#Create the kube-apiserver systemd unit file
cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/opt/k8s/ssl/apiserver/encryption-config.yaml \
  --advertise-address=192.168.50.110 \
  --bind-address=192.168.50.110 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --tls-private-key-file=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --client-ca-file=/opt/k8s/ssl/apiserver/ca.pem \
  --kubelet-client-certificate=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --kubelet-client-key=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --service-account-key-file=/opt/k8s/ssl/apiserver/ca-key.pem \
  --etcd-cafile=/opt/k8s/ssl/apiserver/ca.pem \
  --etcd-certfile=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --etcd-keyfile=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --etcd-servers=https://192.168.50.110:2379,https://192.168.50.111:2379,https://192.168.50.112:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/log/k8s/apiserver/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/apiserver \
  --requestheader-client-ca-file=/opt/k8s/ssl/apiserver/ca.pem \
  --requestheader-allowed-names=aggregator \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/opt/k8s/ssl/apiserver/kube-proxy.pem \
  --proxy-client-key-file=/opt/k8s/ssl/apiserver/kube-proxy-key.pem \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#--experimental-encryption-provider-config: enables encryption at rest;
#--authorization-mode=Node,RBAC: enables Node and RBAC authorization, rejecting unauthorized requests;
#--enable-admission-plugins: enables the ServiceAccount and NodeRestriction plugins;
#--service-account-key-file: the public key used to verify ServiceAccount tokens; pairs with the private key given by kube-controller-manager's --service-account-private-key-file;
#--tls-*-file: the certificate, private key and CA used by the apiserver. --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
#--kubelet-client-certificate, --kubelet-client-key: if set, the kubelet APIs are accessed over https; the user in the certificate (kubernetes for the kubernetes*.pem pair above) needs RBAC rules, otherwise kubelet API calls are rejected as unauthorized;
#--bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
#--insecure-port=0: disables the insecure port (8080);
#--service-cluster-ip-range: the Service cluster IP range;
#--service-node-port-range: the NodePort range;
#--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
#--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrapping;
#--apiserver-count=3: the number of kube-apiserver instances in the cluster (all instances serve traffic; unlike the scheduler and controller manager there is no leader election);


#Create the log directory
mkdir -p /log/k8s/apiserver

#Start the apiserver service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

#Grant the kubernetes certificate user access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

  • Deploy apiserver02
#Create the kube-apiserver systemd unit file
cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/opt/k8s/ssl/apiserver/encryption-config.yaml \
  --advertise-address=192.168.50.111 \
  --bind-address=192.168.50.111 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --tls-private-key-file=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --client-ca-file=/opt/k8s/ssl/apiserver/ca.pem \
  --kubelet-client-certificate=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --kubelet-client-key=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --service-account-key-file=/opt/k8s/ssl/apiserver/ca-key.pem \
  --etcd-cafile=/opt/k8s/ssl/apiserver/ca.pem \
  --etcd-certfile=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --etcd-keyfile=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --etcd-servers=https://192.168.50.110:2379,https://192.168.50.111:2379,https://192.168.50.112:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/log/k8s/apiserver/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/apiserver \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Create the log directory
mkdir -p /log/k8s/apiserver

#Start the apiserver service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
  • Deploy apiserver03
#Create the kube-apiserver systemd unit file
cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/opt/k8s/ssl/apiserver/encryption-config.yaml \
  --advertise-address=192.168.50.112 \
  --bind-address=192.168.50.112 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --tls-private-key-file=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --client-ca-file=/opt/k8s/ssl/apiserver/ca.pem \
  --kubelet-client-certificate=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --kubelet-client-key=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --service-account-key-file=/opt/k8s/ssl/apiserver/ca-key.pem \
  --etcd-cafile=/opt/k8s/ssl/apiserver/ca.pem \
  --etcd-certfile=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --etcd-keyfile=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --etcd-servers=https://192.168.50.110:2379,https://192.168.50.111:2379,https://192.168.50.112:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/log/k8s/apiserver/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/apiserver \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


#Create the log directory
mkdir -p /log/k8s/apiserver

#Start the apiserver service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

#Check apiserver and cluster status
netstat -ptln | grep kube-apiserver
kubectl cluster-info
Deploy kube-controller-manager

This component runs on 3 nodes. After startup a leader is chosen by election and the other instances block; when the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.

kube-controller-manager uses its certificate in two situations:
1. When talking to kube-apiserver's secure port
2. When serving Prometheus-format metrics over https

  • Generate certificates
#Create the certificate signing request file
cd /opt/cfssl/ssl

cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.50.110",
      "192.168.50.111",
      "192.168.50.112"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "guangzhou",
        "L": "guangzhou",
        "O": "system:kube-controller-manager",
        "OU": "4Paradigm"
      }
    ]
}
EOF

#hosts lists all kube-controller-manager node IPs;
#CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants the permissions kube-controller-manager needs

#Generate the certificate and private key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  
#Distribute the certificate and key to all master nodes
scp ca*.pem kube-controller-manager*.pem root@k8s-master01:/opt/k8s/ssl/controller-manager/
scp ca*.pem kube-controller-manager*.pem root@k8s-master02:/opt/k8s/ssl/controller-manager/
scp ca*.pem kube-controller-manager*.pem root@k8s-master03:/opt/k8s/ssl/controller-manager/
  • Create the kubeconfig
#Create the kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.50.110:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig
  
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
  
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
  
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

#Distribute kube-controller-manager.kubeconfig to all master nodes
scp kube-controller-manager.kubeconfig root@k8s-master01:/opt/k8s/ssl/controller-manager/
scp kube-controller-manager.kubeconfig root@k8s-master02:/opt/k8s/ssl/controller-manager/
scp kube-controller-manager.kubeconfig root@k8s-master03:/opt/k8s/ssl/controller-manager/

  • Deploy kube-controller-manager01
#Create the kube-controller-manager systemd unit file
cat > /etc/systemd/system/kube-controller-manager.service  << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --kubeconfig=/opt/k8s/ssl/controller-manager/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/opt/k8s/ssl/controller-manager/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/k8s/ssl/controller-manager/ca.pem \
  --cluster-signing-key-file=/opt/k8s/ssl/controller-manager/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/opt/k8s/ssl/controller-manager/ca.pem \
  --service-account-private-key-file=/opt/k8s/ssl/controller-manager/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/opt/k8s/ssl/controller-manager/kube-controller-manager.pem \
  --tls-private-key-file=/opt/k8s/ssl/controller-manager/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/kube-controller-manager \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#--address: listen on 127.0.0.1
#--kubeconfig: the kubeconfig kube-controller-manager uses to connect to and authenticate with kube-apiserver;
#--cluster-signing-*-file: sign the certificates created by TLS Bootstrap;
#--experimental-cluster-signing-duration: validity of the TLS Bootstrap certificates;
#--root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify kube-apiserver's certificate;
#--service-account-private-key-file: the private key that signs ServiceAccount tokens; must pair with the public key given by kube-apiserver's --service-account-key-file;
#--service-cluster-ip-range: the Service cluster IP range; must match the same flag on kube-apiserver;
#--leader-elect=true: enables leader election; the elected leader does the work while the other instances block;
#--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
#--controllers=*,bootstrapsigner,tokencleaner: the list of enabled controllers; tokencleaner removes expired bootstrap tokens;
#--horizontal-pod-autoscaler-*: custom-metrics parameters, supporting autoscaling/v2alpha1;
#--tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
#--use-service-account-credentials=true: each controller uses a separate ServiceAccount to access the apiserver


#Create the log directory
mkdir -p /log/k8s/kube-controller-manager

#Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

#Distribute the kube-controller-manager systemd unit to the other nodes
scp /etc/systemd/system/kube-controller-manager.service root@k8s-master02:/etc/systemd/system/
scp /etc/systemd/system/kube-controller-manager.service root@k8s-master03:/etc/systemd/system/

  • Deploy kube-controller-manager02
#Create the log directory
mkdir -p /log/k8s/kube-controller-manager

#Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

  • Deploy kube-controller-manager03
#Create the log directory
mkdir -p /log/k8s/kube-controller-manager

#Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

  • Run on any node
#View the current kube-controller-manager leader
kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
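
#A quick local metrics check is also possible: the unit file above sets --address=127.0.0.1 and does not
#disable the default insecure port (10252 in this release), so on any master, assuming the 1.14 defaults:
curl -s http://127.0.0.1:10252/metrics | head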
Deploy kube-scheduler
This component runs on 3 nodes. After startup a leader is chosen by election and the other instances block; when the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.
To secure communication, this document first generates an x509 certificate and private key. kube-scheduler uses the certificate in two situations:
1. When talking to kube-apiserver's secure port
2. When serving Prometheus-format metrics over https
  • Generate certificates
#Create the certificate signing request file
cd /opt/cfssl/ssl

cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.50.110",
      "192.168.50.111",
      "192.168.50.112"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "guangzhou",
        "L": "guangzhou",
        "O": "system:kube-scheduler",
        "OU": "4Paradigm"
      }
    ]
}
EOF

#Generate the certificate and private key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
  • Create the kubeconfig
#Create kube-scheduler.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.50.110:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
  
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
  
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
  
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

#The certificate, key and kube-apiserver address created above are embedded into the kubeconfig file

#Distribute kube-scheduler.kubeconfig to all master nodes
scp kube-scheduler.kubeconfig root@k8s-master01:/opt/k8s/ssl/scheduler/
scp kube-scheduler.kubeconfig root@k8s-master02:/opt/k8s/ssl/scheduler/
scp kube-scheduler.kubeconfig root@k8s-master03:/opt/k8s/ssl/scheduler/

  • Deploy kube-scheduler01
#Create the kube-scheduler systemd unit file
cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/opt/k8s/ssl/scheduler/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/scheduler \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#Create the log directory
mkdir -p /log/k8s/scheduler

#Start the kube-scheduler service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler

#Sync the kube-scheduler systemd unit to the other nodes
scp /etc/systemd/system/kube-scheduler.service root@k8s-master02:/etc/systemd/system/
scp /etc/systemd/system/kube-scheduler.service root@k8s-master03:/etc/systemd/system/

  • Deploy kube-scheduler02
#Create the log directory
mkdir -p /log/k8s/scheduler/

#Start the kube-scheduler service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
  • Deploy kube-scheduler03
#Create the log directory
mkdir -p /log/k8s/scheduler

#Start the kube-scheduler service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
  • Run on any node
#Check the ports kube-scheduler listens on
netstat -lnpt|grep kube-sche

#View the current kube-scheduler leader
kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
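
#The scheduler's unit file likewise binds the default insecure port (10251 in this release) to
#127.0.0.1, so a quick local metrics check is possible (assuming the 1.14 defaults):
curl -s http://127.0.0.1:10251/metrics | head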
  • Verify functionality on all master nodes
kubectl get componentstatuses
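
#When everything is healthy the output looks roughly like this (illustrative):
#NAME                 STATUS    MESSAGE             ERROR
#controller-manager   Healthy   ok
#scheduler            Healthy   ok
#etcd-0               Healthy   {"health":"true"}
#etcd-1               Healthy   {"health":"true"}
#etcd-2               Healthy   {"health":"true"}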
Configure the node machines
k8s-node01:192.168.50.113
k8s-node02:192.168.50.114

The kubernetes node machines run the following components

docker
kubelet
kube-proxy

Install docker

vim docker_install.sh
Script contents:
#!/bin/bash
info_echo(){
    echo -e "\\033[32m [Info]: $1 \\033[0m"
}

#Install docker
install_docker(){

    info_echo "install docker"
    local docker_ver=$1
    #Configure the epel repo
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    yum -y install epel-release
    yum clean all
    rm -rf /var/cache/yum
    yum makecache
    #Install dependencies
    yum -y install yum-utils device-mapper-persistent-data lvm2  container-selinux
    #Configure the docker-ce yum repo
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum makecache fast
    #Install the pinned docker version
    yum -y install ${docker_ver}
    #Start docker
    systemctl start docker
    systemctl enable docker
}

config_docker(){

    info_echo "config_docker"
    local docker_data_dir=$1
    mkdir -p ${docker_data_dir}
    #Use the Tsinghua University mirror
    sed -i 's@https://download.docker.com@https://mirrors.tuna.tsinghua.edu.cn/docker-ce@' /etc/yum.repos.d/docker-ce.repo
cat >/etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ],
  "graph": "${docker_data_dir}"
}
EOF
    systemctl restart docker

}

install_docker docker-ce-18.06.0.ce-3.el7
config_docker /data/docker


#Run the install script
bash docker_install.sh

  • Configure docker to use the flannel network
sed -i '/^Type=notify/a\EnvironmentFile=/run/flannel/docker' /etc/systemd/system/multi-user.target.wants/docker.service
sed -i 's@ExecStart=/usr/bin/dockerd.*@ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS@' /etc/systemd/system/multi-user.target.wants/docker.service

#Restart docker to apply
systemctl daemon-reload
systemctl restart docker

#Verify the docker network (inspect the container just started, by its ID)
docker run -itd centos
docker container inspect 92d57c32850c

#If it did not take effect, edit this file instead
/usr/lib/systemd/system/docker.service

#View the subnets of all hosts (run on any etcd cluster member)
cd /opt/etcd/ssl/
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem ls /kubernetes/network/subnets

Deploy kubelet

  • Create config files (run on the CA host)
#Create the kubelet bootstrap kubeconfig (node01)
cd /opt/cfssl/ssl
## Create the token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:k8s-node01 \
  --kubeconfig ~/.kube/config)
  
## Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.50.110:6443 \
  --kubeconfig=kubelet-bootstrap-k8s-node01.kubeconfig

## Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-k8s-node01.kubeconfig
  
## Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-k8s-node01.kubeconfig
  
## Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-node01.kubeconfig


#Create the kubelet bootstrap kubeconfig (node02)
## Create the token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:k8s-node02 \
  --kubeconfig ~/.kube/config)
  
## Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.50.110:6443 \
  --kubeconfig=kubelet-bootstrap-k8s-node02.kubeconfig

## Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-k8s-node02.kubeconfig
  
## Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-k8s-node02.kubeconfig
  
## Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-node02.kubeconfig

#Create one bootstrap kubeconfig per node; for additional nodes repeat the steps above, replacing k8s-node01 with the new node's name
#The kubeconfig carries a token rather than a certificate; the certificate is created later by kube-controller-manager

#List the tokens kubeadm created for each node
kubeadm token list --kubeconfig ~/.kube/config

#A token is valid for 1 day; once expired it can no longer be used and is cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled)
#When kube-apiserver accepts a kubelet bootstrap token, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers

#View the Secrets associated with the tokens
kubectl get secrets  -n kube-system

#Distribute the CA files
scp ca*.pem root@k8s-node01:/opt/k8s/ssl/kubelet/
scp ca*.pem root@k8s-node02:/opt/k8s/ssl/kubelet/

#Distribute the bootstrap kubeconfig files
scp kubelet-bootstrap-k8s-node01.kubeconfig root@k8s-node01:/opt/k8s/ssl/kubelet/kubelet-bootstrap.kubeconfig
scp kubelet-bootstrap-k8s-node02.kubeconfig root@k8s-node02:/opt/k8s/ssl/kubelet/kubelet-bootstrap.kubeconfig

  • Create the kubelet config file on node01
#Create the kubelet parameter config file

cd /opt/k8s/ssl/kubelet/
cat > kubelet.config.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/k8s/ssl/kubelet/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.50.113",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}
EOF

#address: the kubelet API listen address; must not be 127.0.0.1, otherwise kube-apiserver, heapster etc. cannot call the kubelet API
#readOnlyPort=0: disables the read-only port (default 10255)
#authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed
#authentication.x509.clientCAFile: the CA that signs client certificates, enabling x509 certificate authentication
#authentication.webhook.enabled=true: enables https bearer token authentication
#Requests that pass neither x509 nor webhook authentication (from kube-apiserver or other clients) are rejected as Unauthorized
#authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a user or group may operate on a resource (RBAC)
#featureGates.RotateKubeletClientCertificate, featureGates.RotateKubeletServerCertificate: rotate certificates automatically; certificate lifetime is governed by kube-controller-manager's --experimental-cluster-signing-duration
#The kubelet must run as root
  • Deploy and start kubelet on node01
#Create the kubelet systemd unit file 
cat >/etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/k8s
ExecStart=/opt/k8s/bin/kubelet \
  --bootstrap-kubeconfig=/opt/k8s/ssl/kubelet/kubelet-bootstrap.kubeconfig \
  --cert-dir=/opt/k8s/ssl/kubelet \
  --kubeconfig=/opt/k8s/ssl/kubelet/kubelet.kubeconfig \
  --config=/opt/k8s/ssl/kubelet/kubelet.config.json \
  --hostname-override=192.168.50.113 \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \
  --allow-privileged=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/kubelet \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#Bootstrap Token Auth and granting permissions
On startup, the kubelet checks whether the file given by --kubeconfig exists; if it does not, it uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

kube-apiserver authenticates the token in the request (the token created earlier with kubeadm); on success it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.

By default this user and group have no permission to create CSRs, so the kubelet fails to start with errors like the following:
sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378   26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

#Fix: create a clusterrolebinding that binds group system:bootstrappers to clusterrole system:node-bootstrapper (run on any k8s-master host)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

#Create the log directory
mkdir -p /log/k8s/kubelet

#Start kubelet
systemctl daemon-reload 
systemctl enable kubelet 
systemctl restart kubelet

  • Create the kubelet config file on node02
#Create the kubelet parameter config file

cd /opt/k8s/ssl/kubelet/
cat > kubelet.config.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/k8s/ssl/kubelet/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.50.114",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}
EOF
  • Deploy and start kubelet on node02
#Create the kubelet systemd unit file 
cat >/etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/k8s
ExecStart=/opt/k8s/bin/kubelet \
  --bootstrap-kubeconfig=/opt/k8s/ssl/kubelet/kubelet-bootstrap.kubeconfig \
  --cert-dir=/opt/k8s/ssl/kubelet \
  --kubeconfig=/opt/k8s/ssl/kubelet/kubelet.kubeconfig \
  --config=/opt/k8s/ssl/kubelet/kubelet.config.json \
  --hostname-override=192.168.50.114 \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \
  --allow-privileged=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/kubelet \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#Create the log directory
mkdir -p /log/k8s/kubelet

#Start kubelet
systemctl daemon-reload 
systemctl enable kubelet 
systemctl restart kubelet

Approve kubelet CSRs

After startup, the kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and private key and writes the file referenced by --kubeconfig.

Note: kube-controller-manager only creates certificates and keys for TLS Bootstrap when its --cluster-signing-cert-file and --cluster-signing-key-file flags are configured.
At this point the kubelet process is running, but its listening port is not yet up; the steps below are required!

CSRs can be approved manually or automatically. The automatic approach is recommended because, from v1.8 on, the certificates generated from approved CSRs can be rotated automatically; a manual sketch follows for reference.
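
#For the manual path (the CSR name csr-xxxxx is a placeholder; use a name from the kubectl get csr output):
#List pending CSRs
kubectl get csr
#Approve one by name
kubectl certificate approve csr-xxxxx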

  • Configure automatic CSR approval (run on a master node)
#Automatically approve CSR requests
cd /opt/k8s

cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF


#auto-approve-csrs-for-group: automatically approves a node's first CSR; note the first CSR's requesting Group is system:bootstrappers;
#node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the generated certificates carry Group system:nodes;
#node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the generated certificates carry Group system:nodes;

#Apply the config
kubectl apply -f csr-crb.yaml

#Check that the nodes have registered
kubectl get nodes


  • APIs exposed by the kubelet

    After startup the kubelet listens on several ports, serving requests from kube-apiserver and other components
netstat -lnpt|grep kubelet

#4194: cadvisor http service
#10248: healthz http service
#10250: https API service; note: the read-only port 10255 is not open

  • kubelet API authentication and authorization

    The kubelet config file kubelet.config.json sets the following authentication parameters
authentication.anonymous.enabled: false, so anonymous access to port 10250 is not allowed;
authentication.x509.clientCAFile: the CA that signs client certificates, enabling https certificate authentication
authentication.webhook.enabled=true: enables https bearer token authentication
And the following authorization parameter:
authorization.mode=Webhook: enables RBAC authorization;

On receiving a request, the kubelet authenticates it with the certificate signed by clientCAFile, or checks whether the bearer token is valid. If both fail, the request is rejected as Unauthorized:

# Certificate authentication and authorization:
# a certificate with insufficient permissions
curl -k -s --cacert /opt/k8s/ssl/controller-manager/ca.pem --cert /opt/k8s/ssl/controller-manager/kube-controller-manager.pem --key /opt/k8s/ssl/controller-manager/kube-controller-manager-key.pem https://192.168.50.113:10250/metrics

#the admin certificate created while deploying kubectl, which has the highest privileges
curl -k -s --cacert /opt/cfssl/ssl/ca.pem --cert /opt/cfssl/ssl/admin.pem --key /opt/cfssl/ssl/admin-key.pem https://192.168.50.113:10250/metrics|head

The --cacert, --cert and --key values must be file paths; a relative path like ./admin.pem must keep the ./ prefix, otherwise the request returns 401 Unauthorized;

#Bearer token authentication and authorization:
    Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so it has permission to call the kubelet API:
    kubectl create sa kubelet-api-test
    kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
    SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
    TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
    echo ${TOKEN}
    curl -k -s --cacert /data/ssl/k8sca/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.50.113:10250/metrics|head
    
kubelet.config.json sets authentication.anonymous.enabled to false, so anonymous access to the https service on port 10250 is not allowed

#Accessing the kube-apiserver secure port from a browser
https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/A.%E6%B5%8F%E8%A7%88%E5%99%A8%E8%AE%BF%E9%97%AEkube-apiserver%E5%AE%89%E5%85%A8%E7%AB%AF%E5%8F%A3.md
Deploy kube-proxy
  • Generate certificates (run on the CA host)
#Create the certificate signing request file
cd /opt/cfssl/ssl
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "guangzhou",
      "L": "guangzhou",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

#CN: sets the certificate's User to system:kube-proxy;
#the predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to ClusterRole system:node-proxier, which grants the permissions to call kube-apiserver's proxy-related APIs;
#this certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;


#Generate the certificate and private key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • Create the kubeconfig (run on the master host)
#Create the kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.50.110:6443 \
  --kubeconfig=kube-proxy.kubeconfig
  
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
  
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

#--embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig (without it, only the certificate file paths are written);

#Distribute the kubeconfig
scp kube-proxy.kubeconfig root@k8s-node01:/opt/k8s/ssl/kube-proxy/
scp kube-proxy.kubeconfig root@k8s-node02:/opt/k8s/ssl/kube-proxy/

  • Deploy and start kube-proxy on node01
#Install dependencies
yum install -y epel-release wget conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs
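
#ipvs mode relies on more kernel modules than the single ip_vs loaded above; a sketch of loading the
#commonly required set on CentOS 7 (module names are the stock kernel ones and may vary by kernel):
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  /usr/sbin/modprobe ${mod}
done
lsmod | grep ip_vs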

cd /opt/k8s/ssl/kube-proxy
#Create the kube-proxy config file
cat >kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.50.113
clientConnection:
    kubeconfig: /opt/k8s/ssl/kube-proxy/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 192.168.50.113:10256
hostnameOverride: 192.168.50.113
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.50.113:10249
mode: "ipvs"
EOF

#bindAddress: the listen address;
#clientConnection.kubeconfig: the kubeconfig used to connect to the apiserver;
#clusterCIDR: kube-proxy uses this range to distinguish cluster-internal from external traffic; SNAT for requests to Service IPs only happens when --cluster-cidr or --masquerade-all is set;
#hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find its Node after startup and will not create any ipvs rules;
#mode: use ipvs;
#Adjust the host-specific values for each node; clusterCIDR is the flannel network range.

#Create the kube-proxy systemd unit file
cat >/etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/k8s
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/opt/k8s/ssl/kube-proxy/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/kube-proxy \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Create the log directory
mkdir -p /log/k8s/kube-proxy

#Start the kube-proxy service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy


#View the ipvs rules
ipvsadm -ln

  • Deploy and start kube-proxy on node02
#Install dependencies
yum install -y epel-release wget conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs

cd /opt/k8s/ssl/kube-proxy
#Create the kube-proxy config file
cat >kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.50.114
clientConnection:
    kubeconfig: /opt/k8s/ssl/kube-proxy/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 192.168.50.114:10256
hostnameOverride: 192.168.50.114
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.50.114:10249
mode: "ipvs"
EOF

#Create the kube-proxy systemd unit file
cat >/etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/k8s
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/opt/k8s/ssl/kube-proxy/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/kube-proxy \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Create the log directory
mkdir -p /log/k8s/kube-proxy

#Start the kube-proxy service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy


#View the ipvs rules
ipvsadm -ln

  • Grant the apiserver access to the kubelet: RBAC authorization for the kubernetes user
cat > apiserver-to-kubelet.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kubernetes-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kubernetes
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubernetes-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

#Create the binding
kubectl create -f apiserver-to-kubelet.yaml 
Verify cluster functionality
  • Check node status
kubectl get nodes 
  • Deploy an nginx test
cd /data/
#Create the test manifest
cat >nginx-web.yml  <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    tier: frontend
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-con
  labels:
    tier: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx-pod
        image: nginx
        ports:
        - containerPort: 80
EOF

#Apply the manifest
kubectl create -f nginx-web.yml

#View the pods
kubectl get pod -o wide

#View the service's cluster IP and NodePort
kubectl get svc

#Access nginx (use the NodePort shown by the previous command)
curl -I 192.168.50.113:31975
Deploy cluster add-ons
Deploy coredns
  • Run on master01
#Deploy using the yml files shipped in the binary tarball
cd /opt/src/kubernetes
tar -zxf kubernetes-src.tar.gz
cd /opt/src/kubernetes/cluster/addons/dns/coredns

#Replace $DNS_SERVER_IP, $DNS_DOMAIN and $SERVICE_CLUSTER_IP_RANGE in transforms2sed.sed with 10.254.0.2, cluster.local and 10.254.0.0/16

sed -i 's#/#@#g' transforms2sed.sed
sed -i 's@$DNS_SERVER_IP@10.254.0.2@' transforms2sed.sed
sed -i 's@$DNS_DOMAIN@cluster.local@' transforms2sed.sed
sed -i 's@$SERVICE_CLUSTER_IP_RANGE@10.254.0.0/16@' transforms2sed.sed

#Generate the yaml file
sed -f transforms2sed.sed coredns.yaml.base > coredns.yaml

#Replace the blocked image registry in coredns.yaml
sed -i 's@k8s.gcr.io@registry.cn-hangzhou.aliyuncs.com/google_containers@' coredns.yaml

#Apply
kubectl apply -f coredns.yaml

#Check the coredns pod status
kubectl get pod -n kube-system

Verify coredns
#Create a Deployment
cat > my-nginx.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
    name: my-nginx
spec:
    replicas: 2
    template:
        metadata:
            labels:
                run: my-nginx
        spec:
            containers:
            -   name: my-nginx
                image: nginx:1.7.9
                ports:
                -   containerPort: 80
EOF

kubectl apply -f my-nginx.yaml
kubectl expose deploy my-nginx

#Create another pod; check that its /etc/resolv.conf contains the DNS server configured for the kubelet, and that the service name my-nginx resolves to the my-nginx cluster IP
cat > dnsutils-ds.yaml << EOF
apiVersion: v1
kind: Service
metadata:
    name: dnsutils-ds
    labels:
        app: dnsutils-ds
spec:
    type: NodePort
    selector:
        app: dnsutils-ds
    ports:
    -   name: http
        port: 80
        targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
    name: dnsutils-ds
    labels:
        addonmanager.kubernetes.io/mode: Reconcile
spec:
    template:
        metadata:
            labels:
                app: dnsutils-ds
        spec:
            containers:
            -   name: my-dnsutils
                image: tutum/dnsutils:latest
                command:
                    -   sleep
                    -   "3600"
                ports:
                -   containerPort: 80
EOF

kubectl apply -f dnsutils-ds.yaml

#View the dnsutils-ds pods
kubectl get pods -l app=dnsutils-ds

#Open a shell in one of the pods (use a pod name from the previous command)
kubectl -it exec dnsutils-ds-h8ml2 bash

#Check the DNS settings
cat /etc/resolv.conf

#Check name resolution
nslookup my-nginx
nslookup www.baidu.com
nslookup kubernetes

Deploy dashboard
  • Run on master01
#Deploy using the yml files shipped in the binary tarball
cd /opt/src/kubernetes/cluster/addons/dashboard

#Change the service type to NodePort so the dashboard can be reached at NodeIP:NodePort
sed -i '/spec:/a\  type: NodePort' dashboard-service.yaml

#Replace the blocked image registry
sed -i 's@k8s.gcr.io@gcr.azk8s.cn/google_containers@g' *

#Apply all manifests
kubectl apply -f  .

#View the assigned NodePort
kubectl get deployment kubernetes-dashboard  -n kube-system

kubectl --namespace kube-system get pods -o wide

kubectl get services kubernetes-dashboard -n kube-system

#View the command-line flags the dashboard supports
kubectl exec --namespace kube-system -it kubernetes-dashboard-85bcf5dbf8-5l56v  -- /dashboard --help

#Access the dashboard
1. the kubernetes-dashboard service exposes a NodePort, so the dashboard is reachable at https://NodeIP:NodePort;
2. via kube-apiserver;
3. via kubectl proxy:


#Create the token and kubeconfig used to log in to the Dashboard
kubectl create sa dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')

DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')

echo ${DASHBOARD_LOGIN_TOKEN}
#Log in to the Dashboard with the printed token
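
#A kubeconfig for the login screen can be built around that token; a minimal sketch
#(the file name dashboard.kubeconfig is arbitrary):
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/cfssl/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.50.110:6443 \
  --kubeconfig=dashboard.kubeconfig
kubectl config set-credentials dashboard-admin \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=dashboard.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=dashboard-admin \
  --kubeconfig=dashboard.kubeconfig
kubectl config use-context default --kubeconfig=dashboard.kubeconfig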

Deploy metrics-server
  • Run on master01
cd /opt/src/
git clone https://github.com/kubernetes-incubator/metrics-server

#Create the certificate signing request file
cd /opt/cfssl/ssl
cat > metrics-server-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "guangzhou",
      "L": "guangzhou",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

#Generate the certificate and private key
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
  
#Copy the certificates into the apiserver certificate directory
cd /opt/cfssl/ssl/
cp -rf metrics-server-*.pem /opt/k8s/ssl/apiserver/
scp  metrics-server-*.pem root@k8s-master02:/opt/k8s/ssl/apiserver/
scp  metrics-server-*.pem root@k8s-master03:/opt/k8s/ssl/apiserver/

#New apiserver flags that enable the aggregation layer
--requestheader-client-ca-file=/opt/k8s/ssl/apiserver/ca.pem
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/opt/k8s/ssl/apiserver/metrics-server.pem
--proxy-client-key-file=/opt/k8s/ssl/apiserver/metrics-server-key.pem
--runtime-config=api/all=true
--enable-aggregator-routing=true

cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/opt/k8s/ssl/apiserver/encryption-config.yaml \
  --advertise-address=192.168.50.111 \
  --bind-address=192.168.50.111 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --tls-private-key-file=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --client-ca-file=/opt/k8s/ssl/apiserver/ca.pem \
  --kubelet-client-certificate=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --kubelet-client-key=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --service-account-key-file=/opt/k8s/ssl/apiserver/ca-key.pem \
  --etcd-cafile=/opt/k8s/ssl/apiserver/ca.pem \
  --etcd-certfile=/opt/k8s/ssl/apiserver/kubernetes.pem \
  --etcd-keyfile=/opt/k8s/ssl/apiserver/kubernetes-key.pem \
  --etcd-servers=https://192.168.50.110:2379,https://192.168.50.111:2379,https://192.168.50.112:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/log/k8s/apiserver/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/log/k8s/apiserver \
  --requestheader-client-ca-file=/opt/k8s/ssl/apiserver/ca.pem \
  --requestheader-allowed-names=aggregator \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/opt/k8s/ssl/apiserver/metrics-server.pem \
  --proxy-client-key-file=/opt/k8s/ssl/apiserver/metrics-server-key.pem \
  --runtime-config=api/all=true \
  --enable-aggregator-routing=true \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Add the following line to the kube-controller-manager unit file:
  --horizontal-pod-autoscaler-use-rest-clients=true
  
#Restart kube-apiserver and kube-controller-manager
systemctl daemon-reload
systemctl restart kube-apiserver
systemctl restart kube-controller-manager


cd /opt/src/metrics-server/deploy/1.8+

#Replace the blocked image registry
sed -i 's@k8s.gcr.io@gcr.azk8s.cn/google_containers@g' *

Add the following under imagePullPolicy: Always:
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        
#Apply the manifests
kubectl apply -f .

#Check status
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl get svc -n kube-system  metrics-server

#View the metrics that metrics-server exposes
1. via kube-apiserver or kubectl proxy
curl -k -s --cacert /opt/cfssl/ssl/ca.pem --cert /opt/cfssl/ssl/admin.pem --key /opt/cfssl/ssl/admin-key.pem https://192.168.50.110:6443/apis/metrics.k8s.io/v1beta1/nodes
curl -k -s --cacert /opt/cfssl/ssl/ca.pem --cert /opt/cfssl/ssl/admin.pem --key /opt/cfssl/ssl/admin-key.pem https://192.168.50.110:6443/apis/metrics.k8s.io/v1beta1/pods
curl -k -s --cacert /opt/cfssl/ssl/ca.pem --cert /opt/cfssl/ssl/admin.pem --key /opt/cfssl/ssl/admin-key.pem https://192.168.50.110:6443/apis/metrics.k8s.io/v1beta1/namespace/pods/

2. directly with kubectl
kubectl get --raw "/apis/metrics.k8s.io/v1beta1"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

#Use kubectl top to view node resource usage
kubectl top node
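
#kubectl top works for pods as well, e.g.:
kubectl top pod -n kube-system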
