Kubernetes Practice Guide (Part 3)

I. apiserver high-availability proxy
II. apiserver installation
III. controller-manager installation
IV. scheduler installation
V. Notes

I. apiserver high-availability proxy

To make kube-apiserver highly available, a layer-4 (TCP) proxy is set up with nginx.

1. Install and deploy nginx

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m shell -a "yum install nginx -y "
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m shell -a "mkdir /etc/kube-nginx/{conf,bin} -pv "
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/usr/sbin/nginx dest=/etc/kube-nginx/bin/" 
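
The layer-4 proxy below relies on nginx's stream module, so it is worth confirming the packaged binary was built with it; a minimal check against the same inventory (not part of the original run):

# confirm the nginx build includes the stream (TCP) module used below
ansible all -i /root/udp/hosts.ini -m shell -a "nginx -V 2>&1 | grep -o 'with-stream[^ ]*'"

If this reports with-stream=dynamic, the module ships separately and the configuration would additionally need a load_module directive for ngx_stream_module.so.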

2. Configuration file

[root@master1 work]# cat kube-nginx.conf
worker_processes 1;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 192.168.192.222:6443        max_fails=3 fail_timeout=30s;
        server 192.168.192.223:6443        max_fails=3 fail_timeout=30s;
        server 192.168.192.224:6443        max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./kube-nginx.conf dest=/etc/kube-nginx/conf/" 

3. Configure the systemd service

[root@master1 work]# vim kube-nginx.service
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/etc/kube-nginx/bin/nginx -c /etc/kube-nginx/conf/kube-nginx.conf -p /etc/kube-nginx -t
ExecStart=/etc/kube-nginx/bin/nginx -c /etc/kube-nginx/conf/kube-nginx.conf -p /etc/kube-nginx
ExecReload=/etc/kube-nginx/bin/nginx -c /etc/kube-nginx/conf/kube-nginx.conf -p /etc/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@master1 service]# ansible all  -i /root/udp/hosts.ini -m copy -a "src=./kube-nginx.service dest=/etc/systemd/system/" 
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload ;systemctl restart kube-nginx &&systemctl status kube-nginx"
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl enable kube-nginx"
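
As a quick sanity check (not from the original session), every node should now have the proxy listening on 127.0.0.1:8443:

ansible all -i /root/udp/hosts.ini -m shell -a "ss -lntp | grep 8443"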

II. apiserver installation

1. Prepare the certificate

[root@master1 cert]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.192.222",
    "192.168.192.223",
    "192.168.192.224",
    "10.244.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem  -config=./ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@master1 cert]# ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

Note: the hosts field defines the certificate's authorized scope; a node or service outside this list that uses the certificate will get a certificate-mismatch error.
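
To double-check which names and IPs actually ended up in the certificate, the SAN list can be inspected with openssl (a verification sketch, not from the original session); it should list the three master IPs, 127.0.0.1, 10.244.0.1 and the kubernetes.default.* names:

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'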

2. Distribute the certificate

[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kubernetes.pem dest=/etc/kubernetes/cert/" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kubernetes-key.pem dest=/etc/kubernetes/cert/" 

3. Create the encryption configuration file

[root@master1 work]# vim encryption-config.yaml 
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: yTGgrO7RkJUgT2VcFX7RNWSEJ2Pg+n2TkoodX413JZY=
      - identity: {}

[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./encryption-config.yaml  dest=/etc/kubernetes/"   
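
The secret value above is a base64-encoded 32-byte AES key; a key of this form can be generated as follows and pasted into encryption-config.yaml (a sketch, not the command that produced the value shown):

head -c 32 /dev/urandom | base64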

4. Create the audit policy file

[root@master1 work]# vim audit-policy.yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io

  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived

[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./audit-policy.yaml  dest=/etc/kubernetes/audit-policy.yaml" 

5. metrics-server certificate

[root@master1 cert]# vim proxy-client-csr.json
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem  -config=./ca-config.json -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
[root@master1 cert]# ls proxy-client*.pem
proxy-client-key.pem  proxy-client.pem
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./proxy-client-key.pem  dest=/etc/kubernetes/cert/" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./proxy-client.pem  dest=/etc/kubernetes/cert/" 
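
The CN of this certificate ("aggregator") must match the --requestheader-allowed-names value passed to kube-apiserver below; a quick hedged check of what was issued:

openssl x509 -in proxy-client.pem -noout -subject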

6. Create the systemd service

[root@master1 service]# vim kube-apiserver.service.template
[root@master1 service]# cat kube-apiserver.service.template
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/k8s/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
  --advertise-address=##NODE_IP## \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  --bind-address=##NODE_IP## \
  --secure-port=6443 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --insecure-port=0 \
  --audit-dynamic-configuration \
  --audit-log-maxage=15 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-truncate-enabled \
  --audit-log-path=/data/k8s/k8s/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="aggregator" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-admission-plugins=NodeRestriction \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --kubelet-https=true \
  --kubelet-timeout=10s \
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \
  --service-cluster-ip-range=10.244.0.0/16 \
  --service-node-port-range=30000-32767 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master1 service]# sed    "s@##NODE_IP##@192.168.192.222@g"  kube-apiserver.service.template  &> ./kube-apiserver.service.master1
[root@master1 service]# sed    "s@##NODE_IP##@192.168.192.223@g"  kube-apiserver.service.template  &> ./kube-apiserver.service.master2
[root@master1 service]# sed    "s@##NODE_IP##@192.168.192.224@g"  kube-apiserver.service.template  &> ./kube-apiserver.service.master3
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "mkdir -pv /data/k8s/k8s/kube-apiserver/" 
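
Before distributing the rendered unit files, it is worth confirming the ##NODE_IP## placeholder was substituted on each copy (a quick check, not from the original session):

grep -E 'advertise-address|bind-address' kube-apiserver.service.master*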

7. kube-apiserver startup parameter notes

General parameters:

advertise-address=  # IP address the apiserver advertises to the cluster
default-not-ready-toleration-seconds=360   # toleration seconds for notReady:NoExecute; default 300; added to every pod that does not set this toleration itself
default-unreachable-toleration-seconds=360   # toleration seconds for unreachable:NoExecute; default 300; added to every pod that does not set this toleration itself
feature-gates=DynamicAuditing=true   # set of key=value pairs that toggle experimental features
max-mutating-requests-inflight=2000   # maximum number of mutating requests handled concurrently; requests beyond this are rejected
max-requests-inflight=4000   # maximum number of non-mutating requests handled concurrently

etcd parameters:
default-watch-cache-size=200   # default watch cache size
delete-collection-workers=2   # number of workers for DeleteCollection calls, used to speed up namespace cleanup; default 1
encryption-provider-config=/etc/kubernetes/encryption-config.yaml   # configuration for encrypting secret data at rest in etcd
etcd-cafile=/etc/kubernetes/cert/ca.pem   # CA certificate used to verify etcd
etcd-certfile=/etc/kubernetes/cert/kubernetes.pem   # SSL client certificate for etcd
etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem   # SSL client key for etcd
etcd-servers=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379   # etcd server addresses

Secure serving:
bind-address=##NODE_IP##   # address the HTTPS server listens on; default 0.0.0.0
secure-port=6443   # HTTPS port of the apiserver; 0 disables HTTPS; default 6443
tls-cert-file=/etc/kubernetes/cert/kubernetes.pem   # server certificate for HTTPS
tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem   # private key matching the TLS certificate
insecure-port=0   # insecure (HTTP) port; 0 disables it

Audit:
audit-dynamic-configuration   # enable dynamic audit configuration
audit-log-maxage=15   # maximum number of days to retain audit log files
audit-log-maxbackup=3   # maximum number of audit log files to keep
audit-log-maxsize=100   # maximum size (MB) of an audit log file before rotation
audit-log-truncate-enabled   # enable truncating/batching of events that are too large
audit-log-path=/data/k8s/k8s/kube-apiserver/audit.log   # audit log file path
audit-policy-file=/etc/kubernetes/audit-policy.yaml   # audit policy file
profiling   # enable profiling

Authentication and authorization:
anonymous-auth=false   # disable anonymous requests
client-ca-file=/etc/kubernetes/cert/ca.pem   # CA used to verify client certificates
enable-bootstrap-token-auth   # allow secrets of type bootstrap.kubernetes.io/token in the kube-system namespace to be used for TLS bootstrap authentication
requestheader-allowed-names="aggregator"   # list of Common Names allowed in client certificates that set request headers
requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem   # CA used to verify those client certificates
service-account-key-file=/etc/kubernetes/cert/ca.pem   # key used to verify ServiceAccount tokens; pairs with the controller-manager's service-account-private-key-file
authorization-mode=Node,RBAC   # enable the Node and RBAC authorization modes
runtime-config=api/all=true   # enable all API groups and versions
enable-admission-plugins=NodeRestriction   # admission plugins to enable
allow-privileged=true   # allow privileged containers
apiserver-count=3   # number of apiservers in the cluster
event-ttl=168h   # retention period for events
kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem   # CA used to verify kubelet certificates
kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem   # client certificate the apiserver presents to kubelets
kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem   # client key the apiserver uses when connecting to kubelets
kubelet-https=true   # connect to kubelets over HTTPS
kubelet-timeout=10s   # timeout for kubelet operations
proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem   # client certificate used by the aggregation layer when calling extension apiservers (e.g. metrics-server)
proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem   # matching private key for the aggregation layer
service-cluster-ip-range=10.244.0.0/16  # CIDR for Service cluster IPs
service-node-port-range=30000-32767   # port range for NodePort services
logtostderr=true   # log to stderr instead of files
v=2   # log verbosity level

8. Start and verify

[root@master1 service]# ansible master1 -i /root/udp/hosts.ini -m copy  -a "src=./kube-apiserver.service.master1 dest=/etc/systemd/system/kube-apiserver.service"
[root@master1 service]# ansible master2 -i /root/udp/hosts.ini -m copy  -a "src=./kube-apiserver.service.master2 dest=/etc/systemd/system/kube-apiserver.service"
[root@master1 service]# ansible master3 -i /root/udp/hosts.ini -m copy  -a "src=./kube-apiserver.service.master3 dest=/etc/systemd/system/kube-apiserver.service"
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload;systemctl restart kube-apiserver.service;systemctl status kube-apiserver.service" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl enable kube-apiserver.service"
[root@master1 service]# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379  --cacert=/etc/kubernetes/cert/ca.pem  --cert=/etc/etcd/cert/etcd.pem  --key=/etc/etcd/cert/etcd-key.pem   get /registry/ --prefix --keys-only
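
With the audit policy in place, the apiserver should also be writing the audit log at the path configured in the unit file; a minimal check against the same inventory:

ansible master -i /root/udp/hosts.ini -m shell -a "tail -n 2 /data/k8s/k8s/kube-apiserver/audit.log"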

9. Check cluster information

[root@master1 service]# kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:8443
[root@master1 service]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.244.0.1   <none>        443/TCP   17m
[root@master1 service]#  kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-2               Healthy     {"health": "true"}                                                                          
etcd-1               Healthy     {"health": "true"}                                                                          
etcd-0               Healthy     {"health": "true"}

Note: scheduler and controller-manager report Unhealthy at this point because they have not been installed yet (sections III and IV below).

10. Authorize apiserver access to kubelet

When commands such as kubectl exec, run, and logs are executed, the apiserver forwards the request to the kubelet's HTTPS port.
Define an RBAC rule that grants the user named in the apiserver's certificate (kubernetes.pem, CN: kubernetes) access to the kubelet API:

[root@master1 service]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

III. controller-manager installation

1. Create the certificate

[root@master1 cert]# vim kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.192.222",
      "192.168.192.223",
      "192.168.192.224"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "HangZhou",
        "L": "HangZhou",
        "O": "system:kube-controller-manager",
        "OU": "FirstOne"
      }
    ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem   -ca-key=./ca-key.pem   -config=./ca-config.json   -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@master1 cert]# ls kube-controller-manager*
kube-controller-manager.csr  kube-controller-manager-csr.json  kube-controller-manager-key.pem  kube-controller-manager.pem
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kube-controller-manager-key.pem  dest=/etc/kubernetes/cert/" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kube-controller-manager.pem  dest=/etc/kubernetes/cert/" 

2. Create the kubeconfig

[root@master1 cert]# kubectl config set-cluster kubernetes   --certificate-authority=./ca.pem   --embed-certs=true   --server=https://127.0.0.1:8443   --kubeconfig=kube-controller-manager.kubeconfig
[root@master1 cert]# kubectl config set-credentials system:kube-controller-manager   --client-certificate=kube-controller-manager.pem   --client-key=kube-controller-manager-key.pem   --embed-certs=true   --kubeconfig=kube-controller-manager.kubeconfig
[root@master1 cert]# kubectl config set-context system:kube-controller-manager   --cluster=kubernetes   --user=system:kube-controller-manager   --kubeconfig=kube-controller-manager.kubeconfig
[root@master1 cert]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy  -a "src=./kube-controller-manager.kubeconfig  dest=/etc/kubernetes/" 
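
A quick way to confirm the kubeconfig points at the local nginx proxy (https://127.0.0.1:8443) and embeds the system:kube-controller-manager user (certificate data is redacted in the output):

kubectl config view --kubeconfig=kube-controller-manager.kubeconfig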

3. Create the systemd service

[root@master1 service]# vim kube-controller-manager.service.template
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/k8s/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --profiling \
  --cluster-name=kubernetes \
  --controllers=*,bootstrapsigner,tokencleaner \
  --kube-api-qps=1000 \
  --kube-api-burst=2000 \
  --leader-elect \
  --use-service-account-credentials \
  --concurrent-service-syncs=2 \
  --bind-address=##NODE_IP## \
  --secure-port=10252 \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --port=0 \
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-allowed-names="" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=876000h \
  --horizontal-pod-autoscaler-sync-period=10s \
  --concurrent-deployment-syncs=10 \
  --concurrent-gc-syncs=30 \
  --node-cidr-mask-size=24 \
  --service-cluster-ip-range=10.244.0.0/16 \
  --pod-eviction-timeout=6m \
  --terminated-pod-gc-threshold=10000 \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

[root@master1 service]# sed 's@##NODE_IP##@192.168.192.222@g' kube-controller-manager.service.template &> ./kube-controller-manager.service.master1 
[root@master1 service]# sed 's@##NODE_IP##@192.168.192.223@g' kube-controller-manager.service.template &> ./kube-controller-manager.service.master2
[root@master1 service]# sed 's@##NODE_IP##@192.168.192.224@g' kube-controller-manager.service.template &> ./kube-controller-manager.service.master3
[root@master1 service]# for i in master1 master2 master3 ;do ansible $i -i /root/udp/hosts.ini -m copy -a "src=./kube-controller-manager.service.$i dest=/etc/systemd/system/kube-controller-manager.service"  ;done 

4. Parameter notes

port=0  # disable the insecure (HTTP) port; the address parameter then has no effect and bind-address takes effect;
secure-port=10252, bind-address  # serve HTTPS requests (including /metrics) on port 10252 at the bind address;
kubeconfig  # path to the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
authentication-kubeconfig and authorization-kubeconfig  # kube-controller-manager uses these to reach the apiserver in order to authenticate and authorize incoming client requests; it no longer uses tls-ca-file to verify the client certificates of requests to its HTTPS metrics endpoint. Without these two kubeconfig parameters, client requests to the kube-controller-manager HTTPS port are rejected as unauthorized.
cluster-signing-*-file  # certificate and key used to sign certificates created through TLS bootstrap;
experimental-cluster-signing-duration  # validity period of TLS bootstrap certificates;
root-ca-file  # CA certificate placed into pods' ServiceAccount secrets, used to verify the kube-apiserver certificate;
service-account-private-key-file  # private key used to sign ServiceAccount tokens; must pair with the public key given by kube-apiserver's service-account-key-file;
service-cluster-ip-range   # Service cluster IP range; must match the same parameter on kube-apiserver;
leader-elect=true  # cluster run mode with leader election; the node elected as leader does the work while the other nodes block;
controllers=*,bootstrapsigner,tokencleaner  # controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens;
horizontal-pod-autoscaler-*  # custom-metrics-related parameters; supports autoscaling/v2alpha1;
tls-cert-file, tls-private-key-file  # server certificate and key used to serve metrics over HTTPS;
use-service-account-credentials=true  # each controller inside kube-controller-manager uses its own ServiceAccount to access kube-apiserver;

5. Start the service and verify

[root@master1 k8s]# ansible master -i /root/udp/hosts.ini -m shell -a "mkdir /data/k8s/k8s/kube-controller-manager/ -pv"
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload;systemctl restart kube-controller-manager.service;systemctl status kube-controller-manager.service"
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "systemctl enable kube-controller-manager"

Note: if the /data/k8s/k8s/kube-controller-manager directory does not exist, kube-controller-manager may keep restarting.

Check the metrics:

[root@master1 kube-controller-manager]# curl -s --cacert /opt/k8s/work/cert/ca.pem --cert /opt/k8s/work/cert/admin.pem --key /opt/k8s/work/cert/admin-key.pem https://192.168.192.222:10252/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

Check kube-controller-manager's permissions:
[root@master1 work]# kubectl describe clusterrole system:kube-controller-manager
Name:         system:kube-controller-manager
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                  Non-Resource URLs  Resource Names  Verbs
  ---------                                  -----------------  --------------  -----
  secrets                                    []                 []              [create delete get update]
  endpoints                                  []                 []              [create get update]
  serviceaccounts                            []                 []              [create get update]
  events                                     []                 []              [create patch update]
  tokenreviews.authentication.k8s.io         []                 []              [create]
  subjectaccessreviews.authorization.k8s.io  []                 []              [create]
  configmaps                                 []                 []              [get]
  namespaces                                 []                 []              [get]
  *.*                                        []                 []              [list watch]

Remarks:
kube-controller-manager is started with the --use-service-account-credentials=true parameter,
so the main controller creates a ServiceAccount named XXX-controller for each controller, and the built-in ClusterRoleBinding system:controller:XXX grants each XXX-controller ServiceAccount the permissions of the corresponding ClusterRole system:controller:XXX.

[root@master1 work]#  kubectl get clusterrole|grep controller
system:controller:attachdetach-controller                              80m
system:controller:certificate-controller                               80m
system:controller:clusterrole-aggregation-controller                   80m
...

Current leader:

[root@master1 work]#  kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master2_780fc21b-aac1-11e9-ae19-00163e000af5","leaseDurationSeconds":15,"acquireTime":"2019-07-20T07:39:30Z","renewTime":"2019-07-20T07:45:48Z","leaderTransitions":7}'
  creationTimestamp: "2019-07-20T07:07:15Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "2591"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 012f88f2-aabd-11e9-9031-00163e0007ff
[root@master1 work]#  

IV. scheduler installation

1. Certificate and private key

[root@master1 cert]# vim kube-scheduler-csr.json
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.192.224",
      "192.168.192.223",
      "192.168.192.222"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "HangZhou",
        "L": "HangZhou",
        "O": "system:kube-scheduler",
        "OU": "FirstOne"
      }
    ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem  -config=./ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@master1 cert]# ls kube-scheduler*.pem
kube-scheduler-key.pem  kube-scheduler.pem
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kube-scheduler-key.pem dest=/etc/kubernetes/cert/" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kube-scheduler.pem dest=/etc/kubernetes/cert/" 

2. Create and distribute the kubeconfig

[root@master1 cert]# kubectl config set-cluster kubernetes   --certificate-authority=./ca.pem   --embed-certs=true   --server=https://127.0.0.1:8443   --kubeconfig=kube-scheduler.kubeconfig
[root@master1 cert]# kubectl config set-credentials system:kube-scheduler   --client-certificate=kube-scheduler.pem   --client-key=kube-scheduler-key.pem   --embed-certs=true   --kubeconfig=kube-scheduler.kubeconfig
[root@master1 cert]# kubectl config set-context system:kube-scheduler   --cluster=kubernetes   --user=system:kube-scheduler   --kubeconfig=kube-scheduler.kubeconfig
[root@master1 cert]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kube-scheduler.kubeconfig  dest=/etc/kubernetes/"

3. Create the kube-scheduler configuration file

[root@master1 yaml]# vim kube-scheduler.yaml.template
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: ##NODE_IP##:10251
leaderElection:
  leaderElect: true
metricsBindAddress: ##NODE_IP##:10251

Parameter notes:
kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
leaderElect: enable leader election (cluster run mode); the node elected as leader does the work while the other nodes block;

[root@master1 yaml]# for i in 222 223 224 ;do sed  "s@##NODE_IP##@192.168.192.$i@"  ./kube-scheduler.yaml.template  &> ./kube-scheduler.yaml.template.$i ;done 
[root@master1 yaml]# ansible master1 -i /root/udp/hosts.ini -m copy -a "src=./kube-scheduler.yaml.template.222 dest=/etc/kubernetes/kube-scheduler.yaml"
[root@master1 yaml]# ansible master2 -i /root/udp/hosts.ini -m copy -a "src=./kube-scheduler.yaml.template.223 dest=/etc/kubernetes/kube-scheduler.yaml"
[root@master1 yaml]# ansible master3 -i /root/udp/hosts.ini -m copy -a "src=./kube-scheduler.yaml.template.224 dest=/etc/kubernetes/kube-scheduler.yaml"

4. Create the systemd service file

[root@master1 yaml]# vim kube-scheduler.service.template
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/k8s/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \
  --config=/etc/kubernetes/kube-scheduler.yaml \
  --bind-address=##NODE_IP## \
  --secure-port=10259 \
  --port=0 \
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-allowed-names="" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target

[root@master1 service]# sed 's@##NODE_IP##@192.168.192.222@' kube-scheduler.service.template &> ./kube-scheduler.service.master1 
[root@master1 service]# sed 's@##NODE_IP##@192.168.192.223@' kube-scheduler.service.template &> ./kube-scheduler.service.master2
[root@master1 service]# sed 's@##NODE_IP##@192.168.192.224@' kube-scheduler.service.template &> ./kube-scheduler.service.master3
[root@master1 service]# for i in master1 master2 master3 ;do ansible $i -i /root/udp/hosts.ini -m copy -a "src=./kube-scheduler.service.$i dest=/etc/systemd/system/kube-scheduler.service" ;done 

5. Start and verify

[root@master1 service]# ansible  master -i /root/udp/hosts.ini -m shell -a "mkdir /data/k8s/k8s/kube-scheduler" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload;systemctl restart kube-scheduler.service;systemctl status kube-scheduler.service"  
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl status kube-scheduler.service"   |grep -i active
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl enable kube-scheduler.service"  

Check the metrics:

[root@master1 yaml]# curl -s  http://192.168.192.222:10251/metrics |head
[root@master1 yaml]# curl -s --cacert /opt/k8s/work/cert/ca.pem --cert /opt/k8s/work/cert/admin.pem --key /opt/k8s/work/cert/admin-key.pem https://192.168.192.222:10259/metrics |head

Check the current leader:

[root@master1 yaml]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master3_212c242a-ab6a-11e9-b755-00163e0007ff","leaseDurationSeconds":15,"acquireTime":"2019-07-21T03:46:48Z","renewTime":"2019-07-21T03:50:31Z","leaderTransitions":2}'
  creationTimestamp: "2019-07-21T03:38:56Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "60615"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 115a53ae-ab69-11e9-9031-00163e0007ff

V. Notes

1. CSR contents

etcd: [CN:etcd,O:k8s]
flanneld: [CN:flanneld,O:FirstOne]
apiserver:  [CN:kubernetes,O:k8s] 
kubelet: [CN:admin,O:system:masters]
controller-manager: [CN:system:kube-controller-manager,O:system:kube-controller-manager]

2. ClusterRoleBinding information

The certificate CN maps to the User and O maps to the Group.
kubelet: [CN:admin,O:system:masters]
ClusterRoleBinding[cluster-admin]=Group[system:masters]+Role[cluster-admin]  # cluster-admin grants full access to all APIs

[root@master1 ~]# kubectl get clusterroles cluster-admin   # view this clusterrole's permissions

controller-manager: [CN:system:kube-controller-manager,O:system:kube-controller-manager]
ClusterRoleBindings[system:kube-controller-manager]:User[system:kube-controller-manager]+Role[system:kube-controller-manager]

[root@master1 ~]# kubectl get clusterrole system:kube-controller-manager -o yaml  # corresponding permission details

scheduler: [CN:system:kube-scheduler,O:system:kube-scheduler]
ClusterRoleBindings[system:kube-scheduler]:User[system:kube-scheduler]+Role[system:kube-scheduler]

[root@master1 ~]# kubectl get clusterrole system:kube-scheduler -o yaml

apiserver: [CN:kubernetes,O:k8s]
ClusterRoleBindings[kube-apiserver:kubelet-apis]=Role[system:kubelet-api-admin]+user[kubernetes]

[root@master1 ~]# kubectl get clusterrolebinding kube-apiserver:kubelet-apis -o yaml  # view the corresponding permissions

References:
https://github.com/cloudflare/cfssl
https://github.com/kubernetes/kubernetes/issues/48208
https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization
