07. Configure the apiserver
The following operations are performed only on the master.
No modification is needed; the file below is included for reference only.
vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
The following file must be configured; the items to modify are:
vi /etc/kubernetes/config
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true" # set to true
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.66.77.221:8080" # 10.66.77.221 is the master host IP, used by the local controller-manager, scheduler, and proxy to reach the apiserver
The full /etc/kubernetes/config file is archived below for reference:
vi /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.66.77.221:8080"
The following file must be configured:
vi /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" # in a production environment this should be 127.0.0.1, to forbid access from other hosts
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://10.66.77.221:2379" # 10.66.77.221 is the etcd IP
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.255.0.0/16" # 10.255.0.0/16 is the CIDR from which cluster Service IPs are allocated
# default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction"
# MutatingAdmissionWebhook and ValidatingAdmissionWebhook are required for Istio (used for webhook notifications)
# ServiceAccount is also required for Istio
# Kubernetes 1.10 and later use --enable-admission-plugins
# Add your own!
KUBE_API_ARGS="--insecure-port=8080 \
--secure-port=6443 \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--runtime-config=admissionregistration.k8s.io/v1beta1 \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
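The `--token-auth-file` flag above points at /opt/kubernetes/cfg/token.csv, whose creation is not shown in this section. A minimal sketch of generating it (the `kubelet-bootstrap` user name matches the clusterrolebinding created in section 10; the uid and group shown are illustrative placeholders):

```shell
# Generate a random 32-hex-character bootstrap token and write the file
# in the format --token-auth-file expects: token,user,uid,"group".
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat token.csv
```

In the real setup, write the file to /opt/kubernetes/cfg/token.csv (the path given in KUBE_API_ARGS) before starting the apiserver.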
Start the apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
08. Configure the controller-manager
The following operations are performed only on the master.
No modification is needed; the file below is included for reference only.
vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
The following file must be configured:
vi /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
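Before starting, it can be worth verifying that the CA certificate and key referenced above actually belong together, since a mismatch only surfaces later as signing failures. A sketch of the check, using a throwaway self-signed RSA pair generated on the spot for illustration; in a real check, point the two `-in` arguments at /opt/kubernetes/ssl/ca.pem and ca-key.pem instead:

```shell
# Generate a throwaway self-signed RSA cert/key pair for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout demo-key.pem -out demo-cert.pem 2>/dev/null

# A certificate and private key match when their RSA moduli are identical.
cert_mod=$(openssl x509 -noout -modulus -in demo-cert.pem)
key_mod=$(openssl rsa -noout -modulus -in demo-key.pem)
[ "$cert_mod" = "$key_mod" ] && echo "cert and key match"
```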
Start the controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
09. Configure the scheduler
The following operations are performed only on the master.
No modification is needed; the file below is included for reference only.
vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
No modification is needed; the file below is included for reference only.
vi /etc/kubernetes/scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS=""
Start the scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
10. Create and verify the bootstrap user
The following operations are performed only on the master.
Create the user:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Verify:
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
Start the master services:
systemctl enable kube-apiserver kube-scheduler kube-controller-manager
systemctl restart kube-proxy kube-apiserver kube-scheduler kube-controller-manager
systemctl restart flanneld docker kube-proxy kube-apiserver kube-scheduler kube-controller-manager
Verify the installation
- ApiServer check: open http://10.66.77.221:8080/ in a browser; any response indicates a successful installation.
- Component status check:
kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
View logs:
journalctl -xeu kube-controller-manager --no-pager
journalctl -xeu kube-scheduler --no-pager
journalctl -xeu kubelet --no-pager
11. Deploy the nodes
Notes:
- The following operations must be performed on every node;
- The master also needs kube-proxy installed and its service enabled, but its files need no configuration.
Install the node packages
yum install kubernetes-node-1.10.0 -y
Configure the kubelet
The following file must be configured.
Only '$KUBELET_POD_INFRA_CONTAINER \' needs to be added:
vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
The following file must be configured:
vi /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true" # set to true
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=https://10.66.77.221:6443" # set to the planned master IP address
vi /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"
# Note: node1 must be configured in both /etc/hosts and /etc/hostname
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.66.77.227:10050/k8s.gcr.io/pause-amd64:3.1" # note the image version here
# Add your own!
KUBELET_ARGS="--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--eviction-hard=memory.available<500Mi,nodefs.available<1Gi,imagefs.available<5Gi \
--system-reserved=cpu=1000m,memory=1024Mi,ephemeral-storage=5Gi \
--kube-reserved=cpu=500m,memory=1024Mi,ephemeral-storage=5Gi \
--cluster_dns=10.255.0.100 \
--cluster_domain=cluster.local \
--cgroup-driver=systemd \
--fail-swap-on=false"
Note in particular: the name set by --hostname-override must differ for each node:
KUBELET_HOSTNAME="--hostname-override=node1"
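To avoid hand-editing the name on every node, the override can be patched in from the node's own hostname. A sketch, demonstrated on a scratch copy of the line; on a real node, target /etc/kubernetes/kubelet directly:

```shell
# Demonstrate on a scratch copy of the kubelet config line;
# on a real node, run the sed against /etc/kubernetes/kubelet instead.
printf 'KUBELET_HOSTNAME="--hostname-override=node1"\n' > kubelet.demo

# Replace the placeholder node name with this machine's hostname.
sed -i "s/--hostname-override=[^\"]*/--hostname-override=$(hostname)/" kubelet.demo
cat kubelet.demo
```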
Start the kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
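The kubelet args above also reference /opt/kubernetes/cfg/bootstrap.kubeconfig, whose creation is not shown in this section; it is normally generated on the master with `kubectl config set-cluster` / `set-credentials` / `set-context`. The resulting file has roughly the shape below (server address from the plan above; the token is a placeholder and must equal the entry in token.csv):

```shell
# Sketch of the bootstrap.kubeconfig the kubelet args expect, written to a
# demo path here; place it at /opt/kubernetes/cfg/bootstrap.kubeconfig on a
# real node. REPLACE_WITH_BOOTSTRAP_TOKEN must match token.csv.
cat > bootstrap.kubeconfig.demo <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://10.66.77.221:6443
users:
- name: kubelet-bootstrap
  user:
    token: REPLACE_WITH_BOOTSTRAP_TOKEN
contexts:
- name: default
  context:
    cluster: kubernetes
    user: kubelet-bootstrap
current-context: default
EOF
```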
Configure kube-proxy
The following file must be configured.
Only '$KUBELET_POD_INFRA_CONTAINER \' needs to be added:
vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_POD_INFRA_CONTAINER \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
vi /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--hostname-override=node1 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
Note in particular: the name set by --hostname-override must differ for each node:
KUBE_PROXY_ARGS="--hostname-override=node1 \
Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
12. Approve the CSRs
kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY 2m kubelet-bootstrap Pending
kubectl certificate approve node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY
certificatesigningrequest "node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY" approved
kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY 3m kubelet-bootstrap Approved,Issued
Check the cluster status
kubectl get node
NAME STATUS ROLES AGE VERSION
10.66.77.222 Ready <none> 11m v1.10.0
10.66.77.223 NotReady <none> 8s v1.10.0
Prepare the services
systemctl restart flanneld
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet kube-proxy
iptables -P FORWARD ACCEPT # without this, Services cannot connect across nodes
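Note that `iptables -P FORWARD ACCEPT` does not survive a reboot, and Docker 1.13+ resets the FORWARD policy to DROP when it starts. One way to make the rule persistent is a small one-shot systemd unit ordered after docker; a sketch, written to a demo path here (the unit name is an illustrative choice; install it as /etc/systemd/system/forward-accept.service on a real node, then `systemctl daemon-reload && systemctl enable --now forward-accept`):

```shell
# Sketch of a one-shot unit that restores the FORWARD policy after docker
# starts. Written to a demo path here; install under /etc/systemd/system
# on a real node.
cat > forward-accept.service.demo <<'EOF'
[Unit]
Description=Set iptables FORWARD policy to ACCEPT
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT

[Install]
WantedBy=multi-user.target
EOF
```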