Installing Kubernetes and kubernetes-dashboard with kubeadm

Before installing, let's look at the official Kubernetes architecture diagrams.

[Figure: kubeadm HA topology - stacked etcd vs. external etcd]

The only difference is whether etcd runs on the master nodes themselves or on separate machines: the former is the stacked topology, the latter the external etcd topology. So what does that difference amount to? According to the official description, in the stacked topology each etcd member talks only to the apiserver, controller-manager, and scheduler on the same machine; that is the whole difference.

etcd's high-availability model is one leader plus multiple followers. The leader handles writes; if a write request reaches a follower, the follower forwards it to the leader. The leader appends the write to its log and replicates the log to the followers, which is how data consistency is achieved.
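Once the cluster is up, you can see which member currently holds leadership by querying etcd directly. A sketch, assuming the default kubeadm certificate paths and the master IPs used below (run on a master with etcdctl available):

ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.1.15:2379,https://192.168.1.16:2379,https://192.168.1.17:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint status --write-out=table
# the IS LEADER column marks the member currently acting as leader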

1. Preparation:

Eight machines in total: 3 masters, 3 slaves (workers), and 2 machines running haproxy+keepalived to make the masters highly available. Note that k8s masters are not active-standby but active-active: all masters can serve requests at the same time.

This article uses the stacked etcd topology. Each master runs apiserver, controller-manager, scheduler, etcd, and kubelet; each node runs kubelet and kube-proxy.

haproxy01: 192.168.1.10
haproxy02: 192.168.1.11

master01: 192.168.1.15
master02: 192.168.1.16
master03: 192.168.1.17

slave01: 192.168.1.12
slave02: 192.168.1.13
slave03: 192.168.1.14

vip: 192.168.1.200:16443

#The virtual networks must not overlap with the real physical network. Since they are virtual, almost any range works, as long as it does not overlap with your physical network.

pod network: 10.244.0.0/16
service network: 10.1.0.0/16

Official haproxy+keepalived reference:

https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing

kubeadm HA cluster installation reference:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

2. Initialize the OS on the master and node machines and install kubeadm, kubelet, and kubectl. Run this step on every master/node.

#This article does everything as root; if the install asks for additional packages, just install them.
#Also, add the Kubernetes and Docker yum repos first, otherwise yum will not find the docker and kubernetes packages.
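For example, a minimal sketch of the two repo setups, assuming CentOS 7 and the Aliyun mirrors (any mirror of the official repos works):

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo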

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

#close firewall
systemctl stop firewalld
systemctl disable firewalld


#install docker

yum install docker-ce-19.03.12 -y


#disable swap (swap must be off, otherwise k8s refuses to install)

swapoff -a
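Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, also comment out the swap entry in /etc/fstab:

sed -i '/ swap / s/^/#/' /etc/fstab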


# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

After the system initialization, give docker a quick try: for example docker pull mysql, then run a container briefly to confirm everything is OK.
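For example:

docker pull mysql
docker run --rm -d --name mysql-test -e MYSQL_ROOT_PASSWORD=test123 mysql
docker ps    # the container should show as Up
docker stop mysql-test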

 

3. Set up haproxy+keepalived. If you have something like Alibaba Cloud's load balancer, you can skip this step; otherwise build your own VIP. This step follows the official document exactly and works without issue.

The two haproxy machines use identical configuration:

#build and install haproxy from source

make install PREFIX=/usr/local/haproxy
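For reference, the full build looks roughly like this (a sketch assuming a haproxy 2.x source tarball; pick the version and make TARGET that match your system):

tar xzf haproxy-2.1.7.tar.gz
cd haproxy-2.1.7
make TARGET=linux-glibc
make install PREFIX=/usr/local/haproxy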

# configure /usr/local/haproxy/haproxy.cfg

# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:16443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server master01 192.168.1.15:6443 check
        server master02 192.168.1.16:6443 check
        server master03 192.168.1.17:6443 check


#start haproxy

/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

#check that haproxy is listening on port 16443

[root@haproxy01 local]# lsof -i:16443
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
haproxy 10243 root    4u  IPv4  73316      0t0  TCP *:16443 (LISTEN)

keepalived:

First keepalived instance:

[root@haproxy01 local]# cat /usr/local/keepalive/etc/keepalived/keepalived_16443.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
     unicast_peer {
       192.168.1.11
     }
    state BACKUP
    interface eth0
    virtual_router_id 90
    priority 90
    authentication {
        auth_type PASS
        auth_pass ${AUTH_PASS}
    }
    virtual_ipaddress {
        192.168.1.200
    }
    track_script {
        check_apiserver
    }
}

Second keepalived instance:

[root@haproxy02 ~]# cat /usr/local/keepalive/etc/keepalived/keepalived_16443.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
     unicast_peer {
       192.168.1.10
     }
    state BACKUP
    interface eth0
    virtual_router_id 90
    priority 90
    authentication {
        auth_type PASS
        auth_pass ${AUTH_PASS}
    }
    virtual_ipaddress {
        192.168.1.200
    }
    track_script {
        check_apiserver
    }
}


#health-check script (same on both machines):

[root@haproxy02 ~]# cat /etc/keepalived/check_apiserver.sh
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:16443/ -o /dev/null || errorExit "Error GET https://localhost:16443/"
if ip addr | grep -q 192.168.1.200; then
    curl --silent --max-time 2 --insecure https://192.168.1.200:16443/ -o /dev/null || errorExit "Error GET https://192.168.1.200:16443/"
fi
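
Make the script executable on both machines:

chmod +x /etc/keepalived/check_apiserver.sh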


#start keepalived on both machines

/usr/local/keepalive/sbin/keepalived -f /usr/local/keepalive/etc/keepalived/keepalived_16443.conf


#check with ip addr or ifconfig -a that the VIP is up

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1c:42:48:68:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.1.200/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6146:9304:eec6:aa68/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::8544:9e65:7d52:e75a/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
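
You can also test the VIP port from another machine with nc. Until the apiservers are up, haproxy accepts the connection and closes it immediately, which is expected at this stage:

nc -v 192.168.1.200 16443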

 

4. Initialize master01 with kubeadm. Run this only on master01; the other nodes are added later with kubeadm join.

kubeadm init --control-plane-endpoint "192.168.1.200:16443" --upload-certs --image-repository registry.aliyuncs.com/google_containers  \
--kubernetes-version=v1.18.1 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16


Wait for the initialization to finish; it then prints output like the following (this sample is from the official docs; your endpoint will be 192.168.1.200:16443 and your token/hash will differ):


...
You can now join any number of control-plane node by running the following command on each as a root:
    kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
      
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.
      
Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866

There are two kubeadm join commands above: the first is for masters, the second for nodes. I usually install the flannel network first and only then join the remaining masters and nodes.

In other words, complete step 5 first, then run the join commands above on the other machines.
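
The init output also prints the standard kubectl setup; run it on master01 before using kubectl in the next step:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config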

5. Install flannel.yaml. Run this on master01 only.

#download flannel.yaml yourself from GitHub

kubectl apply -f flannel.yaml
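
If you need the link: at the time of writing the manifest lived in the coreos/flannel repo (the project has since moved to flannel-io), e.g.:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O flannel.yaml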


#check nodes and pods; note: without flannel installed, the node STATUS will be NotReady

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   29h   v1.18.5

kubectl get pod -n kube-system -w

#verify the network; if a flannel.1 interface shows up, things are generally OK.

[root@master01 ~]# ifconfig -a
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255
        inet6 fe80::7c67:e2ff:feb7:563b  prefixlen 64  scopeid 0x20<link>
        ether 7e:67:e2:b7:56:3b  txqueuelen 1000  (Ethernet)
        RX packets 232413  bytes 15967192 (15.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 240113  bytes 87160879 (83.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:da:78:cb:78  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.15  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::d091:b504:240d:5bb1  prefixlen 64  scopeid 0x20<link>
        ether 00:1c:42:5d:4e:2c  txqueuelen 1000  (Ethernet)
        RX packets 6552364  bytes 2031418053 (1.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6147076  bytes 1119515556 (1.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::88af:4cff:fe0e:f228  prefixlen 64  scopeid 0x20<link>
        ether 8a:af:4c:0e:f2:28  txqueuelen 0  (Ethernet)
        RX packets 5720  bytes 683216 (667.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8237  bytes 1483512 (1.4 MiB)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

 

6. Run the join commands above to install the remaining masters and nodes.

Once everything is installed, check the pod status:

[root@master01 ~]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-8g6xn           1/1     Running   0          29h
coredns-7ff77c879f-krf7c           1/1     Running   0          29h
etcd-master01                      1/1     Running   0          29h
etcd-master02                      1/1     Running   0          28h
etcd-master03                      1/1     Running   0          28h
kube-apiserver-master01            1/1     Running   1          29h
kube-apiserver-master02            1/1     Running   0          28h
kube-apiserver-master03            1/1     Running   0          28h
kube-controller-manager-master01   1/1     Running   3          29h
kube-controller-manager-master02   1/1     Running   1          28h
kube-controller-manager-master03   1/1     Running   1          28h
kube-flannel-ds-amd64-54jkr        1/1     Running   0          29h
kube-flannel-ds-amd64-6f7pb        1/1     Running   0          29h
kube-flannel-ds-amd64-bqx82        1/1     Running   1          29h
kube-flannel-ds-amd64-h42t2        1/1     Running   1          28h
kube-flannel-ds-amd64-jd8v8        1/1     Running   0          29h
kube-flannel-ds-amd64-q772n        1/1     Running   0          28h
kube-proxy-8bk5p                   1/1     Running   0          29h
kube-proxy-9r9m4                   1/1     Running   0          29h
kube-proxy-fcbmz                   1/1     Running   0          28h
kube-proxy-gzv76                   1/1     Running   1          29h
kube-proxy-nm22q                   1/1     Running   0          29h
kube-proxy-vv925                   1/1     Running   0          28h
kube-scheduler-master01            1/1     Running   1          29h
kube-scheduler-master02            1/1     Running   3          28h
kube-scheduler-master03            1/1     Running   2          28h

With these 6 steps done, the whole stacked-etcd k8s cluster is complete.

7. Install kubernetes-dashboard

Official documentation:

https://github.com/kubernetes/dashboard

The official dashboard defaults to a ClusterIP service, which is reachable only from inside the cluster. That is inconvenient, so we edit the yaml and change the service to NodePort to expose it. (If you are a beginner and don't know what this means, just follow my steps.)

Download the dashboard yaml:

https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
Open the yaml with vi and find the following section (around line 43):

Default:

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

After the change (one line added: type: NodePort):

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---
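
Alternatively, instead of editing the file, you can apply the stock yaml and patch the service type afterwards:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'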

Then install the dashboard:

kubectl apply -f recommended.yaml

#check that the service is running

[root@master01 ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-6b4884c9d5-89trv   1/1     Running   0          81m
pod/kubernetes-dashboard-7f99b75bf4-qjwbb        1/1     Running   0          81m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.1.216.164   <none>        8000/TCP        81m
service/kubernetes-dashboard        NodePort    10.1.40.190    <none>        443:32152/TCP   81m


# Note port 32152 above: this is the exposed NodePort. Pick any node, combine its IP with this port (https://<node-ip>:32152), and you can reach the dashboard UI.

k8s enforces access control, so we need to create an account for dashboard login. Account name: admin-user.

[root@master01 ~]# cat admin.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the account:

kubectl apply -f  admin.yaml

Get the login token; my account is admin-user:

[root@master01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-5r87c
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d14520de-9ff1-4590-807c-275d55711d99

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlFaQU5fVTdLQkdCbkNnU3pRWF9VMDFUaTZRRkhBcnpzbTIwc3dTZTNLX0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTVyODdjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkMTQ1MjBkZS05ZmYxLTQ1OTAtODA3Yy0yNzVkNTU3MTFkOTkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.LNsrfOWLhISUFZ5x37-YvaZBvQVvdPBaZpYJwV_2gZENETcXyb7Ab3zndYo06I0s6qRms4olaSr1iyXZKlE1DKaT3Kew5QLIwvK-FJUK-9-IY7nAXD8ddYNnYq6LhVpCK2LNNzkAMOUPoerNFmoylHx90gmkVVdRoTqRipvEwq28PCFYYKxkFzALHg_jOk-xa3Xkeri78f-54_O1ZYtQBKp7229RXxyF9NUONKo2Pa2FVZNNSzeD0R9Z7XCNtp0GYJ7-BZgFfcP9GYaB6RjkEAUK8FrbaRJmE_e2cV21O94GU9rxrBu848MBbtpqOEu2PlZGc_xXuxl-PI1i6ePzmQ


Name:         admin-user-token-pztm2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d14520de-9ff1-4590-807c-275d55711d99

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlFaQU5fVTdLQkdCbkNnU3pRWF9VMDFUaTZRRkhBcnpzbTIwc3dTZTNLX0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXB6dG0yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkMTQ1MjBkZS05ZmYxLTQ1OTAtODA3Yy0yNzVkNTU3MTFkOTkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.FaigvsBYfNxnsujzWZZkjFqZ5s6hjHthxuV8gKqioFCf50M2_39qPtpUGTNzfqNHSQriZyMbsimwXbSbhbkZ76gfcx8RMSkRJTnaEhE6jtl8wPsCgCeyaZMX_Wr3i2ppVuAMVjvMXfTZQeoRW1lJjSQDGML0QU6a5dZxL7DX5CB8oWxD0HLct2nBQYdWk52kIPYSm315dpE0gTYhafW0sLmjFbop4aC7sO6W0u3p9Gp3blKmEbUJK67XasERK8_7u08M7uImwEf2UeP_VIXbWaR2rSJox_ydcI3oFbRM39Wu4ERO5SCkrsp5foS0TFNAvBhFGvfe0Rmx0vddMVX_tw
ca.crt:     1025 bytes
namespace:  20 bytes
[root@master01 ~]# 

Pick either of the admin-user tokens to log in.
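
If you only want the raw token, a one-liner sketch (using the secret name shown above):

kubectl -n kubernetes-dashboard get secret admin-user-token-5r87c -o jsonpath='{.data.token}' | base64 -d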

Note: the certificate is self-signed, so Chrome refuses to open the page by default and shows an err_cert error. With the error page focused, just type the magic string thisisunsafe on the keyboard and the page will load.
