kubeadm is a tool from the official Kubernetes community for quickly deploying a Kubernetes cluster.
Official documentation:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
1. Installation requirements
Before starting, the machines used to deploy the Kubernetes cluster must meet the following requirements:
- One or more machines running CentOS 7.x x86_64;
- Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more;
- Full network connectivity between all machines in the cluster;
- Internet access, needed for pulling images;
- Swap disabled.
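The requirements above can be verified with a short script before starting (a sketch, not part of the original steps; the thresholds mirror the list, and the commands assume a standard GNU/Linux install):

```shell
#!/bin/sh
# Pre-flight check for the requirements listed above.
check_k8s_prereqs() {
    echo "CPUs:      $(nproc) (need >= 2)"
    echo "RAM (MB):  $(free -m | awk '/^Mem:/{print $2}') (need >= 2048)"
    echo "Disk (GB): $(df -BG / | awk 'NR==2{print $4}') (need >= 30G)"
    if [ -z "$(swapon --show 2>/dev/null)" ]; then
        echo "Swap:      off (ok)"
    else
        echo "Swap:      on (must be disabled)"
    fi
}
check_k8s_prereqs
```

Run it on every machine before moving on to the environment preparation below.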
2. Environment preparation
Role       | IP
k8s-master | 172.18.74.71
k8s-node1  | 172.18.74.72
Disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld
Disable SELinux:
# sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent, requires a reboot
# setenforce 0  # temporary
Disable swap:
# swapoff -a  # temporary
# vim /etc/fstab  # permanent: comment out the swap line
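The fstab edit can also be done non-interactively; this helper (a sketch, not part of the original steps) comments out every uncommented swap entry in a given fstab file:

```shell
#!/bin/sh
# Comment out every uncommented swap entry in the given fstab file
# (defaults to /etc/fstab) so swap stays disabled after a reboot.
disable_swap_in_fstab() {
    sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "${1:-/etc/fstab}"
}
```

Run `swapoff -a` first, then `disable_swap_in_fstab` as root; consider backing up /etc/fstab before modifying it.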
Set the hostname on each machine according to the plan above:
# hostnamectl set-hostname k8s-master  # on 172.18.74.71
# hostnamectl set-hostname k8s-node1   # on 172.18.74.72
Add hosts entries on the master:
# cat >>/etc/hosts << EOF
172.18.74.71 k8s-master
172.18.74.72 k8s-node1
EOF
Pass bridged IPv4 traffic to iptables chains:
# cat >/etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system  # apply
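If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet; loading it and making that persistent is a common extra step on CentOS 7 (an assumption here, adjust if your kernel already loads it):

```shell
# Load the bridge netfilter module so the net.bridge.* sysctl keys exist,
# and register it so it loads again on every boot.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```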
Synchronize time:
# yum install ntpdate -y
# ntpdate cn.pool.ntp.org
3. Install Docker, kubeadm, kubelet and kubectl [all nodes]
3.1 Install Docker
# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Using the official repository (slow from China):
# sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Or the Aliyun mirror:
# sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker Engine - Community and start it:
# sudo yum install docker-ce docker-ce-cli containerd.io
# sudo systemctl start docker
# sudo systemctl enable docker
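The kubeadm init log in section 4 warns that Docker's default cgroup driver is cgroupfs while systemd is recommended; optionally align the two here, before bootstrapping the cluster (a sketch; /etc/docker/daemon.json may not exist yet and is created here):

```shell
# Switch Docker's cgroup driver to systemd, matching kubelet's recommendation.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```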
3.2 Install kubeadm, kubelet and kubectl
- kubelet: the node agent, run as a systemd-managed daemon
- kubeadm: the cluster bootstrap tool
- kubectl: the Kubernetes command-line management tool
Add the Aliyun YUM repository:
# cat >/etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install a pinned version:
# yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
# systemctl enable kubelet
4. Deploy the Kubernetes master
Run on the master node (172.18.74.71):
# kubeadm init \
--apiserver-advertise-address=172.18.74.71 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=1.19.0 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12
- --apiserver-advertise-address: the address the cluster advertises
- --image-repository: the default registry k8s.gcr.io is unreachable from China, so the Aliyun mirror registry is used instead
- --kubernetes-version: the K8s version, matching the packages installed above
- --service-cidr: the cluster-internal virtual network, the unified entry point for reaching Pods
- --pod-network-cidr: the Pod network; must match the CNI network component's YAML deployed below
The init log is shown below:
# kubeadm init --apiserver-advertise-address=172.18.74.71 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=1.19.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
W1112 12:34:47.083204 9195 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
...............................
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.18.74.71:6443 --token wftkck.zipgpx31m7s41u75 \
--discovery-token-ca-cert-hash sha256:c613f2be6ddfa8ef61cc88500b72a5885f7dc4107ddcf221418fbd936b7a3992
Follow the instructions in the output:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the node:
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 34m v1.19.0
5. Join nodes to the cluster
Run on the node. To add a new node to the cluster, execute the kubeadm join command printed by kubeadm init:
# kubeadm join 172.18.74.71:6443 --token wftkck.zipgpx31m7s41u75 \
--discovery-token-ca-cert-hash sha256:c613f2be6ddfa8ef61cc88500b72a5885f7dc4107ddcf221418fbd936b7a3992
The token is valid for 24 hours by default and must be recreated after it expires.
List tokens (on the master):
# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
wftkck.zipgpx31m7s41u75 23h 2020-11-13T12:38:26+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
Create a new token:
# kubeadm token create --print-join-command
List tokens again:
# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
4p6xja.hn1gm580829kmhny 23h 2020-11-13T13:17:42+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
wftkck.zipgpx31m7s41u75 23h 2020-11-13T12:38:26+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
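If only the token was saved, the --discovery-token-ca-cert-hash value can be recomputed from the cluster's CA certificate. This helper (a sketch; the output format matches the sha256:<hex> value kubeadm prints) hashes the CA's public key:

```shell
#!/bin/sh
# Recompute the sha256:<hex> discovery hash from a CA certificate,
# as used by the kubeadm join command above.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex \
        | sed 's/^.* /sha256:/'
}
```

On the master, run it against the cluster CA: `ca_cert_hash /etc/kubernetes/pki/ca.crt`.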
6. Deploy a CNI network
The CNI plugin provides cross-host container networking.
[Deploy either flannel or calico; one of the two is enough.]
flannel network plugin:
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
Check:
# kubectl get ds kube-flannel-ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds 2 2 2 2 2 <none> 5m47s
# kubectl get pods -n kube-system -l app=flannel
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-crhw4 1/1 Running 0 6m55s
kube-flannel-ds-vdllr 1/1 Running 0 6m55s
-----------------------------------------------------------------------------------
calico network plugin:
# kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
-----------------------------------------------------------------------------------
Check the node status:
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 56m v1.19.0
k8s-node1 Ready <none> 20m v1.19.0
7. Deploy the Dashboard
# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
By default the Dashboard is only reachable from inside the cluster; change its Service to the NodePort type to expose it externally.
# vim recommended.yaml
...............................................................
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  # added
  type: NodePort       # added
  selector:
    k8s-app: kubernetes-dashboard
...............................................................
# kubectl apply -f recommended.yaml
# kubectl get pods,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-7b59f7d4df-6fjtm 1/1 Running 0 3m25s
pod/kubernetes-dashboard-665f4c5ff-sh92f 1/1 Running 0 3m25s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.109.119.160 <none> 8000/TCP 3m25s
service/kubernetes-dashboard NodePort 10.99.33.26 <none> 443:30001/TCP 3m26s
Access: https://<NodeIP>:30001
Create a service account and bind it to the built-in cluster-admin cluster role:
# kubectl create serviceaccount dashboard-admin -n kube-system
# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
8. Cluster smoke test
8.1 Deploy a deployment and service
Deploy an nginx via a deployment:
# kubectl create deployment nginx-deploy --image=nginx:1.18
Expose it with a NodePort service:
# kubectl expose deployment nginx-deploy --name=nginx-svc --port=80 --target-port=80 --type=NodePort
Check:
# kubectl get deploy,pods,svc
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 65s
NAME READY STATUS RESTARTS AGE
pod/nginx-deploy-fb74b55d4-2vgdq 1/1 Running 0 65s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 109m
service/nginx-svc NodePort 10.101.165.148 <none> 80:31972/TCP 18s
Access from a node:
# curl -I 10.244.1.5  # the pod IP
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Thu, 12 Nov 2020 06:30:35 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Apr 2020 12:43:12 GMT
Connection: keep-alive
ETag: "5e9eea60-264"
Accept-Ranges: bytes
# curl -I 10.101.165.148:80  # the service IP
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Thu, 12 Nov 2020 06:30:54 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Apr 2020 12:43:12 GMT
Connection: keep-alive
ETag: "5e9eea60-264"
Accept-Ranges: bytes
8.2 Cluster DNS test
kubeadm deploys CoreDNS by default.
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-hj8cl 1/1 Running 0 117m
coredns-6d56c8448f-qjmqd 1/1 Running 0 117m
# kubectl run busybox-test --rm -it --image=busybox:1.28.4 -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-svc  # the service created above
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-svc
Address 1: 10.101.165.148 nginx-svc.default.svc.cluster.local
Name resolution works.