Installing Kubernetes with kubeadm
Environment used in these notes:
- Three virtual machines running CentOS 7.6.
- Kubernetes v1.20.2, the latest release at the time of writing
- Network plugin: flannel
- Docker 20.10.1 (note that the install step below actually pins docker-ce 19.03.14)
Preface
These notes are adapted from the blog post at https://segmentfault.com/a/1190000020738509 and serve as a personal deployment record.
I. Hardware environment
- k8s-master 192.168.74.128 cluster control-plane node
- k8s-node1 192.168.74.129 cluster worker node
- k8s-node2 192.168.74.130 cluster worker node
II. Preparation
Run all of the following steps on every node.
1. Install the basic RPM tools
yum install -y wget vim net-tools epel-release
2. Disable the firewall
systemctl disable firewalld
systemctl stop firewalld
3. Disable SELinux
# Disable SELinux temporarily
setenforce 0
# Disable it permanently by editing /etc/sysconfig/selinux and /etc/selinux/config
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
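If you want to confirm exactly what the permanent-disable edit does before touching the real config, here is a harmless dry run on a scratch copy (the /tmp path is purely illustrative):

```shell
# Dry run of the permanent-disable substitution on a throwaway file
printf 'SELINUX=enforcing\n' > /tmp/selinux-demo
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /tmp/selinux-demo
cat /tmp/selinux-demo
# prints SELINUX=disabled
```

The change to the real files only takes effect after a reboot; `setenforce 0` covers the current session.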
4. Disable swap
swapoff -a
# Disable permanently: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab
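A quick sanity check that swap is really off, using /proc/swaps (which lists only its header line when no swap area is active):

```shell
# /proc/swaps keeps only its header line when no swap area is active
if [ "$(wc -l < /proc/swaps)" -le 1 ]; then
  echo "swap disabled"
else
  echo "swap still active"
fi
```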
5. Edit /etc/hosts
cat <<EOF >> /etc/hosts
192.168.74.128 k8s-master
192.168.74.129 k8s-node1
192.168.74.130 k8s-node2
EOF
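To verify the entries took effect, check that each hostname resolves locally (getent consults /etc/hosts directly):

```shell
# Each hostname should resolve via /etc/hosts after the edit above
for h in k8s-master k8s-node1 k8s-node2; do
  getent hosts "$h" || echo "$h: not resolved"
done
```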
6. Adjust kernel parameters so that bridged traffic is passed to iptables, then apply them
# The bridge sysctls below only exist once the br_netfilter module is loaded
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
7. Install Docker
## Configure the base yum repository
## Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
## Download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
## Refresh the cache
yum makecache fast
## Configure the Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
## Rebuild the yum cache
yum clean all
yum makecache fast
yum -y update
## Add the Docker yum repo
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
## List the available Docker versions
yum list docker-ce --showduplicates |sort -r
## If an older Docker is already installed and you need to upgrade, remove it and its dependencies first
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
## Install a specific version
yum install -y docker-ce-19.03.14
systemctl enable docker
systemctl start docker
8. Configure the Docker registry mirror
cat > /etc/docker/daemon.json <<EOF
{
"data-root": "/data/docker",
"registry-mirrors": ["https://lmq0814g.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
III. Installing Kubernetes
Control-plane node setup
Set up the control plane on k8s-master
Install kubeadm and kubelet, and enable the kubelet service so that it comes back after reboots
yum install -y kubeadm kubelet
systemctl enable kubelet
Initialize kubeadm
List the required image versions
[root@k8s-master ~]# kubeadm config images list
W1210 01:51:58.663776 7887 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1210 01:51:58.663878 7887 version.go:102] falling back to the local client version: v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Note: we do not initialize right away, because users in mainland China cannot pull the required images from k8s.gcr.io directly, so we first list the image versions that are needed.
If your registry access and download speed are fine, you can skip the image steps and initialize directly. Note that the worker nodes also need the kube-proxy and pause images.
1. Pull the required images
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.2
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.2
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.20.2
docker pull registry.aliyuncs.com/google_containers/pause:3.2
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.aliyuncs.com/google_containers/coredns:1.7.0
## Retag the images to the names kubeadm expects
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.20.2 k8s.gcr.io/kube-proxy:v1.20.2
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
Alternatively, run the following script to do all of the above in one step
# vim kubeadm.sh
#!/bin/bash
## Pull the images from a domestic mirror, then retag them to the k8s.gcr.io names
set -e
KUBE_VERSION=v1.20.2
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.13-0
CORE_DNS_VERSION=1.7.0
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done
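Before pointing the script at a live Docker daemon, you can dry-run the same loop to review exactly which image references it will pull and how each one will be retagged (pure string output, no Docker needed):

```shell
# Dry run: print the pull/tag pairs without touching Docker
KUBE_VERSION=v1.20.2
ALIYUN_URL=registry.aliyuncs.com/google_containers
GCR_URL=k8s.gcr.io
for imageName in kube-proxy:${KUBE_VERSION} kube-scheduler:${KUBE_VERSION} \
                 kube-controller-manager:${KUBE_VERSION} kube-apiserver:${KUBE_VERSION} \
                 pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
  echo "pull $ALIYUN_URL/$imageName -> tag $GCR_URL/$imageName"
done
```

When the list looks right, run the real script with `bash kubeadm.sh`.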
2. Initialize the control plane
sudo kubeadm init \
--apiserver-advertise-address 192.168.74.128 \
--kubernetes-version=v1.20.2 \
--pod-network-cidr=10.244.0.0/16
Output like the following indicates success:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.74.128:6443 --token hdo0p8.7mevl9yl1u3q7hfx \
--discovery-token-ca-cert-hash sha256:7eced79eb7bd989373fcb7de7322eee93eed401e5e6e08f106e8c1679b80421e
Run the following on the master so that kubectl works:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Join the worker nodes
## Run on each worker node
kubeadm join 192.168.74.128:6443 --token anisbh.ascd1c5bgi208jel \
--discovery-token-ca-cert-hash sha256:7eced79eb7bd989373fcb7de7322eee93eed401e5e6e08f106e8c1679b80421e
If the token has expired, regenerate the join command on the master
# Regenerate the join command
[root@k8s-master k8s-sh]# kubeadm token create --print-join-command
kubeadm join 192.168.74.128:6443 --token se0rs3.eps9vjtl0lm2t682 --discovery-token-ca-cert-hash sha256:7eced79eb7bd989373fcb7de7322eee93eed401e5e6e08f106e8c1679b80421e
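The --discovery-token-ca-cert-hash value is just the SHA-256 of the cluster CA's public key, so it can also be recomputed by hand from /etc/kubernetes/pki/ca.crt on the master. The sketch below runs the standard openssl pipeline against a throwaway self-signed certificate, since the real CA file only exists on the master:

```shell
# Generate a throwaway cert purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo" 2>/dev/null
# Same pipeline kubeadm documents for computing the join hash;
# on the master, replace /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is the 64-character hex digest that goes after `sha256:` in the join command.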
If joining a node fails, or you want to redo it, reset the node first with
kubeadm reset
Note: do not run this lightly on the master, as it deletes all of kubeadm's configuration there.
After joining, the new nodes can be inspected from the master.
Check the nodes:
[root@k8s-master k8s-sh]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m8s v1.20.2
The node shows NotReady because no CNI network plugin is installed yet. Options include Flannel, Calico, Canal and Weave; Flannel and Calico are the most common choices, and Flannel is used here.
Install flannel
The kube-flannel.yml manifest references images hosted on quay.io, which cannot be pulled from inside China, so as before we pull from a domestic mirror and retag. The script below does this and should be run on every node to pre-pull the images. In practice, applying kube-flannel.yml often manages to pull the images on its own, so running the script is optional.
# vim flanneld.sh
#!/bin/bash
set -e
FLANNEL_VERSION=v0.11.0
# Change the mirror source here if needed
QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos
images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)
for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done
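You can preview the image references this script will pull without a running Docker daemon (pure string output):

```shell
# Dry run: print the flannel image references, one per architecture
FLANNEL_VERSION=v0.11.0
for arch in amd64 arm64 arm ppc64le s390x; do
  echo "quay-mirror.qiniu.com/coreos/flannel:${FLANNEL_VERSION}-${arch}"
done
```

Then run `bash flanneld.sh` on each node to do the actual pull and retag.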
Install flannel by applying the manifest (the URL below is the author's own mirror of kube-flannel.yml)
kubectl apply -f http://zabbix.itunesapplestore.com.cn/ray/k8s-sh/kube-flannel.yml
[root@k8s-master k8s-sh]# kubectl get pods -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-74ff55c5b-4k2tp 1/1 Running 0 11m 10.244.0.3 k8s-master <none> <none>
kube-system coredns-74ff55c5b-swz5l 1/1 Running 0 11m 10.244.0.2 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 0 11m 192.168.74.128 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 0 11m 192.168.74.128 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 0 11m 192.168.74.128 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-gg5zt 0/1 Init:0/1 0 63s 192.168.74.130 node2 <none> <none>
kube-system kube-flannel-ds-amd64-k5bdd 1/1 Running 0 63s 192.168.74.128 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-v9s5b 0/1 Init:0/1 0 63s 192.168.74.129 node1 <none> <none>
kube-system kube-proxy-bqz9b 1/1 Running 0 11m 192.168.74.128 k8s-master <none> <none>
kube-system kube-proxy-br2wn 0/1 ContainerCreating 0 5m20s 192.168.74.129 node1 <none> <none>
kube-system kube-proxy-x8mnx 0/1 ContainerCreating 0 4m53s 192.168.74.130 node2 <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 0 11m 192.168.74.128 k8s-master <none> <none>
Troubleshooting
1. kube-proxy on the worker nodes is stuck in ContainerCreating
kube-system kube-proxy-br2wn 0/1 ContainerCreating 0 33s 192.168.74.129 node1 <none> <none>
kube-system kube-proxy-x8mnx 0/1 ContainerCreating 0 6s 192.168.74.130 node2 <none> <none>
Describe the pod for details
kubectl describe pod kube-proxy-br2wn -n kube-system
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m48s default-scheduler Successfully assigned kube-system/kube-proxy-br2wn to node1
Warning FailedCreatePodSandBox 10s (x6 over 2m32s) kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The error shows the node could not pull k8s.gcr.io/pause:3.2. Checking the affected nodes confirmed the image was indeed missing; pulling it again from the mirror and retagging it (the same commands as in step 1) fixed the pods:
docker pull registry.aliyuncs.com/google_containers/pause:3.2
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2