1. Environment
Three master-node VMs;
Two worker-node VMs;
One nginx-node VM (or the virtual IP of an nginx cluster);
CentOS 7.9
The hosts entries only need to be configured on the master nodes.
sudo vim /etc/hosts # hosts configuration
x.x.x.a k8s-master-1
x.x.x.b k8s-master-2
x.x.x.c k8s-master-3
x.x.x.v k8s-nginx-vip
x.x.x.d k8s-node-1
x.x.x.e k8s-node-2
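If you prefer not to edit /etc/hosts by hand on each master, the same entries can be appended with a heredoc, and the nginx VIP checked for reachability (this matters again later, before kubeadm init); a small sketch using the placeholder addresses above:
sudo tee -a /etc/hosts <<'EOF'        # append the cluster entries (replace the x.x.x.* placeholders first)
x.x.x.a k8s-master-1
x.x.x.b k8s-master-2
x.x.x.c k8s-master-3
x.x.x.v k8s-nginx-vip
x.x.x.d k8s-node-1
x.x.x.e k8s-node-2
EOF
ping -c 3 k8s-nginx-vip               # confirm the VIP resolves and responds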
2. Time synchronization
Start NTP on every VM: sudo systemctl start ntpd
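A minimal sketch of installing, enabling, and verifying ntpd on CentOS 7, assuming the stock ntp package and its default pool servers are acceptable:
sudo yum install -y ntp                 # provides ntpd and ntpq
sudo systemctl enable --now ntpd        # start now and on every boot
ntpq -p                                 # a '*' next to a peer means a time source has been selected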
3. Add a layer-4 (stream) forwarding configuration on the nginx node
sudo yum install nginx-mod-stream # install the stream module
sudo vim /etc/nginx/nginx.conf
The stream block sits at the same level as the http block:
stream {
    upstream k8sapi {
        hash $remote_addr consistent;
        server x.x.x.a:6443 max_fails=3 fail_timeout=30s;
        server x.x.x.b:6443 max_fails=3 fail_timeout=30s;
        server x.x.x.c:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 6443;
        proxy_connect_timeout 2s;
        proxy_pass k8sapi;
    }
}
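After editing, it is worth validating the configuration, restarting nginx, and confirming that it is listening on 6443; a short check, assuming nginx is already installed and running on this node:
sudo nginx -t                       # syntax check of /etc/nginx/nginx.conf
sudo systemctl restart nginx        # restart so the new stream module and config are picked up
sudo ss -lntp | grep 6443           # nginx should now be listening on 6443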
4. Disable swap, SELinux, and the firewall on all master and worker nodes
For the reasoning behind disabling these, see the CSDN post "部署Kubernetes(k8s)时,为什么要关闭swap、selinux、firewall 防火墙?" by 妖四灵.Shuen.
sudo swapoff -a # temporary
sudo vim /etc/fstab # permanent: comment out the swap line in /etc/fstab
sudo setenforce 0 # temporary
sudo vim /etc/selinux/config # permanent: set SELINUX to disabled in the config
sudo systemctl stop firewalld
sudo systemctl disable firewalld
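If you would rather script the two permanent changes than edit the files interactively, the usual sed one-liners look like this; a sketch, assuming stock CentOS 7.9 file contents:
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab                                  # comment out the swap entry
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config # disable SELinux permanently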
5. Port ranges used on each node and their purposes
Control-plane nodes
Protocol  Direction  Port Range   Purpose                    Used By
TCP       Inbound    6443         Kubernetes API server      All components
TCP       Inbound    2379-2380    etcd server client API     kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API                kubelet itself, control plane components
TCP       Inbound    10251        kube-scheduler             kube-scheduler itself
TCP       Inbound    10252        kube-controller-manager    kube-controller-manager itself
Worker nodes
Protocol  Direction  Port Range    Purpose              Used By
TCP       Inbound    10250         Kubelet API          kubelet itself, control plane components
TCP       Inbound    30000-32767   NodePort Services    All components
6. Configure iptables bridge settings
Adjust the sysctl settings on the master and worker nodes so that bridged traffic is correctly seen by iptables.
sudo vim /etc/sysctl.conf # add the settings below
sudo sysctl -p            # apply them
Settings to add:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
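The two bridge sysctls only take effect when the br_netfilter kernel module is loaded; a short sketch of loading it now and persisting it across reboots (the /etc/modules-load.d/k8s.conf filename is just a conventional choice):
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf   # load the module automatically at boot
sudo sysctl -p                                              # re-apply the settings above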
7. Install Docker
# Install Docker on the master and worker nodes
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum -y install docker-ce-19.03.15
sudo systemctl start docker
sudo systemctl enable docker
sudo vim /etc/docker/daemon.json
# add the following content
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
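Docker has to be restarted for the cgroup driver change to take effect, and the result can be checked afterwards; a quick sketch:
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo docker info | grep -i 'cgroup driver'   # should report: Cgroup Driver: systemd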
8. Install the Kubernetes components
Note: the worker nodes do not need kubectl.
# Master component installation
# On the three master nodes, install kubelet, kubeadm, and kubectl (stable versions from the Aliyun mirror)
sudo vim /etc/yum.repos.d/kubernetes.repo
sudo yum install kubelet-1.22.3 kubeadm-1.22.3 kubectl-1.22.3
sudo systemctl start kubelet.service
sudo systemctl enable kubelet.service
# Worker component installation
# On the two worker nodes, install kubelet and kubeadm
sudo vim /etc/yum.repos.d/kubernetes.repo
sudo yum install kubelet-1.22.3 kubeadm-1.22.3
sudo systemctl start kubelet.service
sudo systemctl enable kubelet.service
kubernetes.repo contents:
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
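If you want to confirm which 1.22.x builds the Aliyun mirror currently offers before pinning the version, yum can list them; a quick check, assuming the repo file above is already in place:
sudo yum clean all && sudo yum makecache         # refresh metadata for the new repo
yum list --showduplicates kubeadm | grep 1.22    # available kubeadm builds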
9. Initialize master1
# Run the following on master1 only, preferably as root
Since flannel is used as the network plugin, the recommended default is --pod-network-cidr=10.244.0.0/16;
but my own network already sits in the 10.x range, so I changed it to 192.244.0.0/16.
First write the configuration YAML in the root directory:
sudo vim kubeadm-config.yaml
Replace each ${} placeholder with your own IPs, set the pod subnet to whatever you want (if unsure, keep the default 10.244.0.0/16), and replace ${k8s_version} with your Kubernetes version.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: ${k8s_version}
controlPlaneEndpoint: ${vip}:6443
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - ${master1}
  - ${master2}
  - ${master3}
  - ${vip}
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
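Optionally, the required images can be pre-pulled from the Aliyun registry before running init, which speeds up the init step and surfaces registry problems early; a sketch using the same config file:
sudo kubeadm config images pull --config ./kubeadm-config.yaml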
Run the init command:
sudo kubeadm init --config ./kubeadm-config.yaml
Note: a possible pitfall here. Make sure the virtual IP configured in nginx can be pinged before running the init command.
When it finishes, follow the prompt and run these three commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Save the rest of the output; the tokens in it are needed later when adding new nodes.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join x.x.x.v:6443 --token wnr0fg.ffff \
--discovery-token-ca-cert-hash sha256:16ef3a56ed094ed95xxxxxxxxxxxx50199df3 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join x.x.x.v:6443 --token wnr0fg.fff \
--discovery-token-ca-cert-hash sha256:16ef3a56ed09xxxxxxxx40a6bf109750199df3
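At this point master1 will normally show NotReady when you list the nodes, because no pod network has been installed yet (that happens in section 11); a quick check:
kubectl get nodes        # expect k8s-master-1 with STATUS NotReady until flannel is applied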
10. Extend the lifetime of the k8s certificates
The k8s certificates expire after one year by default, and renewing them regularly is a hassle, so change the validity to 10 years instead.
1) Check the current certificate expiration:
kubeadm certs check-expiration
2) Install two tools on the master: git and Go
sudo yum install git
# Go: using the latest release from the official site, 1.21.4; copy the download URL from go.dev
wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
# extract to /usr/local
sudo tar -C /usr/local/ -xzf go1.21.4.linux-amd64.tar.gz
# configure the Go environment
sudo vim /etc/profile
# contents
export GO111MODULE=on
export GOROOT=/usr/local/go
export GOPATH=/home/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
# create the GOPATH directory
mkdir /home/gopath
# load the environment
source /etc/profile
3) Fetch the matching Kubernetes version, rebuild kubeadm, and replace the binary
git clone -b v1.22.3 --depth=1 https://github.com/kubernetes/kubernetes.git
cd kubernetes/
# edit cmd/kubeadm/app/constants/constants.go
# find CertificateValidity = time.Hour * 24 * 365 and change it to: CertificateValidity = time.Hour * 24 * 365 * 10
make WHAT=cmd/kubeadm # build
mv /usr/bin/kubeadm /usr/bin/kubeadm_bk
cp _output/bin/kubeadm /usr/bin/kubeadm
cp -rf /etc/kubernetes/pki/ /etc/kubernetes/pki_bk
sudo kubeadm certs renew all # regenerate the certificates
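If you prefer a non-interactive edit, the constant change can also be made with sed before running make; a sketch, assuming the line appears in the v1.22.3 source exactly as quoted above, plus a re-check after the renewal:
sed -i 's/CertificateValidity = time.Hour \* 24 \* 365$/CertificateValidity = time.Hour * 24 * 365 * 10/' cmd/kubeadm/app/constants/constants.go
kubeadm certs check-expiration   # after the renew, expirations should now be roughly 10 years out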
11. Install the network plugin (flannel)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Before applying, edit the key part of the config and set Network to the pod subnet chosen during initialization:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
Then apply it:
kubectl apply -f kube-flannel.yml
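Once the manifest is applied, the flannel pods should come up and the nodes should flip to Ready; a quick check (the flannel DaemonSet namespace varies by manifest version, so grep across all namespaces):
kubectl get pods -A | grep flannel   # one kube-flannel pod per node, STATUS Running
kubectl get nodes                    # nodes should now report Ready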
12. Add the workers to the cluster
Run the second join command from above (the one without --control-plane) on each worker to join the cluster. Joining takes a while because the worker has to pull the images it needs.
If you have lost the token and hash, they can be looked up again:
sudo kubeadm token list # list the tokens
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' # compute the CA certificate hash
kubeadm join k8s-nginx-vip:6443 --token f3fff --discovery-token-ca-cert-hash sha256:8a9824206ffffffc88e # substitute your own token and hash
After the join finishes, run kubectl get node on master1 to check the nodes; the STATUS column should show Ready for every node.
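Freshly joined workers show up with an empty ROLES column; if you want them labeled as workers (purely cosmetic), the standard node-role label can be added; a sketch using the hostnames from section 1:
kubectl label node k8s-node-1 node-role.kubernetes.io/worker=
kubectl label node k8s-node-2 node-role.kubernetes.io/worker=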
13. Add the other masters to the cluster
# Run the command below on master1 to upload the certificates and obtain the certificate key used to register the remaining masters
kubeadm init phase upload-certs --upload-certs
Append the resulting certificate key to the first join command printed by kubeadm init (the --control-plane one). The certificate key is only valid for two hours.
kubeadm join k8s-master-api.aeroht.local:6443 --token f3n3qxxxxds0wcx \
--discovery-token-ca-cert-hash sha256:8a98242062cbebbxxxx1bdaf7a7e95c939805c05a0b3aa1e6c88e \
--control-plane \
--certificate-key 596a04303f543b3dff71837c5e6097bb1159aca95d6bd4521cead646efb3e592
14. Configure ingress
Download the deploy.yaml configuration file from the download link;
run on master1:
kubectl create -f deploy.yaml
In a cluster you can designate a default Ingress Class to handle every Ingress resource that does not specify one. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass makes any Ingress without an ingressClassName use that IngressClass by default.
kubectl edit ingressclass nginx
Under metadata, add the annotation at the same level as creationTimestamp:
metadata:
  annotations:                                            # added
    ingressclass.kubernetes.io/is-default-class: "true"   # added
  creationTimestamp: "2023-11-27T07:00:23Z"
  generation: 1
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.0
...
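The same annotation can also be applied without an interactive edit; an equivalent one-liner:
kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true"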
Check the status of the ingress-nginx components:
kubectl get pod -n ingress-nginx
[root@adc-k8s-master-1 ~]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-8tmk2 0/1 Completed 0 19s
ingress-nginx-admission-patch--1-fhjw5 0/1 Completed 0 19s
ingress-nginx-controller-764d999ff-rw5p6 0/1 Running 0 19s
Check the ports exposed by the ingress-nginx service:
kubectl get svc -n ingress-nginx
[root@adc-k8s-master-1 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.99.57.167 <pending> 80:30152/TCP,443:30064/TCP 25s
ingress-nginx-controller-admission ClusterIP 10.99.234.187 <none> 443/TCP 25s
Add ingress forwarding on the Nginx node
(this part has not been verified yet)
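One way this could be done is to forward ports 80 and 443 on the nginx node to the NodePorts shown by kubectl get svc above (30152 and 30064 in this example; they differ per cluster). An unverified sketch, adding entries inside the existing stream block from section 3 (only one stream block is allowed, and any http server also listening on 80/443 on this node would have to be removed):
# inside the existing stream { } block from section 3
upstream ingress_http {
    server x.x.x.d:30152 max_fails=3 fail_timeout=30s;   # k8s-node-1
    server x.x.x.e:30152 max_fails=3 fail_timeout=30s;   # k8s-node-2
}
upstream ingress_https {
    server x.x.x.d:30064 max_fails=3 fail_timeout=30s;
    server x.x.x.e:30064 max_fails=3 fail_timeout=30s;
}
server {
    listen 80;
    proxy_pass ingress_http;
}
server {
    listen 443;
    proxy_pass ingress_https;
}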
15. Configure metrics
Run on master1.
Download the metrics yaml file, then:
kubectl create -f metrics-components.yaml
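Once the metrics-server pod is running, kubectl top should return figures (assuming the manifest deploys into kube-system as the upstream one does). If the pod keeps failing because it cannot verify the kubelets' self-signed certificates, a commonly used, less secure workaround is adding --kubelet-insecure-tls to the metrics-server container args. A quick check:
kubectl get pods -n kube-system | grep metrics-server   # pod should be Running
kubectl top nodes                                        # per-node CPU/memory once metrics are scraped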
16. Configure the dashboard
Run on master1.
Download the dashboard yaml file:
https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml
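Logging in to the dashboard needs a bearer token. A minimal sketch of creating an admin account and reading its token on Kubernetes 1.22 (the admin-user name and the cluster-admin binding are illustrative choices, not part of the original steps; on 1.24+ kubectl create token would be used instead):
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user
# On 1.22 a token Secret is generated automatically for the ServiceAccount; print it:
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}')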