1. Server preparation
Three virtual machines: one master node and two worker nodes.

| Host | IP | Notes |
| --- | --- | --- |
| master | 192.168.2.79 | master node; internet access; Ubuntu 22.04; at least 2 CPU cores and 2 GB RAM |
| node1 | 192.168.2.115 | worker node; internet access; Ubuntu 22.04; at least 2 CPU cores and 2 GB RAM |
| node2 | 192.168.2.118 | worker node; internet access; Ubuntu 22.04; at least 2 CPU cores and 2 GB RAM |
2. Server configuration
Log in as root and run the following commands:
# 1. Disable the firewall
# ufw status shows the current state: inactive means the firewall is off, active means it is on
ufw status
# disable the firewall
ufw disable
# 2. Disable SELinux
# Ubuntu does not ship SELinux by default; if the selinux commands and config file are missing, SELinux is not installed and this step can be skipped
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
# 3. Disable swap (required — the Kubernetes docs insist on it)
# Note: ideally, do not create a swap partition when installing the VM in the first place
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
# 4. Host names and name resolution
# Set each machine's host name first (hostnamectl set-hostname master / node1 / node2),
# then add the resolution entries below on every node:
cat >> /etc/hosts <<EOF
192.168.2.79 master
192.168.2.115 node1
192.168.2.118 node2
EOF
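After editing /etc/hosts, a quick loop can confirm that every cluster name resolves. A small sketch, using the host names from the table above:

```shell
# Check that each cluster host name resolves locally.
# Run on any node after the /etc/hosts entries are in place.
result=""
for h in master node1 node2; do
  if getent hosts "$h" >/dev/null 2>&1; then
    result="$result $h:ok"
  else
    result="$result $h:missing"
  fi
done
echo "resolution check:$result"
```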
# 5. Time synchronization
# check the time zone and current time
date
# if the time zone is wrong, switch it to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# install chrony and sync time over the network (on Ubuntu the service unit is named chrony, not chronyd)
apt install chrony -y && systemctl enable --now chrony
# 6. Let bridged IPv4 traffic pass through the iptables chains
# (bridged traffic otherwise bypasses netfilter's iptables hooks, so kube-proxy's rules would never see it);
# write /etc/sysctl.d/k8s.conf — the file does not exist by default
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system
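A small sanity check, sketched below, confirms the file carries the keys kubeadm's preflight expects. It builds a demo copy of the file so it can run anywhere; on a real node, set CONF to /etc/sysctl.d/k8s.conf and drop the here-doc (CONF is just this sketch's variable name, nothing Kubernetes-specific):

```shell
# Verify that a k8s.conf-style sysctl file contains the required keys.
# CONF here is a throwaway demo copy; on a real node point it at
# /etc/sysctl.d/k8s.conf instead of generating one.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF

missing=0
for key in net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward; do
  grep -q "^${key}=1" "$CONF" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "sysctl config OK"
```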
# 7. Set up passwordless SSH between the servers (configure all three toward each other)
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.115
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.118
ssh node1
ssh node2
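Repeating prep commands on every node by hand gets tedious; once the keys are in place, a loop like the sketch below can fan one command out over SSH. The NODES list and the sample command are assumptions based on the IPs above, and it defaults to a dry run that only prints what it would execute — set DRY_RUN=0 to run for real:

```shell
# Fan one command out to all worker nodes over SSH.
# DRY_RUN defaults to 1 (print only); set DRY_RUN=0 to execute.
NODES="192.168.2.115 192.168.2.118"   # worker IPs from the table above
CMD="swapoff -a && ufw disable"        # example command; substitute your own
DRY_RUN="${DRY_RUN:-1}"

plan=""
for n in $NODES; do
  if [ "$DRY_RUN" = "1" ]; then
    plan="$plan ssh root@$n '$CMD';"
  else
    ssh "root@$n" "$CMD"
  fi
done
echo "plan:$plan"
```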
3. Install Kubernetes with kubeadm
Core commands:
Create the master node: kubeadm init
Join a worker node to the cluster: kubeadm join <master_ip:port>
3.1. Install containerd (run on all nodes)
# install containerd
wget -c https://github.com/containerd/containerd/releases/download/v1.6.8/containerd-1.6.8-linux-amd64.tar.gz
tar -xzvf containerd-1.6.8-linux-amd64.tar.gz
# the tarball unpacks into a bin/ directory holding the containerd binaries
mv bin/* /usr/local/bin/
# manage containerd with systemd
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
mv containerd.service /usr/lib/systemd/system/
systemctl daemon-reload && systemctl enable --now containerd
systemctl status containerd
# install runc
# runc is the low-level container runtime; it implements the commands (init, run, create, ps, ...) needed to run containers
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.amd64 && \
install -m 755 runc.amd64 /usr/local/sbin/runc
# install the CNI plugins
wget -c https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
# following the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
# adjust containerd's configuration, since containerd pulls images from k8s.gcr.io by default
# create a directory for containerd's config file
mkdir -p /etc/containerd
# dump the default config to a file
containerd config default | sudo tee /etc/containerd/config.toml
# edit the config file
vim /etc/containerd/config.toml
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" # search for sandbox_image and replace the default k8s.gcr.io/pause:3.6 with this value
SystemdCgroup = true # search for SystemdCgroup and change false to true
config_path = "/etc/containerd/certs.d" # search for config_path under [plugins."io.containerd.grpc.v1.cri".registry]; without it the hosts.toml mirror configured below is ignored
# create the registry-mirror directory
mkdir -pv /etc/containerd/certs.d/docker.io
# configure the mirror
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://registry-1.docker.io"
[host."https://b9pmyelo.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF
# load the kernel modules containerd needs
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# restart containerd
systemctl restart containerd
systemctl status containerd
# pull an image to test that containerd can create and start containers
ctr i pull docker.io/library/nginx:alpine # a successful pull means the runtime works
ctr images ls # list images
ctr c create --net-host docker.io/library/nginx:alpine nginx # create a container
ctr task start -d nginx # start it; if this succeeds containerd is fine
ctr containers ls # list containers
ctr tasks kill -s SIGKILL nginx # stop the container
ctr containers rm nginx # delete the container
3.2. Configure the Aliyun apt source for Kubernetes (run on all nodes)
# switch to the Aliyun Ubuntu mirror (optional); the entries below are for Ubuntu 20.04 (focal) — on 22.04, replace focal with jammy
cp /etc/apt/sources.list /etc/apt/sources.list.backup
cat > /etc/apt/sources.list <<EOF
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update
apt install apt-transport-https ca-certificates -y
apt install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind9-utils -y
# append the Aliyun Kubernetes apt source to the sources list
echo 'deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main' >> /etc/apt/sources.list
# import the repository signing key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# refresh the package index
apt update
3.3. Install kubeadm, kubelet and kubectl with apt (run on all nodes)
# install kubeadm, kubelet and kubectl on all three VMs
# list the kubeadm versions apt can see; we install 1.24.0 here — without a version pin, apt installs the latest
apt-cache madison kubeadm
# install on every node
apt install -y kubelet=1.24.0-00 kubeadm=1.24.0-00 kubectl=1.24.0-00
# enable kubelet at boot (no need to start it now — it cannot start yet; kubeadm init will bring it up later)
systemctl enable kubelet
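Since the cluster is pinned to 1.24.0, it may also be worth holding the packages so a routine apt upgrade cannot move them ahead of what kubeadm set up. A sketch (the dpkg guard just keeps it harmless on machines where the packages are absent):

```shell
# Hold kubelet/kubeadm/kubectl at their installed version so that a
# plain `apt upgrade` cannot bump them; unhold before a planned upgrade.
PKGS="kubelet kubeadm kubectl"
if dpkg -s kubelet >/dev/null 2>&1; then
  apt-mark hold $PKGS
else
  echo "k8s packages not installed here; nothing to hold: $PKGS"
fi
# later, before a deliberate upgrade:
# apt-mark unhold kubelet kubeadm kubectl
```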
3.4. Initialize the control plane (run on the master node)
# list the required images; you can pre-pull them ahead of init
kubeadm config images list --kubernetes-version=v1.24.0 --image-repository=registry.aliyuncs.com/google_containers
kubeadm config images pull --kubernetes-version=v1.24.0 --image-repository=registry.aliyuncs.com/google_containers
# the pulled images land in containerd's k8s.io namespace
Initialize Kubernetes (kubeadm init --help shows what each flag does):
kubeadm init \
--apiserver-advertise-address=192.168.2.79 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.24.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
On success, kubeadm init finishes with output like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.79:6443 --token hrp60s.a1efs0p7ct912y19 \
--discovery-token-ca-cert-hash sha256:5346e2b70c2f113ddfaf5f36166de73a2edd6f8fdaaec422663a9e1bcfbbd81e
# follow the printed instructions:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
3.5. Join the worker nodes to the cluster (run on node1 and node2)
Copy the kubeadm join command printed by a successful kubeadm init and run it on each worker node.
Note: the join token is only valid for 24 hours; once it expires, generate a fresh join command with kubeadm token create --print-join-command.
# run on the worker nodes
kubeadm join 192.168.2.79:6443 --token hrp60s.a1efs0p7ct912y19 \
--discovery-token-ca-cert-hash sha256:5346e2b70c2f113ddfaf5f36166de73a2edd6f8fdaaec422663a9e1bcfbbd81e
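When scripting the join step, the token and CA-cert hash can be scraped out of saved init output. The sketch below parses the sample join command shown above; on a real master you would feed it the output of kubeadm token create --print-join-command instead:

```shell
# Extract the token and discovery hash from a saved join command.
# The sample text here is the join command printed above.
join_cmd='kubeadm join 192.168.2.79:6443 --token hrp60s.a1efs0p7ct912y19 \
    --discovery-token-ca-cert-hash sha256:5346e2b70c2f113ddfaf5f36166de73a2edd6f8fdaaec422663a9e1bcfbbd81e'

token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
hash=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token"
echo "hash=$hash"
```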
3.6. Deploy the pod network (CNI plugin)
# download calico
wget https://docs.projectcalico.org/manifests/calico.yaml
# edit the file: find the two lines below, uncomment them, and set the value to the pod CIDR you chose at init
vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# apply it (assuming the images pull cleanly)
kubectl apply -f calico.yaml
If the CNI image pull fails, kubectl get node will keep showing the nodes as NotReady; a temporary workaround is to pull the image by hand (note the -n k8s.io flag — kubelet only sees images in containerd's k8s.io namespace):
ctr -n k8s.io image pull docker.io/calico/cni:v3.25.0
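Until the calico pods come up, the node list can be scanned for stragglers. The sketch below runs the scan over a hard-coded sample (hypothetical node states); on a live cluster you would pipe kubectl get nodes --no-headers into the same awk:

```shell
# Flag NotReady nodes in `kubectl get nodes` output.
# The sample text stands in for `kubectl get nodes --no-headers`.
nodes='master   Ready      control-plane   3h   v1.24.0
node1    Ready      <none>          2h   v1.24.0
node2    NotReady   <none>          2h   v1.24.0'

flagged=$(printf '%s\n' "$nodes" | awk '$2 == "NotReady" {print $1}')
echo "NotReady nodes: $flagged"
```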
3.7. "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
To run kubectl commands on a worker node, copy /etc/kubernetes/admin.conf from the master to the worker's /etc/kubernetes/ directory, then set it up as the kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3.8. Configure kubectl shell completion
# set up bash completion for kubectl; configuring the master node is enough
apt install -y bash-completion
echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
kubectl describe nodes
3.9. Test the cluster
# create an httpd deployment as a smoke test
kubectl create deployment httpd --image=httpd
# expose it on port 80 (other ports may be blocked by a firewall)
kubectl expose deployment httpd --port=80 --type=NodePort
# check that the pod is Running and note the NodePort on service/httpd
kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/httpd-757fb56c8d-w42l5 1/1 Running 0 39s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/httpd NodePort 10.109.29.1 <none> 80:32569/TCP 42s # external (node) port 32569
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h22m
# open it in a browser — the master IP or any node IP works, on port 32569
http://192.168.2.79:32569/
It works! # success
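To script this check, the node port can be read straight from the service: kubectl get svc httpd -o jsonpath='{.spec.ports[0].nodePort}' is the direct route. The sketch below shows the same extraction from the plain-text output above (the sample line is the one printed earlier):

```shell
# Pull the NodePort out of a `kubectl get svc` line.
# On a live cluster, prefer:
#   kubectl get svc httpd -o jsonpath='{.spec.ports[0].nodePort}'
svc_line='service/httpd   NodePort   10.109.29.1   <none>   80:32569/TCP   42s'
port=$(printf '%s\n' "$svc_line" | sed -n 's|.*:\([0-9][0-9]*\)/TCP.*|\1|p')
echo "NodePort: $port"
# then: curl http://192.168.2.79:$port/
```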