A Complete Guide to Installing Kubernetes (k8s) with kubeadm
Part 1: Hardware Environment
Three machines, planned as one master and two nodes:
No. | IP | OS Version | Hostname | Specs | Role |
---|---|---|---|---|---|
1 | 192.168.159.210 | CentOS 7.7.1908 (Core) | vm210 | 2 CPU / 2 GB | Master |
2 | 192.168.159.211 | CentOS 7.7.1908 (Core) | vm211 | 2 CPU / 2 GB | Node |
3 | 192.168.159.212 | CentOS 7.7.1908 (Core) | vm212 | 2 CPU / 2 GB | Node |
Part 2: System Software Preparation
1. Configure hosts
vi /etc/hosts
Add the following:
127.0.0.1 vm210
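Optionally, a fuller sketch that lets the three machines from the table above resolve each other by hostname (an assumption for convenience, not required by kubeadm; adjust the IPs and hostnames to your environment):
cat <<EOF >> /etc/hosts
192.168.159.210 vm210
192.168.159.211 vm211
192.168.159.212 vm212
EOF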
2. Disable the firewall
[root@vm210 ~]# systemctl stop firewalld
[root@vm210 ~]# systemctl disable firewalld
[root@vm210 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@vm210 ~]#
3. Install Docker
Install it with yum; skip this step if Docker is already installed.
yum -y install docker
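A quick sanity check after the install (a minimal sketch; the exact package and version strings will vary):
# confirm the package is installed and check the client version
yum list installed | grep docker
docker --version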
4. Configure the Kubernetes yum repository
vi /etc/yum.repos.d/kubernetes.repo
Add the following:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
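To verify the new repository is picked up (a minimal check; run it on every machine that will install the Kubernetes packages):
# refresh the yum metadata cache and confirm the repo shows up
yum makecache fast
yum repolist | grep -i kubernetes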
5. Configure SELinux
vi /etc/selinux/config
Add SELINUX=disabled, and comment out SELINUX=enforcing and SELINUXTYPE=targeted, so the file looks like this:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
#SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
#SELINUXTYPE=targeted
SELINUX=disabled
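Editing /etc/selinux/config only takes effect after a reboot; to also turn SELinux off in the running system right away, a common companion step is:
# switch the running system to permissive mode immediately (no reboot needed)
setenforce 0
# confirm: should print Permissive (or Disabled after a reboot)
getenforce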
6. Disable swap
Swap hurts performance, and kubelet requires swap to be disabled.
1) Temporary, system-wide
swapoff -a (does not survive a reboot)
2) Permanent, system-wide
vi /etc/fstab and comment out the swap line.
A reboot is required; the change then persists across reboots.
#
# /etc/fstab
# Created by anaconda on Mon Dec 2 21:02:22 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=b232659c-bd84-46f0-928b-a46d55500934 /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap
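After running swapoff -a (and commenting out the fstab line), swap should show as zero:
# the Swap row should report 0 total / 0 used
free -m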
7. Configure iptables bridge settings
This ensures bridged traffic is seen by iptables so that it is routed correctly.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
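If the net.bridge.* keys are reported as missing, the br_netfilter kernel module is usually not loaded yet; a hedged sketch for loading it and re-checking:
# load the bridge netfilter module, then re-apply and verify the keys
modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables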
Part 3: Install kubeadm on the Master Node
1. Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
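A quick check that the tools are on the PATH and which version was pulled in (this walkthrough was done with v1.16.3):
kubeadm version
kubelet --version
kubectl version --client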
2. Start Docker
systemctl enable docker && systemctl start docker
3. Pull the required images
for i in `kubeadm config images list`; do
  # strip the k8s.gcr.io/ prefix to get the bare image name
  imageName=${i#k8s.gcr.io/}
  # pull from the Aliyun mirror, retag as k8s.gcr.io, then drop the mirror tag
  docker pull registry.aliyuncs.com/google_containers/$imageName
  docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.aliyuncs.com/google_containers/$imageName
done
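After the loop finishes, the images should be present under the k8s.gcr.io names that kubeadm expects:
# list the retagged control-plane images
docker images | grep k8s.gcr.io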
4. Adjust kubelet parameters
vi /etc/sysconfig/kubelet
Set the following:
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
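The kubelet cgroup driver should match the one Docker is using; a quick way to check what Docker reports (if it prints cgroupfs instead of systemd, either change Docker's daemon configuration or set --cgroup-driver=cgroupfs here):
docker info 2>/dev/null | grep -i "cgroup driver"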
5. Initialize with kubeadm
kubeadm init
When it completes, the output ends with something like the following:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.159.210:6443 --token ct4248.2egr8dv9k4avqul7 \
--discovery-token-ca-cert-hash sha256:4ca4f6835e9cd70b43be16b81d8340876dca0e064c6168342c140140d17f449b
The final kubeadm join command must be run on the node machines to join them to the cluster.
Following the hints above, run these commands on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
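At this point kubectl should already talk to the cluster; the master typically shows NotReady until the network plugin from Part 5 is installed:
kubectl get nodes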
Part 4: Install kubeadm on the Node Machines
1. Install kubeadm and kubelet
yum -y install kubeadm kubelet
2. Start Docker
systemctl enable docker && systemctl start docker
3. Pull the required images
# same mirror pull-and-retag loop as on the master node
for i in `kubeadm config images list`; do
  imageName=${i#k8s.gcr.io/}
  docker pull registry.aliyuncs.com/google_containers/$imageName
  docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.aliyuncs.com/google_containers/$imageName
done
4. Adjust kubelet parameters
vi /etc/sysconfig/kubelet
Set the following:
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
5. Join the master
The token comes from the output of kubeadm init on the master node.
kubeadm join 192.168.159.210:6443 --token ct4248.2egr8dv9k4avqul7 \
--discovery-token-ca-cert-hash sha256:4ca4f6835e9cd70b43be16b81d8340876dca0e064c6168342c140140d17f449b
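If the token from kubeadm init has expired (tokens are valid for 24 hours by default), a new join command can be generated on the master and pasted on the node:
# run on the master; prints a fresh kubeadm join ... command
kubeadm token create --print-join-command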
Part 5: Install the Network Plugin
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
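You can watch the Calico and CoreDNS pods come up; it may take a few minutes for everything to reach Running:
# -w keeps watching; press Ctrl+C to stop
kubectl get pods --namespace=kube-system -w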
Part 6: Check the Cluster Status
1. List the nodes
[root@vm210 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm210 Ready master 58m v1.16.3
vm211 Ready <none> 21m v1.16.3
vm212 Ready <none> 6m29s v1.16.3
2. Check pod status
[root@vm210 k8s]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-55754f75c-7wvrb 1/1 Running 0 6m20s
calico-node-9x82m 1/1 Running 0 6m20s
calico-node-gn5qh 1/1 Running 0 6m20s
calico-node-h8kvz 0/1 PodInitializing 0 6m20s
coredns-5644d7b6d9-h9sn2 1/1 Running 0 59m
coredns-5644d7b6d9-pwfl5 1/1 Running 0 59m
etcd-vm210 1/1 Running 0 58m
kube-apiserver-vm210 1/1 Running 0 58m
kube-controller-manager-vm210 1/1 Running 0 58m
kube-proxy-6hjk2 1/1 Running 0 22m
kube-proxy-bcmhh 1/1 Running 0 7m31s
kube-proxy-bt9rn 1/1 Running 0 59m
kube-scheduler-vm210 1/1 Running 0 58m
3. Inspect a specific pod
kubectl --namespace=kube-system describe pod <pod_name>
kubectl --namespace=kube-system describe pod calico-node-h8kvz
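If describe is not enough, the container logs are the next place to look (substitute the pod name, e.g. the calico-node pod above; add -c <container> if the pod has more than one container):
kubectl --namespace=kube-system logs calico-node-h8kvz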
4. Allow the master node to schedule pods like a regular node
kubectl taint nodes --all node-role.kubernetes.io/master-
[root@vm210 k8s]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/vm210 untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
5. Check the Kubernetes version
[root@vm210 k8s]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
At this point, a Kubernetes cluster has been stood up quickly with the kubeadm tool. If the installation fails, you can run kubeadm reset to restore the host to its original state, then run kubeadm init or kubeadm join again to retry the installation.
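A sketch of the reset-and-retry sequence mentioned above (the kubeconfig cleanup is only relevant on the master):
# wipe the kubeadm-generated state on this host
kubeadm reset
# optionally remove the old kubeconfig before re-running kubeadm init
rm -rf $HOME/.kube/config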
If you run into any problems during installation, feel free to leave a comment below and I will reply. You are also welcome to add me on WeChat: xydjun so we can discuss together.