Installing Kubernetes 1.12.1 on CentOS 7
Tags: CentOS-7, Kubernetes-1.12.1
Environment:
OS: CentOS 7, kernel 4.19.0-1.el7.elrepo.x86_64
Kubernetes: 1.12.1
Architecture: one master and one node
Prerequisites (run on every server):
1. Enable iptables forwarding of bridged traffic
First check the current value:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
If it prints 1, the rest of this step is unnecessary; otherwise continue below to enable it.
1.1 Edit the config file (on a stock CentOS 7 install, lines 7-9 of 00-system.conf hold the three bridge-nf-call settings):
sed -i '7,9s/0/1/g' /usr/lib/sysctl.d/00-system.conf
1.2 Load the br_netfilter module (check whether it is already loaded with lsmod | grep br_netfilter):
modprobe br_netfilter
1.3 Apply the change:
sysctl -p /usr/lib/sysctl.d/00-system.conf
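The change can be confirmed with a read-only check. A minimal sketch; it only reads from /proc and modifies nothing, so it is safe to run before and after the steps above:

```shell
# Report whether the bridge netfilter sysctls are enabled.
check_bridge_sysctls() {
    for key in bridge-nf-call-iptables bridge-nf-call-ip6tables; do
        f="/proc/sys/net/bridge/$key"
        if [ -r "$f" ] && [ "$(cat "$f")" = "1" ]; then
            echo "$key: ok"
        else
            echo "$key: not enabled (load br_netfilter and set it to 1)"
        fi
    done
}
check_bridge_sysctls
```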
2. Disable swap
2.1 Edit the sysctl file:
echo 'vm.swappiness = 0' >> /usr/lib/sysctl.d/00-system.conf
2.2 Apply the change:
sysctl -p /usr/lib/sysctl.d/00-system.conf
2.3 Turn off swap:
swapoff -a
2.4 Comment out the swap mount line in /etc/fstab so swap stays off after a reboot:
Before:
/dev/mapper/cl-swap swap swap defaults 0 0
After:
#/dev/mapper/cl-swap swap swap defaults 0 0
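The fstab edit can be done with sed instead of by hand. A sketch, demonstrated on a scratch copy so it is safe to run anywhere; on a real server, back up /etc/fstab and point the sed at it:

```shell
# Build a demo fstab, then comment out every uncommented swap entry.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' /tmp/fstab.demo
cat /tmp/fstab.demo
```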
3. Add hosts entries so hostnames resolve on every server
echo -e '192.168.2.168 node1.ztpt.com\n192.168.2.162 node2.ztpt.com\n192.168.2.170 node3.ztpt.com' >> /etc/hosts
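Resolution can then be verified with getent, which consults /etc/hosts as well as DNS and so reflects exactly what the cluster components will see. A sketch using the hostnames from this setup:

```shell
# Print the resolved address for each cluster hostname, or flag it as broken.
for h in node1.ztpt.com node2.ztpt.com node3.ztpt.com; do
    getent hosts "$h" || echo "UNRESOLVED: $h"
done
```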
4. Make sure iptables, SELinux, and firewalld are disabled (the output below shows the expected state):
[root@node1 ~]# getenforce
Disabled
[root@node1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@node1 ~]# systemctl status iptables
● iptables.service - IPv4 firewall with iptables
Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled; vendor preset: disabled)
Active: inactive (dead)
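The output above shows the target state; the commands below put a fresh server into it. A sketch: the sed is demonstrated on a scratch copy of /etc/selinux/config so it is safe to run anywhere, and the real commands (which need root) are left as comments:

```shell
# Demonstrate the SELinux config edit on a scratch copy.
cat > /tmp/selinux.config <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux.config
grep '^SELINUX=' /tmp/selinux.config

# On the real host (root required):
#   sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
#   setenforce 0                          # takes effect immediately
#   systemctl disable --now firewalld     # stop firewalld and keep it off
```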
5. Install docker, kubelet, kubectl, and kubeadm on all three servers
- Install Docker CE (see also: https://blog.51cto.com/wangxiaoke/2174103)
Add the docker repository:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install dependencies:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Install docker-ce:
yum install -y docker-ce
Enable the docker service at boot:
systemctl enable docker.service
Configure a registry mirror (Aliyun provides a free mirror service):
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://kzflpq4b.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
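A syntax error in daemon.json prevents the docker daemon from starting, so it is worth validating the file as JSON before restarting. A sketch; it writes to /tmp rather than /etc/docker so it can be tried anywhere, and python3 is used only as a convenient JSON validator:

```shell
# Write the mirror config to a scratch file and confirm it parses as JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://kzflpq4b.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool < /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```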
- Install kubelet, kubectl, and kubeadm
Add the kubernetes repository (note: yum expects the gpgkey URLs to be separated by spaces, not commas):
tee /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/apt-key.gpg
EOF
# Build the yum metadata cache
sudo yum makecache
Install kubelet, kubectl, and kubeadm:
Note: if yum complains about the GPG key, import it manually with rpm --import, or disable gpgcheck.
yum install -y kubelet kubectl kubeadm
Enable kubelet at boot:
systemctl enable kubelet.service
Note: this is the end of the steps common to all nodes. From here on, pay attention to which steps run on the master and which run on the nodes.
Operations on the master
- Initialize the master node
Note: the Great Firewall blocks access to the k8s image registry. There are two workarounds:
1) Pull the images from a mirror on Aliyun or Docker Hub and re-tag them; people have published scripts for this.
2) Configure an HTTP proxy for docker.
I used one of the published re-tag scripts.
Script contents:
#!/bin/sh
# Pull the images from the mirror
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.1
docker pull mirrorgooglecontainers/kube-proxy:v1.12.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2
# Re-tag them with the names kubeadm expects
docker tag mirrorgooglecontainers/kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.12.1 k8s.gcr.io/kube-scheduler:v1.12.1
docker tag mirrorgooglecontainers/kube-apiserver:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.12.1 k8s.gcr.io/kube-controller-manager:v1.12.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
# Remove the now-unneeded mirror tags
docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2
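The same pull/tag/rmi sequence can be generated from a single image list instead of being written out by hand. A sketch that only prints the commands (pipe its output to sh to actually run them):

```shell
# Every image except coredns lives under mirrorgooglecontainers with the
# same name and tag that kubeadm expects under k8s.gcr.io.
images="kube-apiserver:v1.12.1 kube-controller-manager:v1.12.1 \
kube-scheduler:v1.12.1 kube-proxy:v1.12.1 pause:3.1 etcd:3.2.24"
for img in $images; do
    echo "docker pull mirrorgooglecontainers/$img"
    echo "docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img"
    echo "docker rmi mirrorgooglecontainers/$img"
done
# coredns has its own repository on Docker Hub.
echo "docker pull coredns/coredns:1.2.2"
echo "docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2"
echo "docker rmi coredns/coredns:1.2.2"
```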
Initialize the master:
kubeadm init --kubernetes-version=stable-1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
Output (save this output; you will need it later):
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.2.168:6443 --token j1v9o1.wxd0xz5mv1qgo6b1 --discovery-token-ca-cert-hash sha256:6ae6c734198b0a69e73c8d7b576e8692514e3aa642f9431d21234e86f35b316f
As the output suggests, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install flannel (this step downloads images and starts pods automatically; depending on your network it may be slow, so be patient):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check node status:
[root@node1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node1.ztpt.com NotReady master 6m45s v1.12.1
List namespaces:
[root@node2 ~]# kubectl get namespace
NAME STATUS AGE
default Active 12h
kube-public Active 12h
kube-system Active 12h
List pods (watch the READY and STATUS columns; if a pod is unhealthy, check the pod logs and the kubelet logs using the commands given at the end of this article, and as a last resort reset and re-initialize with the reset commands, also given at the end):
[root@node1 ~]# kubectl get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-576cbf47c7-5tgnm 1/1 Running 1 18h 10.244.0.47 node1.ztpt.com <none>
coredns-576cbf47c7-r9fr6 1/1 Running 1 18h 10.244.0.46 node1.ztpt.com <none>
etcd-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-apiserver-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-controller-manager-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-flannel-ds-amd64-rx9jw 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-proxy-nnmpj 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-scheduler-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
Joining a NODE to the cluster (if you have lost the join command, see the end of this article)
First make sure the prerequisite steps above were completed on the node, and that every pod on the master is healthy.
Before running the join command: because of the firewall, the following three images must be exported from the master and imported on the node:
k8s.gcr.io/kube-proxy v1.12.1
quay.io/coreos/flannel v0.10.0-amd64
k8s.gcr.io/pause 3.1
Export command: docker save <image> > <image>.tar
Import command: docker load < <image>.tar
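For the three images listed above, the save/load commands can be printed from one list. A sketch; it only prints the commands, so it is safe to run without docker installed. The / and : in image names are mapped to _ to produce valid file names:

```shell
for img in k8s.gcr.io/kube-proxy:v1.12.1 quay.io/coreos/flannel:v0.10.0-amd64 k8s.gcr.io/pause:3.1; do
    tar="$(echo "$img" | tr '/:' '__').tar"
    echo "docker save $img > $tar   # on the master"
    echo "docker load < $tar       # on the node, after copying the file over"
done
```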
My join failed with an error about missing IPVS kernel modules; it succeeded after loading them:
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
kubeadm join 192.168.2.168:6443 --token 1evrs8.iz8bl6l77jtal4na --discovery-token-ca-cert-hash sha256:fd509be1a3362afbff39ed807b5c25ef7a5034feb6876df1b76c0a0d8eb637db
Removing a NODE
# First drain the node (node2.ztpt.com is the node name)
kubectl drain node2.ztpt.com --delete-local-data --force --ignore-daemonsets
# Then delete it
kubectl delete node node2.ztpt.com
Miscellaneous notes
If initialization fails, clean up with the following before re-initializing:
kubeadm reset
ip link delete flannel.1
ip link delete cni0
rm -rf /var/lib/etcd/*
Follow the kubelet logs:
[root@node2 ~]# journalctl -u kubelet -f
Follow a pod's logs:
[root@node2 ~]# kubectl logs -f kube-apiserver-node2.ztpt.com --namespace=kube-system
# -f streams the log, like the -f in tail -f
# kube-apiserver-node2.ztpt.com is the pod name
What if you forget the kubeadm join command?
- The token used by kubeadm join expires after 24 hours by default; create a new one with:
kubeadm token create
- If you have merely forgotten the token, list the existing ones with:
kubeadm token list
(an expired token still has to be recreated)
- If you have also lost the --discovery-token-ca-cert-hash value, recompute it with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Then join the cluster with the new token and CA hash.
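The hash pipeline above can be tried without a cluster by pointing it at a throwaway self-signed certificate. A sketch; on a real master, replace /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway cert, then extract its public key and hash it
# the same way the pipeline above does.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo" 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```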
Reposted from: https://blog.51cto.com/wangxiaoke/2309457