1: Environment preparation
master: 192.168.1.11
node1: 192.168.1.12
node2: 192.168.1.13
kubeadm: think of kubeadm as a deployment tool; it simplifies the process of standing up a Kubernetes cluster.
2: Preparation (run on master, node1, and node2)
Check that master, node1, and node2 can reach the Internet.
Disable the firewall:
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
getenforce
Disable swap:
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
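The sed expression above comments out every fstab line that mentions swap so the setting survives reboots. A quick way to see what it does, run against a throwaway copy rather than the real /etc/fstab (the sample fstab line below is hypothetical):

```shell
# Demo of the swap-disabling sed edit on a temporary file
# (the sample fstab line is made up for illustration)
fstab=$(mktemp)
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$fstab"
sed -i 's/.*swap.*/#&/' "$fstab"   # '&' re-inserts the matched line after the '#'
cat "$fstab"   # -> #/dev/mapper/centos-swap swap swap defaults 0 0
```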
Configure hostname-to-IP mappings:
cat <<EOF >> /etc/hosts
192.168.1.11 master
192.168.1.12 node1
192.168.1.13 node2
EOF
cat /etc/hosts
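Before rebooting, it is worth confirming every hostname from the machine list is actually mapped. A small loop does this; the demo below runs against a temporary copy so it can be tried anywhere (on a real node, point HOSTS_FILE at /etc/hosts instead):

```shell
# Check that every expected hostname appears in the hosts file
# (demo uses a temp copy; on a real node set HOSTS_FILE=/etc/hosts)
HOSTS_FILE=$(mktemp)
cat <<EOF >> "$HOSTS_FILE"
192.168.1.11 master
192.168.1.12 node1
192.168.1.13 node2
EOF
for h in master node1 node2; do
    grep -qw "$h" "$HOSTS_FILE" && echo "$h: mapped" || echo "$h: MISSING"
done
```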
reboot
Install Docker (all nodes)
yum -y install wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version
Configure a Docker registry mirror (all nodes)
mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://registry.docker-cn.com","https://51lfh9e0.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
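daemon.json must be strictly valid JSON or Docker will refuse to start after the restart. A quick validation pass with python3's stdlib json.tool catches stray commas or quotes beforehand; the demo below writes the same content to a temp file so it can be tried anywhere (on a real node, validate /etc/docker/daemon.json itself):

```shell
# Validate daemon.json syntax before restarting Docker
# (demo writes to a temp file; on a real node check /etc/docker/daemon.json)
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com","https://51lfh9e0.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$f" >/dev/null && echo "daemon.json: valid JSON"
```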
Configure the Aliyun Kubernetes yum repository (all nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains (all nodes)
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
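Note that on some kernels the net.bridge.* keys only exist once the br_netfilter module is loaded (modprobe br_netfilter), so load it first if sysctl --system complains about missing keys. As a quick sanity check that the fragment was written correctly, shown here on a temp file:

```shell
# Sanity-check the sysctl fragment we just wrote (demo on a temp file;
# on a real node inspect /etc/sysctl.d/k8s.conf)
conf=$(mktemp)
cat > "$conf" <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
grep -c '^net\.bridge\..*= 1$' "$conf"   # both keys present -> prints 2
```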
Switch Docker to the systemd cgroup driver (all nodes)
vim /usr/lib/systemd/system/docker.service
Change the ExecStart line so that dockerd uses the systemd cgroup driver:
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
systemctl daemon-reload
systemctl restart docker
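Since this edit has to be repeated on all three machines, a scripted sed alternative to the manual vim edit can be handy. The sketch below appends the flag to a temporary copy of the ExecStart line so it is safe to try anywhere; on a real node the target is /usr/lib/systemd/system/docker.service:

```shell
# Scripted alternative to the manual vim edit: append the cgroup flag with sed
# (demo on a temp file; on a real node target /usr/lib/systemd/system/docker.service)
unit=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd' > "$unit"
sed -i 's|^ExecStart=/usr/bin/dockerd.*|& --exec-opt native.cgroupdriver=systemd|' "$unit"
cat "$unit"   # -> ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
```

After the restart, running docker info on a real node should report the systemd cgroup driver.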
Install the Kubernetes packages (all nodes)
yum install -y kubeadm-1.15.0-0 kubectl-1.15.0-0 kubelet-1.15.0-0
systemctl enable kubelet
3: Initialization (run on master)
kubeadm init --apiserver-advertise-address=192.168.1.11 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
--image-repository string: specifies where images are pulled from (added in v1.13). The default is k8s.gcr.io; here we point it at the domestic mirror registry.aliyuncs.com/google_containers.
Record the kubeadm join command printed at the end of the init output; it is needed later when adding nodes to the cluster:
kubeadm join 192.168.1.11:6443 --token 2cv3mq.x0zbmqrdopz75ov2 \
--discovery-token-ca-cert-hash sha256:1d7cff9a6ee4a1c9007d0a391b36e49a2ecc0f03ecab74c9fb11b0075fc21a96
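The --discovery-token-ca-cert-hash value is just the SHA-256 digest of the cluster CA's DER-encoded public key, so if you lose it, it can be recomputed from /etc/kubernetes/pki/ca.crt with the openssl pipeline described in the kubeadm documentation. The demo below generates a throwaway self-signed certificate so it can be run anywhere; on master, use the real ca.crt instead:

```shell
# Recompute a discovery-token-ca-cert-hash: sha256 over the DER-encoded
# public key of the CA certificate. The demo generates a throwaway
# self-signed cert; on master, use /etc/kubernetes/pki/ca.crt instead.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' -days 1 \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'   # prints a 64-character hex digest
```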
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
4: Install flannel (run on master)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Without unrestricted Internet access this step may fail. Use kubectl get pod -n kube-system to watch the pods' progress and docker images to check whether the images have finished downloading; alternatively, download the images required by kube-flannel.yml elsewhere and import them into the master node by hand. The pods usually come up after a few minutes. My recommendation is to pull the required images onto the master node first, and only then run kubectl apply -f kube-flannel.yml.
If a node stays NotReady, manually load the flannel image into that node's local image store and it will become Ready:
#docker load < flannel.tar
kubectl apply -f kube-flannel.yml
kubectl get pod -n kube-system
kubectl get nodes
5: Join node1 and node2 (run on node1 and node2)
Because of network restrictions, the nodes cannot download the flannel image themselves, which leaves them stuck in the NotReady state; load the flannel image into each node's local image store manually:
#docker load < flannel.tar
Use the token generated earlier; if you have lost it, regenerate the join command with kubeadm token create --print-join-command:
kubeadm join 192.168.1.11:6443 --token qd3apb.ly13dx5944yxhykw --discovery-token-ca-cert-hash sha256:71de75d66c44eb49c4a330714df6183ec3eb46d1952bc951474724eb44aef0d5