1. Preparation
Address plan
IP ADDRESS     | HOST NAME
172.240.24.188 | master188
172.240.24.111 | node111
172.240.24.112 | node112
Unless a specific node is named, run every step below on all nodes.
Configure the /etc/hosts file on all three servers:
# cat >> /etc/hosts << EOF
172.240.24.188 master188 master188.qufujin.top
172.240.24.111 node111 node111.qufujin.top
172.240.24.112 node112 node112.qufujin.top
EOF
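A quick sanity check (not part of the original write-up) confirms that every name from the address plan now resolves locally:

```shell
# Hypothetical helper: verify each planned host name resolves via /etc/hosts.
resolved=0; unresolved=0
for host in master188 node111 node112; do
  if getent hosts "$host" >/dev/null 2>&1; then
    echo "$host -> $(getent hosts "$host" | awk '{print $1}')"
    resolved=$((resolved+1))
  else
    echo "$host does not resolve; check /etc/hosts"
    unresolved=$((unresolved+1))
  fi
done
echo "checked 3 names, $unresolved unresolved"
```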
Establish SSH trust (generate the key pair on master188, then push it to the nodes with the script below):
# ssh-keygen -t rsa
# cat ssh.sh
#!/bin/bash
# Only node111 and node112 are in the address plan above.
ip=(172.240.24.111 \
172.240.24.112)
pass="pwd@123"
port="1804"
yum install -y expect
for i in ${ip[@]}
do
expect -c "
spawn ssh-copy-id -p${port} -i /root/.ssh/id_rsa.pub root@${i}
expect {
\"*yes/no*\" {send \"yes\r\"; exp_continue}
\"*password*\" {send \"${pass}\r\"; exp_continue}
\"*Password*\" {send \"${pass}\r\";}
} "
done
# bash ssh.sh
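After the script finishes, key-based login can be verified non-interactively; a small check, assuming the same node list and SSH port as the script above:

```shell
# BatchMode forbids password prompts, so this only succeeds if the key was
# actually installed. Port 1804 matches the ssh.sh script above.
ok=0; failed=0
for i in 172.240.24.111 172.240.24.112; do
  if ssh -o BatchMode=yes -o ConnectTimeout=3 -p 1804 "root@$i" true 2>/dev/null; then
    echo "$i: key-based login OK"; ok=$((ok+1))
  else
    echo "$i: key-based login FAILED"; failed=$((failed+1))
  fi
done
echo "$ok of 2 nodes reachable without a password"
```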
Disable the firewall and SELinux
# systemctl stop firewalld && systemctl disable firewalld
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Disable swap on every node (comment out the swap entries in /etc/fstab; this takes effect after the reboot below):
# sed -i '/swap/ s@^\(.*\)$@#\1@g' /etc/fstab
Configure kernel parameters on every node so that traffic crossing the bridge also enters the iptables/netfilter framework:
# cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf
Persist the module load as well; a manual modprobe does not survive the reboot below, and these two sysctl keys cannot be applied while br_netfilter is not loaded:
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
Reboot for the changes to take effect (a reboot is required after changing the SELinux mode):
# reboot
2. Deploying the Kubernetes cluster
Unless a specific node is named, run every step below on all nodes.
Install Docker
# yum install -y epel-release yum-utils device-mapper-persistent-data lvm2 wget
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum install -y docker-ce
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
# systemctl enable docker && systemctl restart docker
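A quick verification (not in the original article) confirms the daemon is running and that the DaoCloud script registered a registry mirror; "Registry Mirrors" is a standard field in `docker info` output:

```shell
# Guarded so the check degrades gracefully on a host without Docker.
if command -v docker >/dev/null 2>&1; then
  docker --version
  if docker info 2>/dev/null | grep -iq "registry mirrors"; then
    echo "registry mirror configured"
  else
    echo "no registry mirror visible in docker info"
  fi
else
  echo "docker not installed on this host"
fi
docker_checked=yes
```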
Install kubeadm, kubelet, and kubectl
# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name = Kubernetes Repo
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled = 1
gpgcheck = 1
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum install -y kubeadm-1.11.3-0.x86_64 kubelet-1.11.3-0.x86_64 kubectl-1.11.3-0.x86_64
# systemctl enable kubelet && systemctl restart kubelet
(kubelet will keep restarting in a crash loop until kubeadm init/join provides its configuration; that is expected at this stage.)
Google's image registry is unreachable from mainland China, so pull the images from a domestic mirror and re-tag them with the names kubeadm expects during initialization:
# cat k8s_images.sh
#!/bin/bash
images=(kube-proxy-amd64:v1.11.3 \
kube-scheduler-amd64:v1.11.3 \
kube-controller-manager-amd64:v1.11.3 \
kube-apiserver-amd64:v1.11.3 \
etcd-amd64:3.2.18 \
coredns:1.1.3 \
pause:3.1)
for imageName in ${images[@]}
do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
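After running the script (`bash k8s_images.sh`), it is worth confirming that every tag kubeadm v1.11.3 expects is present locally. A sketch of such a check, assuming Docker is installed as above:

```shell
# Count which of the required k8s.gcr.io tags exist in the local image cache.
present=0; missing=0
for img in kube-proxy-amd64:v1.11.3 kube-scheduler-amd64:v1.11.3 \
           kube-controller-manager-amd64:v1.11.3 kube-apiserver-amd64:v1.11.3 \
           etcd-amd64:3.2.18 coredns:1.1.3 pause:3.1; do
  if docker image inspect "k8s.gcr.io/$img" >/dev/null 2>&1; then
    echo "OK       k8s.gcr.io/$img"; present=$((present+1))
  else
    echo "MISSING  k8s.gcr.io/$img"; missing=$((missing+1))
  fi
done
echo "$present present, $missing missing"
```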
Run the initialization on master188:
# kubeadm init \
--kubernetes-version=v1.11.3 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=172.240.24.188
kubeadm init option reference
# kubeadm init --help
--apiserver-advertise-address   # the address the apiserver advertises; default: 0.0.0.0
--apiserver-bind-port           # the port the apiserver listens on; default: 6443
--cert-dir                      # directory in which to store certificates; default: "/etc/kubernetes/pki"
--config                        # path to a kubeadm configuration file
--ignore-preflight-errors       # errors to ignore during preflight checks, e.g. swap
--kubernetes-version            # the Kubernetes version to deploy
--pod-network-cidr              # the network range used by pods
--service-cidr                  # the network range used by services
……(kubeadm init output truncated)……
You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.240.24.188:6443 --token aye96c.hdo6xzlskgmk4anz --discovery-token-ca-cert-hash sha256:b2ff321b46be33766a73191f31f6401eccac455eac9faeb6bf4c30a9e8407923
On success, init prints a join command like the one above. Save it; run it on each node (node111, node112) to join them to the cluster. The token expires after 24 hours by default; a fresh join command can be generated on the master with `kubeadm token create --print-join-command`.
# kubeadm join 172.240.24.188:6443 --token aye96c.hdo6xzlskgmk4anz --discovery-token-ca-cert-hash sha256:b2ff321b46be33766a73191f31f6401eccac455eac9faeb6bf4c30a9e8407923
Run all of the remaining steps on the master.
Set up the kubeconfig so the current user can use the cluster with kubectl:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
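For the root user there is a shorter alternative, mentioned in kubeadm's own init output: point kubectl straight at the admin config instead of copying it.

```shell
# Use the admin kubeconfig in place; this only lasts for the current shell
# unless added to a profile file.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "kubectl will now read $KUBECONFIG"
```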
Configure kubectl command completion (kubectl is the command-line client for the apiserver):
# yum install -y bash-completion
# source /usr/share/bash-completion/bash_completion
# echo 'source <(kubectl completion bash)' >> /etc/profile
# source /etc/profile
On Kubernetes 1.7 or later, a single command is enough: it fetches the online manifest, pulls the images it references, and deploys flannel. Watch the rollout with `kubectl get pods -n kube-system`; the nodes report Ready once flannel is running.
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check the status of the cluster components. The apiserver itself is absent from the list, which is fine: if the apiserver were unhealthy, this request could not have been served at all.
# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
Check the status of each node:
# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
master188.qufujin.top   Ready    master   11m   v1.11.3
node111.qufujin.top     Ready    <none>   8m    v1.11.3
node112.qufujin.top     Ready    <none>   8m    v1.11.3
View cluster information:
# kubectl cluster-info
Kubernetes master is running at https://172.240.24.188:6443
KubeDNS is running at https://172.240.24.188:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
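As a final smoke test (not part of the original article; the deployment name and image tag are illustrative), a small nginx deployment exercises scheduling on the worker nodes. In 1.11, `kubectl run` with `--replicas` still creates a Deployment.

```shell
# Guarded so the block degrades gracefully on a host without kubectl.
if command -v kubectl >/dev/null 2>&1; then
  kubectl run nginx-test --image=nginx:1.15 --replicas=2 --port=80
  kubectl expose deployment nginx-test --port=80 --type=NodePort
  kubectl get pods -o wide   # pods should land on node111/node112, not the master
  status="submitted"
else
  status="skipped (kubectl not found; run this on master188)"
fi
echo "smoke test: $status"
```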