192.168.33.15 master-1 master
192.168.33.16 node-1 slave
192.168.33.17 node-2 slave
echo '
192.168.33.15 master-1
192.168.33.16 node-1
192.168.33.17 node-2' >> /etc/hosts
CentOS version:
1. Disable firewalld and SELinux
Run on every machine:
systemctl stop firewalld && systemctl disable firewalld
Then check:
firewall-cmd --state  # shows the firewall state: "not running" once stopped, "running" when active
Disabling SELinux requires a reboot:
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
After the reboot, run sestatus to check the SELinux state; it should now report disabled.
Turn off swap: swapoff -a
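swapoff -a only lasts until the next reboot. A minimal sketch that also comments out the swap entries in fstab so swap stays off permanently (the file argument is only there so the edit can be tried on a copy first; the usual CentOS fstab layout is assumed):

```shell
# disable_swap: turn swap off now and comment out any swap entries in
# the given fstab (defaults to /etc/fstab) so the change survives a
# reboot.
disable_swap() {
  local fstab="${1:-/etc/fstab}"
  swapoff -a 2>/dev/null || true            # no-op when not root
  sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab"
}
```

Run disable_swap (with no argument) as root on every machine.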
2. Time synchronization
yum -y install ntp
ntptime
timedatectl
systemctl enable ntpd
systemctl restart ntpd.service
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
3. Download the k8s_images.tar.bz2 archive from the network drive
Run on the master node.
Link: https://pan.baidu.com/s/1ExMT7DzpardBWarYwSaM9Q#list/path=%2F  Password: otpq
After downloading, upload k8s_images.tar.bz2 to /root on the 192.168.33.15 server.
Set up mutual trust between the machines: generate an ssh key pair
[root@master-1 ~]# ssh-keygen
Install the local ssh public key into the corresponding account on each remote host
[root@master-1 ~]# ssh-copy-id master-1
[root@master-1 ~]# ssh-copy-id node-1
[root@master-1 ~]# ssh-copy-id node-2
Copy k8s_images.tar.bz2 from the master server to the node servers
scp /root/k8s_images.tar.bz2 root@192.168.33.16:/root
scp /root/k8s_images.tar.bz2 root@192.168.33.17:/root
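The two scp commands can be folded into one loop. A sketch using a hypothetical push_to_nodes helper (the hostnames resolve through the /etc/hosts block added earlier, and the loop relies on the keys installed with ssh-copy-id above):

```shell
# push_to_nodes: copy one file into /root on every host given as an
# argument (hypothetical helper, not part of the original steps).
push_to_nodes() {
  local file=$1; shift
  local host
  for host in "$@"; do
    scp "$file" "root@$host:/root/"
  done
}
```

Usage: push_to_nodes /root/k8s_images.tar.bz2 node-1 node-2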
4. Unpack the tar.bz2 archive and start the installation
[root@master-1 ~]# tar xvf k8s_images.tar.bz2
Install docker-ce along with its dependencies
[root@master-1 k8s_images]# rpm -ivh libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm libseccomp-2.3.1-3.el7.x86_64.rpm --force --nodeps
[root@master-1 k8s_images]# rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm --force --nodeps
[root@master-1 k8s_images]# rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm --force --nodeps
5. Switch the docker registry mirror to the domestic DaoCloud one
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://a58c8480.m.daocloud.io
Start docker and enable it at boot
[root@master-1 ~]# systemctl restart docker && systemctl enable docker
Configure the bridge/routing kernel parameters to prevent kubeadm routing warnings
echo "
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
" >> /etc/sysctl.conf
sysctl -p
If sysctl reports the net.bridge.* keys as unknown, load the br_netfilter kernel module first (modprobe br_netfilter) and run sysctl -p again.
6. Install the kubernetes rpms
[root@master-1 k8s_images]# rpm -ivh kubectl-1.9.0-0.x86_64.rpm kubeadm-1.9.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm \
kubernetes-cni-0.6.0-0.x86_64.rpm socat-1.7.3.2-2.el7.x86_64.rpm
7. Load the images
[root@master-1 k8s_images]# cd docker_images/
[root@master-1 docker_images]# for image in *; do echo "$image is loading" && docker load < "$image"; done
Run docker images to check the loaded images
II. Master node operations
1. Start kubelet
[root@master-1 ~]# systemctl start kubelet&& systemctl enable kubelet
Initialize the master node.
Kubernetes supports several network plugins, such as flannel, weave and calico. flannel is used here, so the --pod-network-cidr flag must be set (10.244.0.0/16 is the default network in kube-flannel.yml):
[root@master-1 ~]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.33.15
If an error like the following appears:
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Jan 20 15:22:53 master-1 systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jan 20 15:22:53 master-1 systemd: Unit kubelet.service entered failed state.
Jan 20 15:22:53 master-1 systemd: kubelet.service failed.
Cause:
The kubelet cgroup driver differs from docker's: docker defaults to cgroupfs, while the kubelet rpm defaults to systemd.
Fix:
1.1 Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the cgroup-driver value from systemd to cgroupfs
1.2 systemctl daemon-reload && systemctl restart kubelet
1.3 kubeadm reset
1.4 Re-run kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.33.15
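Step 1.1 can be scripted instead of edited by hand. A sketch that rewrites the driver flag in the kubeadm drop-in (path taken from the steps above; the file argument exists only so the edit can be tried on a copy first):

```shell
# set_cgroupfs: change the kubelet cgroup driver from systemd to
# cgroupfs in the given drop-in file (defaults to the path installed
# by the kubeadm 1.9 rpm, per the steps above).
set_cgroupfs() {
  local conf="${1:-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf}"
  sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' "$conf"
}
```

After running set_cgroupfs, continue with systemctl daemon-reload && systemctl restart kubelet as in step 1.2.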
The following output means the kubeadm initialization succeeded:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 67f800.7b64ede266ca7bf4 192.168.33.15:6443 --discovery-token-ca-cert-hash sha256:9ed80c1d6ab0bfb7b9f759504dff916a98ca224b7e7bf82af4eda6b4bd6de5ee
Note: save the kubeadm join ... command somewhere. If the token is forgotten, it can be recovered with kubeadm token list.
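If the whole join command is lost (not just the token), the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA. A sketch of the standard openssl pipeline (the cert path is the kubeadm default on this master):

```shell
# ca_cert_hash: print the sha256 hash of the CA public key, i.e. the
# value expected after "sha256:" in kubeadm join. Defaults to the CA
# written by kubeadm init.
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
```

Combine a token from kubeadm token list with "sha256:$(ca_cert_hash)" to rebuild the join command.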
2. As the output above notes, kubectl cannot control the cluster yet; the kubeconfig environment must be set up first:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Test with kubectl version
[root@master-1 k8s_images]# kubectl version
4. Deploy flannel straight from the offline package.
Run:
[root@master-1 k8s_images]# kubectl create -f kube-flannel.yml
5. Check the pod status in all namespaces; everything should be Running
[root@master-1 k8s_images]# kubectl get pod --all-namespaces
6. Deploy kubernetes-dashboard, using the kubernetes-dashboard.yaml from the offline package
[root@master-1 k8s_images]# kubectl create -f kubernetes-dashboard.yaml
[root@master-1 k8s_images]# kubectl get pod --all-namespaces
III. Node operations
1. Edit the kubelet config file and, as on the master above, change the cgroup driver from systemd to cgroupfs:
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
2. Restart kubelet
systemctl daemon-reload
systemctl enable kubelet&&systemctl restart kubelet
3. Run the kubeadm join --token ... command printed by kubeadm init on the master
kubeadm join --token 67f800.7b64ede266ca7bf4 192.168.33.15:6443 --discovery-token-ca-cert-hash sha256:9ed80c1d6ab0bfb7b9f759504dff916a98ca224b7e7bf82af4eda6b4bd6de5ee
Check on the master node:
[root@master-1 ~]# kubectl get nodes
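For checking the join from a script, a small sketch with a hypothetical count_ready helper that counts Ready rows in the kubectl get nodes output (all three machines should eventually report Ready):

```shell
# count_ready: read `kubectl get nodes` output on stdin and print how
# many nodes are in the Ready state (hypothetical helper).
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}
```

Usage: kubectl get nodes | count_ready  (expect 3 once node-1 and node-2 have joined)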