1. Environment Planning
OS: CentOS 7.4 (x86_64)
Kubernetes install directory: /opt/kubernetes
Versions:
Kubernetes: v1.9
Docker: v17.12.0-ce
Etcd: 3.1
2. Install Docker
Run on all nodes:
setenforce 0
iptables -F
iptables -t nat -F
iptables -I FORWARD -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
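Note that setenforce 0 only disables SELinux until the next reboot; to keep it off permanently, the config file can be edited as well (optional, assuming the default /etc/selinux/config path):
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config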
Install Docker on the Node(s):
# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker package repository
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# Refresh the yum package index
yum makecache fast
# Install Docker CE
yum install -y docker-ce
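To match the Docker version listed in the planning section (v17.12.0-ce), the package can be pinned; the exact version string below is an assumption, so check what the repository actually offers first:
# List available versions, then install a specific one
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-17.12.0.ce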
Configure Docker to pull from the China registry mirror by default:
# vi /etc/docker/daemon.json
{
"registry-mirrors": [ "https://registry.docker-cn.com"]
}
# systemctl start docker
# systemctl enable docker
Test:
# docker info
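The configured mirror should appear in the output; a quick check (the output wording may vary across Docker versions):
# docker info | grep -A1 "Registry Mirrors"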
3. Install Etcd
3.1 Install the Etcd package
# yum install etcd -y
# vi /etc/etcd/etcd.conf
# systemctl start etcd
# systemctl enable etcd
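The exact contents of /etc/etcd/etcd.conf depend on your environment; a minimal single-node sketch, assuming 192.168.1.195 is the etcd host and using the key names from the stock CentOS etcd package:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.195:2379"
Once the service is up, it can be checked with:
# etcdctl --endpoints="http://192.168.1.195:2379" cluster-health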
4. Deploy the Flannel Network
1) Write the allocated subnet into etcd for flanneld to use:
# etcdctl --endpoints="http://192.168.1.195:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
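The value can be read back to confirm it was stored:
# etcdctl --endpoints="http://192.168.1.195:2379" get /coreos.com/network/config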
2) Download the binary package:
# wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
# tar zxvf flannel-v0.9.1-linux-amd64.tar.gz
# mv flanneld mk-docker-opts.sh /usr/bin
3) Configure Flannel:
vi /etc/sysconfig/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=http://192.168.1.195:2379 --ip-masq=true"
4) Manage Flannel with systemd:
# vi /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
ExecStart=/usr/bin/flanneld $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
5) Configure Docker to start on the Flannel-assigned subnet
Modify the Docker systemd unit so that it loads the subnet settings generated by mk-docker-opts.sh (shown as a screenshot in the original):
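A sketch of the relevant part of /usr/lib/systemd/system/docker.service after the change (assuming the stock unit shipped by the docker-ce package; only the [Service] lines that matter are shown):
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
With this in place, dockerd picks up the --bip and related options that mk-docker-opts.sh writes into /run/flannel/subnet.env.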
6) Start the services:
# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker
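After the restart, docker0 should sit inside the per-node subnet that Flannel allocated; one way to verify:
# cat /run/flannel/subnet.env
# ip addr show flannel.1
# ip addr show docker0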
5. Get the Kubernetes Binary Package
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
This binary package contains both the master and node components.
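As a sketch, the server tarball can be fetched and unpacked like this (the exact file name and URL come from the download links on the CHANGELOG page above and may differ by patch release):
# wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# ls kubernetes/server/bin/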
6. Run the Master Components
Unpack the prepared package: unzip master.zip
# mkdir -p /opt/kubernetes/{bin,cfg}
# mv kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin
# chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
# ./apiserver.sh 192.168.1.195 http://127.0.0.1:2379
# ./scheduler.sh 127.0.0.1
# ./controller-manager.sh 127.0.0.1
# echo "export PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
# source /etc/profile
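The apiserver.sh, scheduler.sh and controller-manager.sh scripts come from the prepared master.zip and are not reproduced here; roughly, apiserver.sh writes a config file and a systemd unit along these lines (a sketch for an insecure single-master test setup, the flag values are assumptions):
# /opt/kubernetes/cfg/kube-apiserver (sketch, not the actual script output)
KUBE_APISERVER_OPTS="--insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --advertise-address=192.168.1.195 \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.10.10.0/24 \
  --allow-privileged=true"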
7. Run the Node Components
Unpack the prepared package: unzip node.zip
# mkdir -p /opt/kubernetes/{bin,cfg}
# mv kubelet kube-proxy /opt/kubernetes/bin
# chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
# mv *.kubeconfig /opt/kubernetes/cfg/
# ./kubelet.sh 192.168.1.196 10.10.10.2
# ./proxy.sh 192.168.1.196
The node IP passed here is the IP address of this machine's eth0 interface.
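Similarly, kubelet.sh and proxy.sh come from the prepared node.zip; the two arguments (the node IP and, presumably, the cluster DNS address) map roughly onto kubelet flags like these (a sketch, not the actual script output):
# /opt/kubernetes/cfg/kubelet (sketch)
KUBELET_OPTS="--address=192.168.1.196 \
  --hostname-override=192.168.1.196 \
  --cluster-dns=10.10.10.2 \
  --cluster-domain=cluster.local \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --fail-swap-on=false"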
8. Check Cluster Status
On the Master, list the cluster nodes:
# kubectl get node
Check component status:
# kubectl get cs
9. Run a Test Example
Start an Nginx example with three replicas:
# kubectl run nginx --image=nginx --replicas=3
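To confirm the replicas are up before exposing them:
# kubectl get pods -o wide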
Once the pods are running, create a Service:
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx
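The NodePort assigned by Kubernetes shows up in the PORT(S) column; it can also be read out directly:
# kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'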
Access it from a Node (use any node IP and the NodePort shown above):
# curl http://<NodeIP>:<NodePort>