Part 1: Environment preparation
The environment is described in the table below:
Initialize all machines (disable the firewall and SELinux):
systemctl stop firewalld
systemctl disable firewalld
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
setenforce 0
Set the corresponding hostname on each node:
master:
hostnamectl set-hostname master
node1:
hostnamectl set-hostname node1
node2:
hostnamectl set-hostname node2
Reboot each machine: reboot
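If the three machines are reachable over SSH from one workstation, the per-node steps above can be driven from a single loop. A dry-run sketch (hostnames are the ones used in this walkthrough; root SSH access is an assumption, and dropping the `echo` would actually execute the commands):

```shell
# Dry run: print the per-node hostname command instead of executing it.
# Assumes root SSH access to master/node1/node2; remove `echo` to run for real.
init_cmds() {
  local host
  for host in master node1 node2; do
    echo "ssh root@$host hostnamectl set-hostname $host"
  done
}
init_cmds
```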
Part 2: Running the master components
Install etcd:
Download the etcd-v3.0.0-linux-amd64.tar.gz package from: https://github.com/coreos/etcd/releases/tag/v3.0.0
Extract the package:
$ tar xzvf etcd-v3.0.0-linux-amd64.tar.gz
$ cd etcd-v3.0.0-linux-amd64
$ chmod 755 etcd etcdctl
$ cp etcd /usr/bin/etcd
$ cp etcdctl /usr/bin/etcdctl
Create the etcd systemd unit file: vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/etcd \
--name=etcd \
--data-dir=/var/lib/etcd \
--advertise-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
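When scripting the install across several machines, the same unit can be written non-interactively with a heredoc instead of an editor. A sketch (the destination path is a parameter so the helper can be exercised outside /usr/lib):

```shell
# Sketch: write the etcd unit file shown above from a script.
# Pass the destination path, normally /usr/lib/systemd/system/etcd.service.
write_etcd_unit() {
  cat > "$1" <<'EOF'
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/etcd \
--name=etcd \
--data-dir=/var/lib/etcd \
--advertise-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
}
# write_etcd_unit /usr/lib/systemd/system/etcd.service
```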
Create the etcd data directory (it must exist before the etcd service is started):
$ mkdir /var/lib/etcd
Enable the service at boot and start it:
$ systemctl daemon-reload
$ systemctl enable etcd.service
$ systemctl start etcd.service
Check which ports etcd is listening on:
$ netstat -tnlp
Check that etcd is healthy:
$ etcdctl -C http://127.0.0.1:4001 cluster-health
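Right after a restart the endpoint may not answer yet, so scripted installs benefit from a small polling helper before the health check. A sketch (not part of etcd itself; assumes curl is available):

```shell
# Poll an HTTP endpoint until it answers, retrying once per second up to
# `attempts` times; returns non-zero on timeout. Sketch for scripted installs.
wait_healthy() {
  local url=$1 attempts=${2:-30} i=0
  while ! curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}
# wait_healthy http://127.0.0.1:2379/health 30
```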
Prepare the Kubernetes packages:
Download Kubernetes v1.3.0 from: https://github.com/kubernetes/kubernetes/releases/tag/v1.3.0
Extract:
$ tar zxvf kubernetes.tar.gz
$ cd kubernetes/server
$ tar zxvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes/server/bin
$ cp kube-apiserver /usr/bin/kube-apiserver
$ cp kube-controller-manager /usr/bin/kube-controller-manager
$ cp kube-scheduler /usr/bin/kube-scheduler
$ cp kubectl /usr/bin/kubectl
Install the kube-apiserver service:
Create the unit file: vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service
[Service]
ExecStart=/usr/bin/kube-apiserver \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd-servers=http://127.0.0.1:4001 \
--service-cluster-ip-range=172.17.0.0/16 \
--service-node-port-range=1-65535 \
--logtostderr=false \
--v=0 \
--log-dir=/var/log/kubernetes
Restart=on-failure
Type=notify
[Install]
WantedBy=multi-user.target
Enable the service at boot and start it:
$ systemctl daemon-reload
$ systemctl enable kube-apiserver.service
$ systemctl start kube-apiserver.service
Install the kube-controller-manager service:
Create the unit file: vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
ExecStart=/usr/bin/kube-controller-manager \
--master=http://127.0.0.1:8080 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Enable the service at boot and start it:
$ systemctl daemon-reload
$ systemctl enable kube-controller-manager.service
$ systemctl start kube-controller-manager.service
Install the kube-scheduler service:
Create the unit file: vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
ExecStart=/usr/bin/kube-scheduler \
--master=http://127.0.0.1:8080 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Enable the service at boot and start it:
$ systemctl daemon-reload
$ systemctl enable kube-scheduler.service
$ systemctl start kube-scheduler.service
Check the health of each component (172.16.198.142 is the master's IP):
$ kubectl -s http://172.16.198.142:8080 get componentstatus
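When automating, the status table can be checked mechanically instead of by eye. The helper below assumes the usual column layout of `get componentstatus` output, where the second column is STATUS:

```shell
# Fail (non-zero exit) if any component row reports a status other than
# Healthy. Assumes column 2 of `kubectl get componentstatus` is STATUS.
check_cs() {
  awk 'NR > 1 && $2 != "Healthy" { bad = 1 } END { exit bad }'
}
# kubectl -s http://172.16.198.142:8080 get componentstatus | check_cs
```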
Part 3: Running the node components (installation is similar on each node)
Install a recent Docker version (remote API version 1.24):
cat > /etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/experimental/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
Identify any old Docker components and remove them:
rpm -qa |grep docker
docker-selinux-1.10.3-44.el7.centos.x86_64
docker-common-1.10.3-44.el7.centos.x86_64
docker-forward-journald-1.10.3-44.el7.centos.x86_64
docker-1.10.3-44.el7.centos.x86_64
[root@node1 ~]# yum -y remove docker-selinux-1.10.3-44.el7.centos.x86_64
[root@node1 ~]# yum -y remove docker-common-1.10.3-44.el7.centos.x86_64
[root@node1 ~]# yum -y remove docker-forward-journald-1.10.3-44.el7.centos.x86_64
[root@node1 ~]# yum -y remove docker-1.10.3-44.el7.centos.x86_64
Install docker-engine:
yum -y install docker-engine
Enable Docker at boot and start it:
$ systemctl enable docker
$ systemctl start docker
Check the Docker version:
$ docker version
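Since the point of the newer repo is a remote API of at least 1.24, the check can be scripted. A sketch using a generic dotted-version compare (requires GNU `sort -V`; the `--format` field name is from Docker's CLI templating and may vary by release):

```shell
# version_ge MIN ACTUAL: succeed if ACTUAL >= MIN, using GNU `sort -V`
# for a dotted-version comparison.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}
# version_ge 1.24 "$(docker version --format '{{.Server.APIVersion}}')"
```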
From the master, copy the kubelet and kube-proxy binaries out of kubernetes/server/kubernetes/server/bin to each node:
$ scp kubelet root@172.16.198.136:/root/oyfm
$ scp kube-proxy root@172.16.198.136:/root/oyfm
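With more nodes the copy is easy to loop. A dry-run sketch (172.16.198.136 is node1 from this walkthrough; extend NODES with your other node IPs and drop the `echo` to execute):

```shell
# Dry run: print the scp commands that would distribute the node binaries.
# NODES holds node IPs; only node1's IP is known from this walkthrough.
NODES="172.16.198.136"
copy_cmds() {
  local ip bin
  for ip in $NODES; do
    for bin in kubelet kube-proxy; do
      echo "scp $bin root@$ip:/root/oyfm"
    done
  done
}
copy_cmds
```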
On node1:
$ cd /root/oyfm
$ cp kubelet /usr/bin/kubelet
$ cp kube-proxy /usr/bin/kube-proxy
Configure the kubelet unit file:
Create the file: vim /usr/lib/systemd/system/kubelet.service
[Unit]
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
--api-servers=http://172.16.198.142:8080 \
--hostname-override=node1 \
--logtostderr=false \
--pod-infra-container-image=index.tenxcloud.com/google_containers/pause:0.8.0 \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable the service at boot and start it:
$ systemctl daemon-reload
$ systemctl enable kubelet.service
$ systemctl start kubelet.service
Configure the kube-proxy unit file:
Create the file: vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
--master=http://172.16.198.142:8080 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable the service at boot and start it:
$ systemctl daemon-reload
$ systemctl enable kube-proxy.service
$ systemctl start kube-proxy.service
Setup is complete; install the remaining nodes the same way as node1 above.
Check node status:
$ kubectl get node