Setting Up a Kubernetes Cluster on CentOS 7.2

Chapter 1 Environment Overview and Preparation
1.1 Virtual machine operating system (CentOS 7.2)

[root@k8s-master ~]# uname -a
Linux k8s-master 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s-master ~]# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core)

1.2 Host information
Two virtual machines were prepared to host the Kubernetes environment; details are as follows:

Role                     Hostname     IP
Master, etcd, registry   k8s-master   192.168.1.200
Node1                    k8s-node-1   192.168.1.201

1.3 Set the hostnames of both virtual machines
1.3.1 On the master, run:

[root@k8s-master ~]# hostnamectl --static set-hostname  k8s-master

1.3.2 On node1, run:

[root@k8s-node-1 ~]# hostnamectl --static set-hostname  k8s-node-1

1.3.3 Configure /etc/hosts on both virtual machines by running:

echo '192.168.1.200 k8s-master
192.168.1.200 etcd
192.168.1.200 registry
192.168.1.201 k8s-node-1' >> /etc/hosts
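
Both machines should now resolve the etcd and registry aliases to the master's IP. A quick optional sanity check, run on either machine:

getent hosts k8s-master etcd registry k8s-node-1
ping -c 1 etcd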

1.3.4 Disable the firewall on both machines

systemctl disable firewalld.service
systemctl stop firewalld.service
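
To confirm the firewall is really stopped and will stay off after a reboot:

firewall-cmd --state                    # should print "not running"
systemctl is-enabled firewalld.service  # should print "disabled"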

Chapter 2 Deploy etcd (this service can run on its own host or on the master)
2.1 Kubernetes depends on etcd, so deploy etcd first. This guide installs it via yum:

[root@k8s-master ~]# yum install etcd -y

2.2 The yum-installed etcd uses /etc/etcd/etcd.conf as its default configuration file. Edit it so the effective (non-comment) settings are:

[root@k8s-master ~]# egrep -v "^$|^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.200:2379"
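
ETCD_LISTEN_CLIENT_URLS controls where etcd binds for client traffic, while ETCD_ADVERTISE_CLIENT_URLS is the address it tells clients to use. If you would rather apply the two changed values non-interactively, a sed sketch like the following should work against the stock etcd.conf shipped by the CentOS package (an assumption to verify; back the file up first):

cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
# Bind the client port on all interfaces
sed -i 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"|' /etc/etcd/etcd.conf
# Advertise the master's address to clients
sed -i 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.200:2379"|' /etc/etcd/etcd.conf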

2.3 Start the service and verify its status

[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# systemctl enable etcd
[root@k8s-master ~]# etcdctl set testdir/testkey0 0
0
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.1.200:2379
cluster is healthy
[root@k8s-master ~]# 
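
Reading the test key back confirms that writes persist; note that the health check above reaches etcd through the etcd alias added to /etc/hosts in section 1.3.3:

[root@k8s-master ~]# etcdctl get testdir/testkey0
0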

Chapter 3 Deploy the master
3.1 Install Docker

yum install docker -y

3.2 Enable the service at boot and start it

[root@k8s-master ~]# chkconfig docker on
[root@k8s-master ~]# service docker start
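
Optionally confirm the Docker daemon is up before continuing:

[root@k8s-master ~]# systemctl is-active docker
active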

3.3 Install Kubernetes

yum install kubernetes -y

3.4 Configure and start Kubernetes

[root@k8s-master ~]# egrep -v "^$|^#" /etc/kubernetes/apiserver 
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.200:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
###################################################################################################
[root@k8s-master ~]# egrep -v "^$|^#" /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s-master:8080"
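
Note that /etc/kubernetes/config holds settings shared by every Kubernetes component on this host, while /etc/kubernetes/apiserver applies only to kube-apiserver. Also be aware that --insecure-bind-address=0.0.0.0 with --port=8080 exposes an unauthenticated API; that is acceptable on an isolated lab network but not in production. The systemd units load these files via EnvironmentFile=, so once the services are running (next step) you can confirm the flags were picked up:

ps -ef | grep [k]ube-apiserver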

3.4.1 Start the services and enable them at boot (reboot the OS first)

[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
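
With all three control-plane services up, a quick sanity check is to ask the API server for component health. On the master, kubectl defaults to http://localhost:8080 with these packages; the output below is indicative for the kubernetes 1.5.x yum packages and may differ slightly on your version:

[root@k8s-master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master ~]# curl -s http://k8s-master:8080/version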

Chapter 4 Deploy the node
4.1 Install Docker

[root@k8s-node-1 ~]# yum install docker -y

4.2 Enable the service at boot and start it

[root@k8s-node-1 ~]# chkconfig docker on
[root@k8s-node-1 ~]# service docker start

4.3 Install Kubernetes

[root@k8s-node-1 ~]# yum install kubernetes -y

4.3.1 Configure and start Kubernetes

[root@k8s-node-1 ~]# egrep -v "^$|^#" /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.1.200:8080"
###################################################################################################
[root@k8s-node-1 ~]# egrep -v "^$|^#" /etc/kubernetes/kubelet 
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.1.201"
KUBELET_API_SERVER="--api-servers=http://192.168.1.200:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--cluster-dns=192.168.1.200 --cluster-domain=cluster.local"
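
The pod-infra-container image is pulled for every pod the kubelet starts; if the node cannot reach registry.access.redhat.com, the first pod will hang in ContainerCreating. Pre-pulling it surfaces any registry or certificate problem early (on some minimal installs the python-rhsm-certificates package is also reported to be needed for this registry; treat that as an assumption to verify):

[root@k8s-node-1 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest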

4.3.2 Start the services and enable them at boot

[root@k8s-node-1 ~]# systemctl enable kubelet.service
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl enable kube-proxy.service
[root@k8s-node-1 ~]# systemctl start kube-proxy.service
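
Confirm both services came up cleanly before checking from the master:

[root@k8s-node-1 ~]# systemctl is-active kubelet kube-proxy
active
active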

4.4 Check node status

[root@k8s-master ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.1.201   Ready     16h
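
If the node shows NotReady or is missing entirely, the kubelet log on the node is the first place to look:

[root@k8s-node-1 ~]# journalctl -u kubelet --no-pager -n 50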

Chapter 5 Create the overlay network with Flannel
5.1 Install Flannel
5.1.1 Run the following on both the master and the node to install it:

yum install flannel -y

5.1.2 Configure Flannel
5.1.2.1 Edit /etc/sysconfig/flanneld on both the master and the node

Master configuration:
[root@k8s-master ~]# egrep -v "^$|^#" /etc/sysconfig/flanneld 
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.200:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
Node configuration:
[root@k8s-node-1 ~]# egrep -v "^$|^#" /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.200:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"

5.2 Configure the Flannel key in etcd

[root@k8s-master ~]# etcdctl set /k8s/network/config '{"Network": "172.20.0.0/16"}'
{"Network": "172.20.0.0/16"}
[root@k8s-master ~]# etcdctl get /k8s/network/config
{"Network": "172.20.0.0/16"}
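
flanneld reads its network configuration from ${FLANNEL_ETCD_PREFIX}/config, so the key path written here must match the FLANNEL_ETCD_PREFIX set in section 5.1.2.1. Listing the prefix is a quick way to double-check:

[root@k8s-master ~]# etcdctl ls /k8s/network
/k8s/network/config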

5.3 Start the services
5.3.1 After starting Flannel, restart Docker and then the Kubernetes services in order, so that Docker picks up the Flannel-assigned subnet.
5.3.1.1 On the master, run:

[root@k8s-master ~]# systemctl enable flanneld.service 
[root@k8s-master ~]# systemctl start flanneld.service 
[root@k8s-master ~]# service docker restart
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service

5.3.1.2 On the node, run:

[root@k8s-node-1 ~]# systemctl enable flanneld.service 
[root@k8s-node-1 ~]# systemctl start flanneld.service 
[root@k8s-node-1 ~]# service docker restart
[root@k8s-node-1 ~]# systemctl restart kubelet.service
[root@k8s-node-1 ~]# systemctl restart kube-proxy.service
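
To verify the overlay after the restarts: flanneld writes its leased subnet to /run/flannel/subnet.env and registers each node under the etcd prefix, and docker0 should now hold an address inside 172.20.0.0/16. Interface names and exact output vary by flannel version and backend (the default udp backend creates flannel0), so treat the commands below as a sketch:

[root@k8s-node-1 ~]# cat /run/flannel/subnet.env
[root@k8s-node-1 ~]# ip addr show flannel0
[root@k8s-node-1 ~]# ip addr show docker0
[root@k8s-master ~]# etcdctl ls /k8s/network/subnets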