I. Deploy a local yum repository
II. Prerequisites
1. Configure the hostname and hosts file (every node)
(1) hostnamectl set-hostname xxx
(2) Edit /etc/hosts on each node and fill in each node's IP entry
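The hosts edits above can be scripted; a minimal sketch, assuming the node names and IPs used later in the etcd section (it writes to ./hosts here for illustration; on a real node set HOSTS_FILE=/etc/hosts and run as root):

```shell
#!/bin/sh
# Append one entry per node to the hosts file.
# NOTE: node names (node1..node3) are placeholders, not from the original text.
HOSTS_FILE=./hosts
cat >> "$HOSTS_FILE" <<'EOF'
10.123.0.4 node1
10.123.0.5 node2
10.123.0.6 node3
EOF
```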
2. Configure the yum repository (every node)
Edit /etc/yum.repos.d/base.repo
[base]
name=base
baseurl=http://xxx
gpgcheck=0
enabled=1
Run yum update -y to verify that the repository works
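The repo stanza above can be written with a heredoc; a sketch that writes to ./base.repo for illustration (the real path is /etc/yum.repos.d/base.repo, and baseurl keeps the placeholder from the text):

```shell
#!/bin/sh
# Write the repo definition; on a real node use REPO_FILE=/etc/yum.repos.d/base.repo.
REPO_FILE=./base.repo
cat > "$REPO_FILE" <<'EOF'
[base]
name=base
baseurl=http://xxx
gpgcheck=0
enabled=1
EOF
```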
3. Install and verify Docker (every node)
yum localinstall docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm -y && yum localinstall docker-engine-1.12.6-1.el7.centos.x86_64.rpm -y
Verify: systemctl start docker; docker --version
4. Set up the etcd cluster
1. On every node, stop and disable firewalld, SELinux, and NetworkManager
systemctl stop firewalld && systemctl disable firewalld
setenforce 0; systemctl stop NetworkManager && systemctl disable NetworkManager
2. Start the etcd cluster (the number of members must be odd)
nohup ./etcd --name infra0 --initial-advertise-peer-urls http://10.123.0.4:2380 \
--listen-peer-urls http://10.123.0.4:2380 \
--listen-client-urls http://10.123.0.4:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.123.0.4:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.123.0.4:2380,infra1=http://10.123.0.5:2380,infra2=http://10.123.0.6:2380 \
--initial-cluster-state new &
nohup ./etcd --name infra1 --initial-advertise-peer-urls http://10.123.0.5:2380 \
--listen-peer-urls http://10.123.0.5:2380 \
--listen-client-urls http://10.123.0.5:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.123.0.5:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.123.0.4:2380,infra1=http://10.123.0.5:2380,infra2=http://10.123.0.6:2380 \
--initial-cluster-state new &
nohup ./etcd --name infra2 --initial-advertise-peer-urls http://10.123.0.6:2380 \
--listen-peer-urls http://10.123.0.6:2380 \
--listen-client-urls http://10.123.0.6:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.123.0.6:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.123.0.4:2380,infra1=http://10.123.0.5:2380,infra2=http://10.123.0.6:2380 \
--initial-cluster-state new &
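The three start commands above differ only in the member name and IP. A sketch that generates each node's command from the member list (names and IPs taken from the text; the commands are printed rather than executed so they can be reviewed first):

```shell
#!/bin/sh
# Build the etcd start command for each member; every member shares the
# same --initial-cluster string.
CLUSTER="infra0=http://10.123.0.4:2380,infra1=http://10.123.0.5:2380,infra2=http://10.123.0.6:2380"
for member in "infra0 10.123.0.4" "infra1 10.123.0.5" "infra2 10.123.0.6"; do
  set -- $member            # word-split into: $1 = name, $2 = ip
  name=$1; ip=$2
  echo "nohup ./etcd --name $name" \
       "--initial-advertise-peer-urls http://$ip:2380" \
       "--listen-peer-urls http://$ip:2380" \
       "--listen-client-urls http://$ip:2379,http://127.0.0.1:2379" \
       "--advertise-client-urls http://$ip:2379" \
       "--initial-cluster-token etcd-cluster-1" \
       "--initial-cluster $CLUSTER" \
       "--initial-cluster-state new &"
done > etcd-commands.txt
cat etcd-commands.txt
```

Each generated line is the command to run on the corresponding node.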
5. Set up the K8S cluster
1. Install the K8S cluster tools (every node)
yum install socat -y
rpm -ivh *.rpm
2. Import the K8S images (every node)
docker load -i dnsmasq.tar && docker load -i dns-nanny.tar && docker load -i dns.tar && docker load -i k8s.tar && docker load -i pause.tar && docker load -i sidecar.tar && docker load -i cni.tar && docker load -i node.tar && docker load -i policy.tar
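The long chain of docker load commands can be replaced by a loop over the tarballs; a sketch (tarball names from the text; here the loop only prints the commands so it can run without Docker — drop the echo on a real node):

```shell
#!/bin/sh
# Emit one `docker load` command per saved image tarball.
for tarball in dnsmasq.tar dns-nanny.tar dns.tar k8s.tar pause.tar \
               sidecar.tar cni.tar node.tar policy.tar; do
  echo "docker load -i $tarball"
done > load-commands.txt
cat load-commands.txt
```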
3. Change the K8S cgroup driver
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (every node)
Change systemd to cgroupfs (every node)
4. Reload after the change (every node)
systemctl daemon-reload
systemctl restart kubelet
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
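The two echo commands above do not survive a reboot; a hedged sketch of persisting them (a drop-in file under /etc/sysctl.d/ is one common convention; the filename k8s.conf is an assumption):

```
# /etc/sysctl.d/k8s.conf -- apply with `sysctl --system`
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```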
5. Edit the kubeadm.config file (note: the version is 1.7.0) on the master node, then initialize the master
kubeadm init --config kubeadm.config
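The text does not show kubeadm.config itself; a minimal sketch in the kubeadm v1.7-era config format (the etcd endpoints and pod subnet below are assumptions and must match your environment — the endpoints here reuse the etcd IPs from earlier in the text):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.7.0
etcd:
  endpoints:
  - http://10.123.0.4:2379
  - http://10.123.0.5:2379
  - http://10.123.0.6:2379
networking:
  podSubnet: 192.168.0.0/16   # assumed; must match the calico network
```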
6. On the master node, configure the kubeconfig file so that worker nodes can join the cluster smoothly
vim /etc/profile
#################
export KUBECONFIG=$HOME/admin.conf
#################
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
source /etc/profile
export KUBECONFIG=$HOME/admin.conf
7. Join the worker nodes: copy the master's files to each worker, then delete the worker's manifests directory
scp -r /etc/kubernetes/* root@10.123.0.9:/etc/kubernetes/
rm -rf /etc/kubernetes/manifests
8. Create the cluster network
vim calico.yaml
###############
etcd_endpoints: "http://10.122.0.9:2379,http://10.122.0.10:2379,http://10.122.0.11:2379"
###############
kubectl create -f calico.yaml
6. Verify the K8S cluster with busybox
1. Create the busybox test pod
kubectl create -f busybox.yaml
2. Run nslookup to check that names in the default namespace resolve
kubectl exec -it busybox -- nslookup kubernetes.default
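A minimal busybox.yaml consistent with the commands above (the image tag is an assumption; use whatever tag was loaded locally from the image tarballs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox            # assumed tag; match the locally loaded image
    command: ["sleep", "3600"]
  restartPolicy: Always
```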
III. Deploy the monitoring services
1. Deploy Heapster
cd heapster/ && kubectl create -f heapster.yaml && kubectl create -f influxdb.yaml && kubectl create -f heapster-rbac.yaml
2. Deploy Prometheus
Create the monitoring namespace
kubectl create namespace monitoring
Create the cluster authentication certificate secret
kubectl --namespace=monitoring create secret generic --from-literal=ca.pem=123 --from-literal=client-key.pem=123 etcd-tls-client-certs
Create the cluster service account
kubectl -n monitoring create serviceaccount prometheus
Create the cluster role binding
kubectl -n monitoring create clusterrolebinding prometheus --clusterrole cluster-admin --serviceaccount=monitoring:prometheus
After creation, run ./deploy.sh and check the status of the monitoring namespace. The images are imported locally, so if pods cannot be created for a long time, run:
kubectl get deployments -n monitoring
kubectl edit deploy servicename -n monitoring
and change imagePullPolicy to IfNotPresent
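In the deployment spec opened by kubectl edit, imagePullPolicy sits under each container; a sketch of the relevant fragment (the names below are placeholders, matching the servicename placeholder in the command above):

```yaml
spec:
  template:
    spec:
      containers:
      - name: servicename          # placeholder
        image: servicename:latest  # placeholder; the locally imported image
        imagePullPolicy: IfNotPresent
```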
3. Deploy the garden-monitoring middleware
Get the current port mappings of the services
kubectl get svc -n monitoring
Set up port mappings (for heapster, prometheus, grafana, and any other service whose port must be exposed), substituting the service name and namespace as appropriate:
kubectl -n kube-system patch service tiller-deploy -p '{"spec": {"type": "NodePort"}}'
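The same patch applies to each monitoring service that needs exposing; a sketch that prints one patch command per service (service names assumed from the list above; printed rather than executed so no cluster is required):

```shell
#!/bin/sh
# Emit a NodePort patch command per service in the monitoring namespace.
for svc in heapster prometheus grafana; do
  echo "kubectl -n monitoring patch service $svc -p '{\"spec\": {\"type\": \"NodePort\"}}'"
done > patch-commands.txt
cat patch-commands.txt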
Edit the garden-monitoring.conf file