Kubernetes: process-based deployment

This post walks through building a Kubernetes cluster on three servers: disabling the firewall and SELinux, installing the EPEL repository, configuring hosts, deploying etcd, docker, and the kubernetes components on the master, deploying docker and the kubernetes components on the nodes, configuring the flannel network plugin, and setting up and managing a local image registry, including pushing and verifying images.

Environment

192.168.102.53 k8s-master etcd registry
192.168.102.54 k8s-node1
192.168.102.55 k8s-node2
Disable the firewall and SELinux on all machines

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
getenforce
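The substitution can be sanity-checked on a scratch copy first; the /tmp path and file contents below are only an illustration, not part of the deployment:

```shell
# Illustration only: apply the same substitution to a scratch copy
# of an SELinux config and confirm the result.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.conf
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux.conf
grep '^SELINUX=' /tmp/selinux.conf
```

The anchored `^SELINUX=` pattern is what keeps `SELINUXTYPE=` untouched.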

Install the epel-release repository on all machines

yum -y install epel-release

Add hosts entries on all machines

vim /etc/hosts
192.168.102.53 k8s-master
192.168.102.53 etcd
192.168.102.54 k8s-node1
192.168.102.55 k8s-node2

Deploy the master

Install etcd:

[root@k8s-master ~]# yum -y install etcd
[root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="master"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"

Start and verify

[root@k8s-master ~]# systemctl start etcd 
[root@k8s-master ~]# systemctl enable etcd
[root@k8s-master ~]# systemctl status etcd

Test that etcd stores and serves data correctly

[root@k8s-master ~]# etcdctl set testdir/testkey0 0 // write a test key
[root@k8s-master ~]# etcdctl get testdir/testkey0 // read the value back
[root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health // check cluster health
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health

Install docker

[root@k8s-master ~]# yum -y install docker

Edit the docker configuration so it is allowed to pull images from the local registry

[root@k8s-master ~]# cp /etc/sysconfig/docker /etc/sysconfig/docker.bak
[root@k8s-master ~]# vim /etc/sysconfig/docker
OPTIONS="--insecure-registry registry:5000"
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker

Install kubernetes

[root@k8s-master ~]# yum -y install kubernetes

Configure and start kubernetes. The master needs to run the following components: Kubernetes API server, Kubernetes controller manager, and Kubernetes scheduler.

[root@k8s-master ~]# cp /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" // listen on all local interfaces
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota" // admission controllers
[root@k8s-master ~]# cp /etc/kubernetes/config /etc/kubernetes/config.bak
[root@k8s-master ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"

Start the services and enable them at boot

[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service

Deploy the nodes

Install docker

Install, configure, and start docker (identical on both nodes)

yum -y install docker
cp /etc/sysconfig/docker /etc/sysconfig/docker.bak
vim /etc/sysconfig/docker
OPTIONS="--insecure-registry registry:5000"
systemctl enable docker
systemctl start docker

Install kubernetes

Install, configure, and start kubernetes (identical on both nodes)

yum -y install kubernetes

Each kubernetes node needs to run the following components: kubelet and kube-proxy.

cp /etc/kubernetes/config /etc/kubernetes/config.bak
vim /etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"
cp /etc/kubernetes/kubelet /etc/kubernetes/kubelet.bak
vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=<node hostname>"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
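For example, on the first node the hostname-override line would read (hostname taken from the hosts table above; use k8s-node2 on the second node):

```
KUBELET_HOSTNAME="--hostname-override=k8s-node1"
```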

Start the services and enable them at boot

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service

Check status: on the master, list the cluster's nodes and their state

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
[root@k8s-master ~]# kubectl get nodes

Install flannel

Run the following command on the master and on every node to install it

yum -y install flannel

Configure flannel, on the master and on every node

cp /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

Configure the flannel key in etcd. flannel reads its configuration from etcd, which keeps multiple flannel instances consistent with one another, so the administrator writes the network configuration flannel should use into etcd:

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.17.0.0/16" }'
[root@k8s-master ~]# etcdctl update /atomic.io/network/config '{ "Network": "172.17.0.0/16" }' // use when troubleshooting
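The value stored under /atomic.io/network/config is plain JSON, so a quick local well-formedness check is possible before writing it to etcd; python3's json.tool is used here only as a convenient validator:

```shell
# json.tool exits non-zero on malformed JSON, so piping the config
# through it doubles as a syntax check before etcdctl mk.
CONFIG='{ "Network": "172.17.0.0/16" }'
echo "$CONFIG" | python3 -m json.tool
```

A common failure mode is copy-pasting curly "smart quotes" from a web page, which this check catches immediately.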

When flannel starts on a node, it fetches the network configuration from etcd, allocates a subnet for that node (also stored back in etcd), and writes the file /run/flannel/subnet.env:

cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16 // the global flannel network
FLANNEL_SUBNET=172.17.78.1/24 // this node's flannel subnet
FLANNEL_MTU=1400 // this node's flannel MTU
FLANNEL_IPMASQ=false
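subnet.env is a plain shell-variable file, which is why docker's systemd drop-ins can consume it directly; a sketch using a sample copy under /tmp (values copied from the listing above, path chosen only for illustration):

```shell
# Recreate a sample subnet.env and source it as shell variables.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.78.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=false
EOF
. /tmp/subnet.env
echo "network=$FLANNEL_NETWORK subnet=$FLANNEL_SUBNET mtu=$FLANNEL_MTU"
```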

Start flannel, then restart the dependent components in order.
On the master:

[root@k8s-master ~]# systemctl enable flanneld.service
[root@k8s-master ~]# systemctl start flanneld.service 
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service

On the nodes:

systemctl enable flanneld.service
systemctl start flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service

Deploy a local image registry

A dedicated host could also be used; here the docker registry is deployed on the master.
Pull the registry image:

[root@k8s-master ~]# docker pull docker.io/registry
[root@k8s-master ~]# docker images

Start the registry

[root@k8s-master ~]# docker run -d -p 5000:5000 --name=registry --restart=always --privileged=true --log-driver=none -v /home/data/registrydata:/tmp/registry registry

Here /home/data/registrydata should sit on a reasonably large partition; all registry data will be stored in this bind-mounted directory.
Retag the images and push them

[root@k8s-master ~]# docker pull nginx
[root@k8s-master ~]# docker pull centos
[root@k8s-master ~]# docker tag docker.io/nginx:latest registry:5000/nginx:v1
[root@k8s-master ~]# docker tag docker.io/centos:latest registry:5000/centos:v1
[root@k8s-master ~]# docker push registry:5000/nginx:v1
[root@k8s-master ~]# docker push registry:5000/centos:v1
[root@k8s-master ~]# curl -XGET http://192.168.102.53:5000/v2/_catalog // list images in the registry
[root@k8s-master ~]# curl -XGET http://192.168.102.53:5000/v2/centos/tags/list // list tags for an image
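The retag-and-push pattern above generalizes to any list of images; a dry-run sketch that only prints the commands it would execute (registry host and port as configured earlier, image list just an example):

```shell
# Dry run: echo the tag/push commands instead of executing them,
# so the naming scheme can be inspected before touching the registry.
REG=registry:5000
for img in nginx centos; do
  echo "docker tag docker.io/$img:latest $REG/$img:v1"
  echo "docker push $REG/$img:v1"
done
```

Dropping the echo turns it into the real retag-and-push loop.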