Work has been running me ragged lately and I've had no time to post; this article took quite a few evenings to finish. I'll keep pushing on the series!
[Series table of contents: Building a Personal Private Cloud with the K8S Stack]
- Building a Personal Private Cloud with the K8S Stack (Part: Prologue)
- Building a Personal Private Cloud with the K8S Stack (Part: K8S Cluster Setup)
- Building a Personal Private Cloud with the K8S Stack (Part: Understanding and Practicing in the K8S Environment)
- Building a Personal Private Cloud with the K8S Stack (Part: Base Image Creation and Experiments)
- Building a Personal Private Cloud with the K8S Stack (Part: Resource Control Research)
- Building a Personal Private Cloud with the K8S Stack (Part: Building the Private Cloud Client)
Environment Overview
To play with a cluster, you naturally need a few machines to act as nodes! Unfortunately I don't own any spare machine with real horsepower, so I dug through the house and turned up a few battered old laptops. Better to take them out for a spin than leave them propping up a table leg...
The overall environment is arranged as follows:
Each part in brief:
Master node (a Hedy laptop bought in 2008, CentOS 7.3 64-bit)
- docker
- etcd
- flannel
- kube-apiserver
- kube-scheduler
- kube-controller-manager
Slave node (a second-hand ThinkPad T420s, CentOS 7.3 64-bit)
- docker
- flannel
- kubelet
- kube-proxy
Client node (a 2012 Sony Vaio SVS13, Win7 Ultimate)
- As the client it's the boss, after all; it needs nothing installed beyond an SSH client that can reach the master and slave nodes.
Docker image registry
- A company would normally run its own docker registry as the image repository. I'll simply use Docker Hub instead of self-hosting one (mainly because I have no spare machine!)
Wireless router (a Xiaomi Mi Router 3)
- It had better punch through walls, because the router sits in the living room while my experiments happen in the bedroom!
Everything is interconnected over WiFi; I'm not fond of a tangle of cables winding everywhere.
Environment Preparation
- First, set the hostnames of the master node and all slave nodes
On the master, run:
hostnamectl --static set-hostname k8s-master
On the slave, run:
hostnamectl --static set-hostname k8s-node-1
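The new name may not appear in your current shell prompt until you log in again; to confirm it took effect right away (hostnamectl is already present on CentOS 7):
hostnamectl status    # the "Static hostname" line should show the new name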
- Modify the hosts file on master and slave
Add the following entries to /etc/hosts on both the master and the slave:
192.168.31.166 k8s-master
192.168.31.166 etcd
192.168.31.166 registry
192.168.31.199 k8s-node-1
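A quick sanity check that the names resolve; the IPs above are from my home LAN, so substitute your own:
ping -c 1 k8s-master
ping -c 1 etcd
ping -c 1 k8s-node-1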
- Disable the firewall on master and slave
systemctl disable firewalld.service
systemctl stop firewalld.service
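To be sure the firewall is really down before continuing:
firewall-cmd --state    # should print "not running"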
Deploying the Master Node
The master node needs the following components installed:
- etcd
- flannel
- docker
- kubernetes
Each is covered in order below.
1. Installing etcd
- Install command:
yum install etcd -y
- Edit etcd's default configuration file /etc/etcd/etcd.conf:
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#ETCD_ENABLE_V2="true"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[auth]
#ETCD_AUTH_TOKEN="simple"
- Start etcd and verify it
First, start the etcd service:
systemctl start etcd    # start the etcd service
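Only flannel and the k8s components get enabled at boot later in this post; if you want etcd to survive a reboot as well, enable it the same way:
systemctl enable etcd.service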
Then check etcd's cluster health:
etcdctl -C http://etcd:2379 cluster-health
etcdctl -C http://etcd:4001 cluster-health
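For a single-member cluster like this one, a successful check prints one "member ... is healthy" line followed by "cluster is healthy"; anything else usually means etcd isn't reachable via the "etcd" alias set up in /etc/hosts.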
2. Installing flannel
- Install command:
yum install flannel
- Configure flannel by editing /etc/sysconfig/flanneld:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
- Configure the flannel key in etcd
etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
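You can read the key back to make sure it was stored; flannel will carve each node's subnet out of this 10.0.0.0/16 range:
etcdctl get /atomic.io/network/config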
- Start flannel and enable it at boot
systemctl start flanneld.service
systemctl enable flanneld.service
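Once flanneld is running it records its assigned subnet in /run/flannel/subnet.env. On this machine docker is installed after flannel (next step), so the ordering works out by itself; if docker had already been running, you would restart it here so containers land on the flannel network:
cat /run/flannel/subnet.env    # shows FLANNEL_SUBNET, FLANNEL_MTU, etc.
systemctl restart docker       # only needed if docker was already running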
3. Installing docker
The web is full of tutorials for this part; the main steps are:
- Install command:
yum install docker -y
- Start the docker service:
service docker start
- Enable docker at boot:
chkconfig docker on
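On CentOS 7, chkconfig is just a compatibility wrapper around systemd, so the native equivalent works the same; either way, docker version is a quick check that the daemon is answering:
systemctl enable docker.service
docker version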
4. Installing kubernetes
Installing k8s itself is simple; just run:
yum install kubernetes
But k8s takes quite a bit of configuring. As mentioned in the "Environment Overview" section above, the master needs to run the following components:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
Each piece of configuration is detailed below:
- Configure the /etc/kubernetes/apiserver file
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
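One note on the admission-control line: the commented-out stock default also includes ServiceAccount, but that plugin requires the apiserver and controller-manager to be configured with service-account signing keys. Dropping it, as done above, is the usual shortcut for a hobby cluster like this.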
- Configure the /etc/kubernetes/config file
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
- Start the k8s components
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
- Enable the k8s components at boot
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service
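With the three services up, the apiserver can already report on the health of the master components (kubectl comes with the kubernetes package installed earlier):
kubectl get componentstatuses    # scheduler, controller-manager and etcd-0 should all be Healthy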
Deploying the Slave Node
The slave node needs the following components installed:
- flannel
- docker
- kubernetes
Each is covered in order below:
1. Installing flannel
- Install command:
yum install flannel
- Configure flannel by editing /etc/sysconfig/flanneld:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
- Start flannel and enable it at boot
systemctl start flanneld.service
systemctl enable flanneld.service
2. Installing docker
See the docker deployment steps for the master node above; the process is identical.
3. Installing kubernetes
Install command: yum install kubernetes
Unlike the master, the slave node needs to run the following kubernetes components:
- kubelet
- kube-proxy
The configuration is detailed below:
- Configure the /etc/kubernetes/config file
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
- Configure the /etc/kubernetes/kubelet file
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
- Start the kube services
systemctl start kubelet.service
systemctl start kube-proxy.service
- Enable the k8s components at boot
systemctl enable kubelet.service
systemctl enable kube-proxy.service
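If either service fails to come up, the systemd journal on the slave is the first place to look:
journalctl -u kubelet.service -u kube-proxy.service --no-pager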
That completes the k8s cluster setup. Now let's verify that the cluster actually came up.
Verifying Cluster Status
- View endpoint information:
kubectl get endpoints
- View cluster information:
kubectl cluster-info
- Get the status of the nodes in the cluster:
kubectl get nodes
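If everything went well, the slave has registered itself with the master and reports Ready. With the k8s version that yum installs, the output looks roughly like this (the AGE column will of course differ):
NAME         STATUS    AGE
k8s-node-1   Ready     3m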
OK, the node is Ready; time to run some experiments on it!
Postscript
More of the author's hands-on Spring Boot articles:
- Spring Boot Application Monitoring in Practice
- Deploying a Spring Boot Application in an External Tomcat Container
- The ElasticSearch Search Engine in Practice with Spring Boot
- A First Look at Kotlin + Spring Boot Programming
- Spring Boot Logging Frameworks in Practice
- Elegant Spring Boot Coding: Powered by Lombok
If you're interested, take some time to browse the author's articles on containerization and microservices as well:
- Building a Personal Private Cloud with the K8S Stack (this series)
- A Detailed Walkthrough of Nginx Server Configuration from a Single Config File
- Building a Visual Monitoring Center for Docker Containers
- Building a Log Center for Dockerized Applications with ELK
- RPC Frameworks in Practice: Apache Thrift
- RPC Frameworks in Practice: Google gRPC
- Building a Microservice Call-Chain Tracing Center
- Cross-Host Communication Between Docker Containers
- A First Look at Docker Swarm Clusters
- Guidelines for Writing Efficient Dockerfiles