An Illustrated Guide to Deploying a Minimal Kubernetes Cluster on VMware
1. Kubernetes Overview
What is Kubernetes (k8s)?
From the official site:
Kubernetes, also known as K8s, is an open-source system for automating
deployment, scaling, and management of containerized applications. It
groups containers that make up an application into logical units for
easy management and discovery. Kubernetes builds upon 15 years of
experience of running production workloads at Google, combined with
best-of-breed ideas and practices from the community.
See the official site for details: https://kubernetes.io/
Recommended reading: Kubernetes: The Definitive Guide (《Kubernetes权威指南》)
E-book on Baidu Netdisk: https://pan.baidu.com/s/15nMW2pDGSw0iDfbUu6bUlQ (extraction code: 1fn9)
2. The k8s Core Architecture
2.1 Overall component architecture
To use k8s well, you must understand the role of each core component. Because resources are limited, this article focuses on building a minimal k8s cluster on VMware and only briefly describes some of the components; unlike an enterprise deployment, it has no high-availability architecture.
3. Core Kubernetes Components and Terminology
3.1 Master
The Master is the control node of a Kubernetes cluster. Every Kubernetes cluster needs a Master to manage and control the whole cluster; essentially all Kubernetes control commands are sent to it. The Master is the brain of Kubernetes.
The following key service processes run on the Master:
Kubernetes API Server (kube-apiserver)
(1) kube-apiserver exposes the HTTP REST interface. It is the only entry point for create, read, update, and delete operations on all Kubernetes resources, and the entry process for cluster control.
Kubernetes Scheduler (kube-scheduler)
(2) kube-scheduler is responsible for resource scheduling: it places Pods onto suitable machines according to the configured scheduling policy.
Kubernetes Controller Manager (kube-controller-manager)
(3) kube-controller-manager maintains cluster state, handling tasks such as failure detection, automatic scaling, and rolling updates.
etcd
(4) The etcd service is usually also deployed on the Master, because the data of all Kubernetes resource objects is stored in etcd.
3.2 Node
A Node is a Kubernetes worker (compute) node; it can be a physical host or a virtual machine. The services running on a Node typically include kubelet, kube-proxy, the Docker Engine, and a Pod network (NSX-T, flannel, etc.).
kubelet
(1) kubelet is responsible for creating, starting, and stopping the containers of each Pod, and works closely with the Master to implement the basic cluster-management functions.
kube-proxy
(2) kube-proxy is the key component that implements the communication and load-balancing mechanism behind Kubernetes Services.
Docker Engine
(3) Docker Engine (docker) is responsible for creating and managing containers on the local machine.
4. Preparing the VMware Virtualization Environment for k8s
4.1 Initial VM configuration
The VM specs and IPs are up to you; they only need access to the public network. I use NAT mode for the NIC, on the same subnet as the local PC; bridged mode also works.
If the NIC uses bridged mode, change BOOTPROTO=dhcp to BOOTPROTO=static in the config shown below,
and fill in IPADDR, NETMASK, and GATEWAY yourself (same gateway as the local PC, with public-network access).
Configure the network by editing: vi /etc/sysconfig/network-scripts/ifcfg-ens32
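A minimal static configuration might look like the sketch below. IPADDR here matches the master IP used later in this guide; GATEWAY and DNS1 are placeholder assumptions to replace with values that match your own network:

```ini
TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
; placeholder addresses -- substitute your own subnet, gateway, and DNS
IPADDR=192.168.136.130
NETMASK=255.255.255.0
GATEWAY=192.168.136.2
DNS1=114.114.114.114
```

After editing, restart networking with systemctl restart network for the change to take effect.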
Steps to run on all three VMs
(1) Disable SELinux
Run setenforce 0, then edit the config file:
vi /etc/selinux/config and set the SELINUX parameter to disabled, as shown in the figure.
(2) Disable the firewall
systemctl stop firewalld
iptables -F
systemctl disable firewalld
(3) Configure name resolution
Goal: each node can be pinged by hostname.
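For example, entries like the following could be added to /etc/hosts on every VM. Only the master IP 192.168.136.130 appears later in this guide; the hostnames and the two node IPs below are assumptions:

```text
192.168.136.130 master
192.168.136.131 node1
192.168.136.132 node2
```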
(4) Disable the swap partition
swapoff -a
sed -ri 's@(.*swap.*)@#\1@g' /etc/fstab
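The sed rule comments out every fstab line that mentions swap, so the setting survives reboots. Here is the same rule applied to a throwaway sample file (the sample content is illustrative, not your actual fstab):

```shell
# Build a sample fstab in /tmp so the real one is untouched
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Comment out any line containing "swap"; the \1 back-reference keeps the line text
sed -ri 's@(.*swap.*)@#\1@g' /tmp/fstab.sample

grep swap /tmp/fstab.sample
# → #/dev/mapper/centos-swap swap swap defaults 0 0
```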
(5) Configure the Aliyun yum repository
rm -rfv /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
(6) Configure time synchronization
On the master node:
yum install -y chrony    # install chrony
Edit /etc/chrony.conf: comment out the default NTP servers, make this host the upstream time source, and allow the other nodes to sync from it:
sed -i 's/^server/#&/' /etc/chrony.conf
cat >> /etc/chrony.conf << EOF
local stratum 10
server master iburst
allow all
EOF
systemctl enable chronyd && systemctl restart chronyd    # restart the chronyd service
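In the sed expression above, & stands for the matched text, so each matching line simply gains a leading #. A quick demonstration on a sample file (sample content assumed):

```shell
# Sample chrony.conf with two default server lines
cat > /tmp/chrony.conf.sample <<'EOF'
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
EOF

# "&" in the replacement reinserts the matched "server", prefixed with "#"
sed -i 's/^server/#&/' /tmp/chrony.conf.sample

head -n 1 /tmp/chrony.conf.sample
# → #server 0.centos.pool.ntp.org iburst
```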
On each node, run:
echo "server 192.168.136.130 iburst" >> /etc/chrony.conf
systemctl enable chronyd && systemctl restart chronyd
192.168.136.130 is the master node's IP address.
Run chronyc sources on every node; a "^*" in the output indicates that time synchronization succeeded.
(7) Configure kernel parameters for bridged traffic and routing (note: these settings only take effect when the br_netfilter module is loaded, e.g. via modprobe br_netfilter)
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
(8) Install common packages
Install some necessary system tools:
yum install vim bash-completion net-tools gcc -y
yum install -y yum-utils device-mapper-persistent-data lvm2
yum makecache fast
5. Installing and Deploying Kubernetes
These steps must be run on the master node and on every node; the screenshots only show the master as an example.
5.1 Install docker-ce from the Aliyun repository
First add the Aliyun docker-ce repository (e.g. yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo), then list the available Docker-CE versions:
yum list docker-ce.x86_64 --showduplicates | sort -r
For example, to install version 18.09.7:
#yum install -y --setopt=obsoletes=0 docker-ce-18.09.7-3.el7
#mkdir -p /data/docker /etc/docker
#cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data/docker",
  "registry-mirrors": ["https://um3mpznp.mirror.aliyuncs.com"],
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
#systemctl daemon-reload
#systemctl restart docker
#systemctl enable docker
After the dependencies install successfully, the tail of the output looks like the figure below:
5.2 Add the Aliyun Docker registry mirror
Note: the tee command below overwrites the daemon.json written in 5.1; if you already set registry-mirrors there, skip this step or merge the settings instead.
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
Reload the systemd unit files and restart the Docker service:
systemctl daemon-reload    # reload systemd unit files
systemctl restart docker   # restart the docker service
systemctl status docker    # check that docker started successfully
systemctl enable docker    # start docker automatically at boot
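Docker refuses to start if daemon.json is malformed, so it is worth validating the JSON before restarting. A quick check, run here against a copy in /tmp with a minimal assumed config:

```shell
# Write a copy of the config to /tmp and validate it with Python's json.tool
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF

# json.tool exits non-zero on a syntax error, so "OK" prints only for valid JSON
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
# → daemon.json OK
```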
5.3 Install kubectl, kubelet, and kubeadm; add the Aliyun Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
List the available kubeadm, kubectl, and kubelet versions:
#yum list kubeadm kubectl kubelet --showduplicates | sort -r
Install version 1.18.0 (for another version, just change the version number):
#yum install kubeadm-1.18.0 kubectl-1.18.0 kubelet-1.18.0
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false
Then run:
systemctl restart kubelet   # restart the kubelet service
systemctl enable kubelet    # start kubelet automatically at boot
systemctl status kubelet    # check the kubelet service status
5.4 Initialize the k8s cluster
Run:
kubeadm init --apiserver-advertise-address=192.168.136.130 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
to initialize the k8s cluster.
Note: 192.168.136.130 is the master node's IP.
The version for --kubernetes-version (1.18.0 here) can be obtained in step 5.3, or queried with rpm -qa | grep kube.
10.1.0.0/16 is the Service IP range (--service-cidr); 10.244.0.0/16 is the Pod IP range (--pod-network-cidr).
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.136.130 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.136.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.136.130]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.136.130:6443 --token d9k5y2.wrgrun1fldavpn0m \
    --discovery-token-ca-cert-hash sha256:3dda021ab58d28081554bfe4969f0fb3660d49b83096fd0fe1c04b7629b814c9
Save the generated kubeadm join command; it must be run on each Node to join it to the cluster.
Because kubeadm pulls its images from k8s.gcr.io by default, which is unreachable from mainland China, --image-repository is used to point at the Aliyun mirror registry.
Then run the following commands in order:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
source <(kubectl completion bash)
kubectl get nodes
kubectl get pod --all-namespaces
The nodes show NotReady because the CoreDNS pods have not started yet: the network plugin is still missing.
5.5 Install the flannel network
Run:
docker pull registry.cn-hangzhou.aliyuncs.com/wy18301/flannel-v0.11.0-amd64:v0.11.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/wy18301/flannel-v0.11.0-amd64:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
Or run: wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
5.6 Verify the cluster node status
Run kubectl get cs
to check the cluster status and confirm that every component is Healthy:
Run kubectl get node
Run kubectl get pod -A
6. Installing the Dashboard
Install the Dashboard with kubectl apply.
The yaml and images need to be uploaded to the environment in advance:
kubectl apply -f kubernetes-dashboard.yaml
Or create it from the official manifest (often unreachable because of network restrictions on foreign sites):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the Dashboard with the token from the output.
If the browser blocks the self-signed certificate, type thisisunsafe directly on the warning page.
The final, successful UI page: