This article covers building a k8s cluster, continuous deployment of images, and using the Kubernetes Dashboard to manage and monitor services.
Official documentation: https://kubernetes.io/zh/docs/home/
Kubernetes documentation (Chinese): http://docs.kubernetes.org.cn/683.html
Before we start
Environment: CentOS 7, with Docker and a JDK pre-installed
The official project provides several tools for standing up a cluster; this article uses kubeadm.
- kubeadm: the tool that bootstraps the cluster
- kubectl: the command-line tool for running commands against the cluster
- kubelet: the node agent that starts and stops pods
(1) Building the k8s cluster
Unless stated otherwise, run every step below on all three virtual machines.
- Prepare three virtual machines
- 2 GB of RAM or more per machine (less will squeeze the memory available to your applications) and 2 or more CPU cores
- Full network connectivity between all machines in the cluster (public or private network both work)
- No duplicate hostnames, MAC addresses, or product_uuids among the nodes
- Set the same time zone everywhere and a unique hostname on each machine
timedatectl set-timezone Asia/Shanghai # run on all three VMs
hostnamectl set-hostname master # on the VM chosen as the cluster's control-plane (master) node
hostnamectl set-hostname node1 # on worker node 1
hostnamectl set-hostname node2 # on worker node 2
- Add host entries; do this on all three VMs
- vim /etc/hosts
192.168.47.131 master
192.168.47.129 node1
192.168.47.130 node2
# Note: use your own machines' IPs; these are my VMs' addresses
- Once configured, test connectivity between the three VMs:
ping node1 # likewise ping master and ping node2
If the pings succeed, the hosts file is set up correctly.
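When typing the entries by hand it is easy to duplicate a hostname, so a quick consistency check can save a confusing debugging session later. A hedged sketch that works on a temporary copy (the IPs are the lab addresses used in this article):

```shell
# Sketch: sanity-check a hosts snippet before appending it to /etc/hosts.
# Works on a temp file; the IPs are the ones from this article's lab.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.168.47.131 master
192.168.47.129 node1
192.168.47.130 node2
EOF
# every hostname must appear exactly once
dups=$(awk '{print $2}' "$hosts" | sort | uniq -d)
[ -z "$dups" ] && echo "hosts entries look consistent"
```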
- Bridge network configuration (lets iptables see bridged traffic)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
- Install kubeadm, kubectl, and kubelet
① Configure the yum repository (reaching this mirror may require a proxy)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
② Disable SELinux and the firewall
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo systemctl stop firewalld && sudo systemctl disable firewalld
③ Install the tools
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
④ Enable kubelet to start on boot
sudo systemctl enable --now kubelet
- Configure the cgroup driver
The official docs advise against the cgroupfs driver; switch both kubelet and Docker to the systemd driver.
- Change the kubelet cgroup driver
① Find the name of the kubelet ConfigMap (kubectl only works once the control plane is up, so revisit this step after `kubeadm init`):
kubectl get cm -n kube-system | grep kubelet-config
② Edit the cgroupDriver setting (replace x.yy in the command with the ConfigMap version found in the previous step):
kubectl edit cm kubelet-config-x.yy -n kube-system
③ Find, change, and save the following setting; add it if it is missing:
cgroupDriver: systemd
- Change the Docker cgroup driver
Check whether /etc/docker/daemon.json already exists; registry mirrors are usually configured in the same file, in which case merge the settings below into it instead of overwriting.
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
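If daemon.json already holds settings such as a registry mirror, the merged file is best written by hand and then validated before docker is restarted, since a JSON syntax error stops docker from starting. A hedged sketch of what the merged file could look like (the mirror URL is a placeholder, and the file is written to a temp path here for illustration; on a real node the target is /etc/docker/daemon.json):

```shell
# Sketch: a daemon.json that keeps an existing registry mirror (placeholder URL)
# alongside the systemd cgroup driver. Written to a temp file for illustration.
out=$(mktemp)
cat > "$out" <<'EOF'
{
  "registry-mirrors": ["https://example.mirror.invalid"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
# validate the JSON before restarting docker (python3 is usually available on CentOS 7)
python3 -m json.tool "$out" > /dev/null && echo "daemon.json OK"
```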
- Restart docker so the new daemon.json takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker
- Disable the swap partition
swapoff -a # turn swap off for the current boot
vim /etc/fstab
Comment out this line in the file so swap stays off after a reboot:
#/dev/mapper/cl-swap swap swap defaults 0 0
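The fstab edit can also be scripted instead of done in vim, which is handy when preparing several nodes. A hedged sketch that comments out any swap entry, demonstrated here against a throwaway copy (the device names are examples; on a real node the target is /etc/fstab):

```shell
# Sketch: comment out swap entries in fstab automatically.
# Demonstrated on a temp copy so nothing on the host is touched.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF
# prefix '#' to every line whose type field says swap; other lines are untouched
sed -i '/\sswap\s/ s/^/#/' "$fstab"
cat "$fstab"
```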
- Download the Kubernetes images
- The following commands show and pull the images the cluster needs:
kubeadm config images list # list the required images
kubeadm config images pull # pull them
Unfortunately, pulling these also requires a proxy. I have prepared an image bundle for Kubernetes v1.22.1 (download link at the end of the article); load it into your local image store and no proxy is needed:
docker load < kubernetes.v1.22.1.jar
With the basic environment ready on all three machines, run the cluster initialization command (on the master only):
kubeadm init --kubernetes-version=v1.22.1 --pod-network-cidr=10.244.0.0/16 ## initialization command
kubeadm reset ## reset command; do not run it now (only to undo a failed init)
Wait a few seconds...
If a log like the following appears, cluster initialization succeeded!
kubeadm join 192.168.47.131:6443 --token pwymjo.y9nrkixjh7h0dn4m \
--discovery-token-ca-cert-hash sha256:75477b27df79b1059fb92a80ee6c3fdc03fa0a3a3ce79ad64225d6a3b2265fc1
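The token and CA-cert hash in that output are needed again when joining the workers, so if you save the init log it can be convenient to pull them out with a little sed. A sketch using the sample join line above (the values are the ones from my run):

```shell
# Sketch: extract the join token and CA-cert hash from saved `kubeadm init` output.
# The sample line is the one shown in this article.
join_line='kubeadm join 192.168.47.131:6443 --token pwymjo.y9nrkixjh7h0dn4m --discovery-token-ca-cert-hash sha256:75477b27df79b1059fb92a80ee6c3fdc03fa0a3a3ce79ad64225d6a3b2265fc1'
token=$(echo "$join_line" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
cahash=$(echo "$join_line" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token"
echo "hash=$cahash"
```

Note that join tokens expire after 24 hours by default; a fresh join command can be printed on the master at any time with `kubeadm token create --print-join-command`.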
You still need to copy the cluster config file into your home directory:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are root, you can set an environment variable:
export KUBECONFIG=/etc/kubernetes/admin.conf
Think that completes the cluster setup? Not quite; two important tasks remain:
- Add the network add-on: flannel
Pod-to-pod network traffic relies on flannel, so we need to deploy it:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The URL above may be blocked; in that case use the YAML I provide below.
- ① Create the manifest file: vim kube-flannel.yaml
- ② Create the flannel pods: kubectl create -f kube-flannel.yaml
kube-flannel.yaml:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
- Join the workers to the master to form a one-master, multi-node cluster
Run the join command (the one printed by `kubeadm init`) on node1 and node2:
kubeadm join 192.168.47.131:6443 --token fh6yxx.afd4rkeyxqnxdj07 \
--discovery-token-ca-cert-hash sha256:7e6ac478cab19d7d54267e63bf4e6c1b311968b04ef627184923e7401bf6bae7
If output like this appears, the node joined successfully!
On the master, run:
kubectl get node ## check node status
kubectl get pod --all-namespaces ## check pod status across all namespaces
When all three nodes show Ready and every pod is Running, the cluster is up and working.
---- Container management, deployment, and monitoring in k8s are covered in the next installment ----
Download link for the kubernetes.v1.22.1.jar bundle:
Link: https://pan.baidu.com/s/1Oq4Pvbyis7bls2lTiYvCxQ
Extraction code: abcd