Cloud Native - Kubernetes (K8s)

Kubernetes (K8s)

Chapter 1: Installing a K8s Cluster on Ubuntu

Ubuntu 20.04 LTS; K8s v1.23.5

The master server needs 4 CPUs, 8 GB of RAM, and 20 GB of disk; the two worker servers each need 8 CPUs, 16 GB of RAM, and 40 GB of disk.

1.1 Set the hostnames

On the master server, set the hostname to k8s-master:

sudo hostnamectl set-hostname k8s-master

On the worker servers, set the hostnames to k8s-worker1 and k8s-worker2 respectively:

sudo hostnamectl set-hostname k8s-worker1
sudo hostnamectl set-hostname k8s-worker2
1.2 Configure host resolution

Edit the hosts file:

sudo vim /etc/hosts

Add the IP-to-hostname mappings:

192.168.234.141 k8s-master
192.168.234.142 k8s-worker1
192.168.234.143 k8s-worker2
1.3 Disable the firewall
sudo systemctl stop ufw.service
sudo systemctl disable ufw.service
1.4 Disable swap
sudo systemctl stop swap.target
sudo systemctl disable swap.target

sudo systemctl stop swap.img.swap

Edit the fstab file:

sudo vim /etc/fstab

Comment out the following line:

/swapfile  none  swap  sw  0  0

Reboot the virtual machine, then check whether swap is disabled:

free -m
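If you would rather not reboot right away, swap can also be turned off for the current session with swapoff; the fstab edit above keeps it off after the next boot (a small optional shortcut, not part of the original steps):

# Disable swap for the running session, effective immediately
sudo swapoff -a
# The Swap line in the output should now show 0
free -m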
1.5 Enable IPv4 forwarding
sudo tee /etc/sysctl.d/k8s.conf <<-'EOF' 
net.ipv4.ip_forward = 1
EOF

sudo sysctl -p /etc/sysctl.d/k8s.conf

Install ipvsadm:

sudo apt install -y ipvsadm 

Load all of the following modules into the kernel:

# Switch to the root user
sudo su


cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs_dh
ip_vs_fo
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_mh
ip_vs_nq
ip_vs_ovf
ip_vs_pe_sip
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
nf_conntrack
EOF


systemctl enable --now systemd-modules-load.service
reboot

Reboot the virtual machine, then run:

lsmod | grep ip_vs
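To confirm that the forwarding setting written to k8s.conf above survived the reboot, read the sysctl value back; it should print 1 (a quick optional check):

sysctl net.ipv4.ip_forward
# expected output: net.ipv4.ip_forward = 1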
1.6 Install Docker

Install Docker with the convenience script:

sudo apt install -y curl
curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh --mirror Aliyun

Then adjust the Docker daemon options:

sudo mkdir -vp /etc/docker/

sudo tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "storage-driver": "overlay2"
}
EOF

Save and exit, then start Docker:

sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl enable docker

Finally, check the Docker version:

docker --version
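Since kubelet expects the systemd cgroup driver configured in daemon.json above, it is worth confirming that Docker actually picked it up (a quick optional check):

docker info | grep -i "cgroup driver"
# should report: Cgroup Driver: systemd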
1.7 Install kubeadm

Official documentation: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Alibaba Cloud mirror documentation: https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11xLY6VN

Update the apt package index and install the packages needed to use the Kubernetes apt repository:

sudo apt-get update 
sudo apt-get install -y apt-transport-https

Install the GPG key:

sudo su
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

Configure the repository:

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Install the packages:

sudo apt-get update
sudo apt-get install -y kubelet=1.23.5-00 kubeadm=1.23.5-00 kubectl=1.23.5-00

Note: do not install a version that is too new, otherwise initialization may fail with "Initial timeout of 40s passed"; no workaround has been found for that yet.

Enable kubelet:

sudo systemctl enable kubelet.service 
kubeadm version
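To keep apt from upgrading these pinned versions during routine updates, the packages can be put on hold; this is standard apt-mark usage rather than a step from the original text:

sudo apt-mark hold kubelet kubeadm kubectl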
1.8 Initialize the master node

Run the following command on the master node (replace the master IP with your own):

sudo kubeadm init \
--apiserver-advertise-address=192.168.234.141 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 
Parameter descriptions:
  • --apiserver-advertise-address: the address the apiserver advertises
  • --control-plane-endpoint: the control-plane endpoint, set to a hostname
  • --image-repository: the image repository to pull from
  • --pod-network-cidr: the IP range used for the Pod network, fixed here at 10.244.0.0/16
  • --service-cidr: the IP range used for the Service network, fixed here at 10.96.0.0/12

Copy the last two lines of the output:

kubeadm join 192.168.234.141:6443 --token sp4wi7.fhwy9cm5ye0835lp \
        --discovery-token-ca-cert-hash sha256:d86a24672a3aaa6da9c53407f6f6a3317932ebd552c3c7ee59278c03ee01b672

On the master, as a non-root user, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the Calico network plugin on the master node (the manifest downloaded below is Calico, not Flannel):

curl https://docs.projectcalico.org/archive/v3.20/manifests/calico.yaml -O
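The downloaded manifest still has to be applied; a minimal follow-up, assuming the calico.yaml file fetched above:

kubectl apply -f calico.yaml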

Keep running kubectl get pod -A until every pod is in the Running state.

1.9 Join the worker nodes to the cluster

Copy /etc/kubernetes/admin.conf from the master to the same path on each worker. Then run the following on both workers:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

Take the two lines copied earlier and run them on both workers:

kubeadm join 192.168.234.141:6443 --token sp4wi7.fhwy9cm5ye0835lp \
        --discovery-token-ca-cert-hash sha256:d86a24672a3aaa6da9c53407f6f6a3317932ebd552c3c7ee59278c03ee01b672

Check with kubectl get nodes (it takes roughly 3-5 minutes for the nodes to become Ready):
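If you join a node later and the original token has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be printed on the master; this is standard kubeadm usage rather than a step from the original write-up:

kubeadm token create --print-join-command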

1.10 Install the cluster metrics component

Create a metrics.yaml file on the master node:

vim metrics.yaml

Paste in the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
kubectl apply -f metrics.yaml
# wait about half a minute
kubectl top nodes
kubectl top pods -A
1.11 Install the dashboard

Create a dashboard.yaml file on the master node:

vim dashboard.yaml

Paste in the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
kubectl apply -f dashboard.yaml
kubectl get pod -A

Wait until both kubernetes-dashboard Pods are Running, then:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Then change type: ClusterIP to type: NodePort in the opened manifest. Run the following command to find the NodePort assigned to the kubernetes-dashboard service (assume it is 30753 here):

kubectl get svc -A | grep kubernetes-dashboard
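Alternatively, the Service type can be switched without the interactive editor; this kubectl patch one-liner is equivalent (not part of the original steps):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'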

You can now open https://<node-ip>:30753 in a browser against any node in the cluster. If the browser warns that the connection is not private, type thisisunsafe on the keyboard to proceed to the page:

Create a dashboard-user.yaml file on the master node:

vim dashboard-user.yaml

Paste in the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f dashboard-user.yaml
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

This prints the login token; enter it on the page from the previous step to reach the dashboard:

Chapter 2: Deploying a Highly Available K8s Cluster with KubeKey

KubeKey (written in Go) is an installation tool that can install Kubernetes on its own, or Kubernetes and KubeSphere together.

KubeKey requirements:

  1. 2 cores, 4 GB of RAM, 20 GB of storage
  2. A clean operating system (no other software installed) is recommended, otherwise conflicts may occur
  3. All cluster nodes have synchronized clocks, are reachable over ssh, and can use the sudo, curl, and openssl commands
  4. selinux is disabled on all cluster nodes
  5. The firewall is disabled on all cluster nodes
2.1 Load balancing options
2.1.1 External load balancing

A load balancing scheme built from keepalived and haproxy:

2.1.2 Internal load balancing (recommended)

Each worker node runs haproxy to proxy the apiserver of the master control plane to a local address; the kubelet and kube-proxy components on the worker then reach the apiserver through that local address.

2.2 Building a highly available K8s cluster (internal load balancing)

Node name, address, and role:
  • node1 (192.168.234.145): master, etcd
  • node2 (192.168.234.146): master, etcd
  • node3 (192.168.234.147): master, etcd
  • node4 (192.168.234.148): worker
  • node5 (192.168.234.149): worker
  • 192.168.234.150: vip
2.2.1 Check the environment on every node

See Chapter 8 of the Linux document for the initial Ubuntu configuration.

Install socat, conntrack, and curl:

sudo apt install socat
sudo apt install conntrack
sudo apt install curl -y

Check whether selinux is disabled; if getenforce prints Disabled, it is off:

sudo apt install selinux-utils

getenforce

Check that swap is disabled:

sudo systemctl stop swap.target
sudo systemctl disable swap.target

sudo systemctl stop swap.img.swap

# Edit the fstab file
sudo vim /etc/fstab
# Comment out the following line
/swapfile  none  swap  sw  0  0

# Reboot the server and check whether swap is disabled
sudo reboot
free -m

Check that the firewall is disabled:

sudo systemctl stop ufw.service
sudo systemctl disable ufw.service

Set the time zone:

tzselect

# Choose Asia, confirm, then choose China, then Beijing, and finally option 1


date -R
# Check that the time is correct and that the offset is +0800

Adjust the ssh configuration:

sudo vim /etc/ssh/sshd_config
# Change PasswordAuthentication no to yes, save, then restart the sshd service
sudo systemctl restart sshd
2.2.2 Download KubeKey

If you can reach GitHub/Googleapis normally (retry a few times if needed):

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk

If access to GitHub/Googleapis is restricted:

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk

List the K8s versions the current KubeKey release can install:

./kk version --show-supported-k8s
2.2.3 Create the default configuration file
  • --with-kubernetes specifies the K8s version to install
  • -f config.yaml specifies the configuration file name; if omitted, it defaults to config-sample.yaml
./kk create config --with-kubernetes v1.21.5 -f config.yaml

Modify config.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.234.145, internalAddress: 192.168.234.145, user: root, password: "123"}
  - {name: node2, address: 192.168.234.146, internalAddress: 192.168.234.146, user: root, password: "123"}
  - {name: node3, address: 192.168.234.147, internalAddress: 192.168.234.147, user: root, password: "123"}
  - {name: node4, address: 192.168.234.148, internalAddress: 192.168.234.148, user: root, password: "123"}
  - {name: node5, address: 192.168.234.149, internalAddress: 192.168.234.149, user: root, password: "123"}
  roleGroups:
    etcd:
    - node1                   # all nodes in the cluster used as etcd nodes
    - node2
    - node3
    master: 
    - node1                   # all nodes in the cluster used as masters
    - node2
    - node3
    worker:
    - node4                   # all nodes in the cluster used as workers
    - node5
  controlPlaneEndpoint:
    # internal load balancer for the control plane; haproxy and kube-vip are supported
    internalLoadbalancer: haproxy
    # the load balancer's default internal domain is lb.kubesphere.local
    domain: lb.kubesphere.local  
    # the load balancer's IP address; a VIP is required here when internalLoadbalancer is set to "kube-vip"
    address: ""     
    port: 6443
  kubernetes:
    # Kubernetes version to install
    version: v1.21.5
    # Kubernetes cluster name
    clusterName: cluster.local
    # container runtime; docker, containerd, cri-o, and isula are supported (docker by default)
    containerManager: docker
  etcd:
    # etcd type used by the cluster; kubekey, kubeadm, and external are supported (kubekey by default)
    type: kubekey
  network:
    plugin: calico
    # valid CIDR block for the Pod subnet
    kubePodsCIDR: 10.233.64.0/18
    # valid CIDR block for Services
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    # configure Docker registry mirrors to speed up image pulls
    registryMirrors: []
    # addresses of insecure image registries
    insecureRegistries: []
  # used to install cloud-native add-ons (Chart or YAML)
  addons: []
2.2.4 Create the cluster and KubeSphere from the configuration file
  • --with-kubesphere installs KubeSphere together with the K8s cluster; if no version is given, the latest KubeSphere is installed

Make sure all node servers are powered on, then run:

export KKZONE=cn
./kk create cluster -f config.yaml --with-kubesphere v3.2.0
# type yes, then wait roughly 15 minutes
2.2.5 Set up kubectl credentials on the worker nodes

Copy /etc/kubernetes/admin.conf from the primary node node1 to the same path on worker nodes node4 and node5. Then run the following on both workers:

sudo chmod 777 /etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
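To confirm that kubectl now works on each worker, a simple verification (not part of the original steps) is:

kubectl get nodes -o wide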

2.3 Enable the app store and other pluggable components
  1. Log in to the console as admin, click "Platform" in the top-left corner, then choose "Cluster Management"
  2. Click "CRDs", search for clusterconfiguration, and open the result's detail page
  3. In the resource list, click the three dots to the right of ks-installer and choose "Edit YAML"
  4. In that YAML file, change enabled: false to enabled: true for the components you need. When finished, click "Update" in the bottom-right corner to save the configuration.
openpitrix:
  store:
    enabled: true 
devops:
  enabled: true
logging:
  enabled: true
events:
  enabled: true 
alerting:
  enabled: true
auditing:
  enabled: true
servicemesh:
  enabled: true 
network:
  networkpolicy:
    enabled: true
metrics_server:
  enabled: true 
network:
  ippool:
    type: calico   # change none to calico

By default, a built-in Elasticsearch is installed if the logging and events systems are enabled. For production environments, if you want to enable the events system, it is strongly recommended to set the following values in this YAML file, especially externalElasticsearchUrl and externalElasticsearchPort. Once this information is provided, KubeSphere connects directly to your external Elasticsearch and no longer installs the built-in one:

es:                                    # Storage backend for logging, tracing, events and auditing.
  elasticsearchMasterReplicas: 1       # The total number of master nodes. Even numbers are not allowed.
  elasticsearchDataReplicas: 1         # The total number of data nodes.
  elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
  elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
  logMaxAge: 7                         # Log retention day in built-in Elasticsearch. It is 7 days by default.
  elkPrefix: logstash                  # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  externalElasticsearchUrl:            # The URL of external Elasticsearch.
  externalElasticsearchPort:           # The port of external Elasticsearch.

Chapter 3: K8s Basics

3.1 K8s features
  • Service discovery and load balancing: K8s can load-balance and distribute network traffic to keep deployments stable
  • Storage orchestration: K8s can automatically mount the storage system of your choice
  • Automated rollouts and rollbacks: you describe the desired state of deployed containers and K8s converges to it
  • Automatic bin packing: K8s lets you specify the CPU and memory each container needs
  • Self-healing: K8s restarts failed containers, replaces containers, kills unresponsive containers, and does not advertise them to clients until they are ready
  • Secret and configuration management: K8s stores and manages sensitive information such as passwords, OAuth tokens, and SSH keys
3.2 K8s component architecture (⭐)

3.2.1 Control Plane Components
  • controller-manager: runs the controllers on the control plane and maintains the desired number of replicas
  • apiserver: the single entry point for all access to the cluster
  • etcd: K8s's default datastore, used mainly for shared configuration and service discovery; it replicates its log with the Raft consensus algorithm to guarantee strong consistency
  • scheduler: accepts workloads and chooses a suitable node for each one
3.2.2 Node components
  • kubelet: the agent running on every node; it creates, schedules, and maintains Pods
  • kube-proxy: the network proxy running on every node; it provides Service access and load balancing for Pods
  • docker: provides the container runtime
3.2.3 Addons
  • CoreDNS: service discovery; creates DNS name-to-IP mappings for Services in the cluster
  • Dashboard: a general-purpose, web-based UI for Kubernetes clusters
  • Ingress Controller: Services officially provide only layer-4 proxying; Ingress adds layer-7 proxying
  • Prometheus: monitors the K8s cluster
  • ELK: a unified log collection and analysis platform for the K8s cluster
  • Network plugins: software components implementing the Container Network Interface (CNI) specification; they assign IP addresses to Pods and enable Pod-to-Pod communication inside the cluster
3.3 The K8s YAML file

Top-level fields (a worked example follows this list):
  • apiVersion (String): the API version (usually v1); list available versions with kubectl api-versions
  • kind (String): the resource type, such as Pod, Deployment, Job, Service, Ingress, PersistentVolume, etc.
  • metadata (Object): the resource's metadata
  • metadata.name (String): the resource name
  • metadata.namespace (String): the resource's namespace
  • metadata.labels (Object): the resource's labels, as key-value pairs
  • spec (Object): the desired parameters and properties of the resource
  • spec.restartPolicy (String): the Pod restart policy. Always: restart the Pod whenever it exits (default); OnFailure: restart only on abnormal exit; Never: do not restart after exit
  • spec.hostNetwork (Bool): whether to use the host network, false by default; true means the Pod shares the host's network namespace
  • spec.nodeSelector (Object): a label selector, as key-value pairs
Container parameters:
  • spec.containers[] (List): the list of container objects
  • spec.containers[].name (String): the container name
  • spec.containers[].image (String): the image the container uses
  • spec.containers[].imagePullPolicy (String): the image pull policy. Always: pull on every start; IfNotPresent: use the local image if present, otherwise pull; Never: only use the local image
  • spec.containers[].command[] (List): the container start command (may have several entries); if omitted, the image's default command is used
  • spec.containers[].args[] (List): arguments to the start command (may have several entries)
  • spec.containers[].workingDir (String): the container's working directory
Volume mount parameters:
  • spec.containers[].volumeMounts[] (List): the container's volume mounts (may have several entries)
  • spec.containers[].volumeMounts[].name (String): the mounted volume's name
  • spec.containers[].volumeMounts[].mountPath (String): the mount path
  • spec.containers[].volumeMounts[].readOnly (Bool): the access mode, true for read-only, false for read-write (default)
Port parameters:
  • spec.containers[].ports[] (List): the ports the container uses
  • spec.containers[].ports[].name (String): the port name
  • spec.containers[].ports[].containerPort (String): the container port number
  • spec.containers[].ports[].hostPort (Number): the host port to listen on, defaults to the same value as containerPort
  • spec.containers[].ports[].protocol (String): the protocol, tcp by default
Environment parameters:
  • spec.containers[].env[] (List): the container's environment variables
  • spec.containers[].env[].name (String): the environment variable name
  • spec.containers[].env[].value (String): the environment variable value
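To make the field list concrete, here is a minimal hypothetical Pod manifest that exercises several of the fields above (the names and image are placeholders chosen only for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # metadata.name
  namespace: default        # metadata.namespace
  labels:
    app: demo               # metadata.labels
spec:
  restartPolicy: Always     # spec.restartPolicy
  containers:
  - name: web               # spec.containers[].name
    image: nginx:1.25       # spec.containers[].image
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80     # spec.containers[].ports[].containerPort
    env:
    - name: TZ              # spec.containers[].env[].name
      value: Asia/Shanghai  # spec.containers[].env[].value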
3.4 K8s namespaces

A K8s Namespace partitions cluster resources for isolation; it isolates resources only, not the network.

3.4.1 Common Namespace commands
  • kubectl get ns: list all namespaces. K8s creates three initial namespaces: default (objects created without a namespace are placed here), kube-system (resources created by the K8s system itself), and kube-public (resources here are readable by everyone, including unauthenticated users)
  • kubectl create ns <name>: create a namespace
  • kubectl delete ns <name>: delete a namespace
  • kubectl get pods -n kube-system: list the Pods in the kube-system namespace
  • kubectl config set-context --current --namespace=kube-system: switch to the kube-system namespace
3.4.2 Managing namespaces with a YAML file

Create a hello.yaml file:

vim hello.yaml

Enter the following content:

apiVersion: v1
kind: Namespace
metadata:
   name: hello
   labels:
     name: hello

Create the hello namespace from the YAML file:

kubectl apply -f hello.yaml

Deleting with the same YAML file removes the namespace:

kubectl delete -f hello.yaml
3.5 K8s Pods

To manage containers, K8s wraps Docker containers in another layer, the Pod. A Pod is the smallest unit of an application in K8s and contains one or more containers. The containers in a Pod share the same IP address and namespaces, and share the same volumes and host resources.

3.5.1 Pod phases
  • Pending: the Pod has been accepted by the cluster, but it is not running yet (for example, images are still being pulled or networking is not ready)
  • Running: the Pod has been bound to a node and all of its containers have been created; at least one container is still running, or is starting or restarting
  • Succeeded: all containers in the Pod terminated successfully and will not be restarted
  • Failed: all containers in the Pod have terminated and at least one terminated in failure, i.e. it exited with a non-zero status or was killed by the system
  • Unknown: the Pod's state could not be obtained, usually because communication with the Pod's host failed

When a Pod is being deleted, some kubectl commands show it as Terminating. Terminating is not one of the Pod phases; the Pod is given a grace period for clean termination, 30 seconds by default.

3.5.2 Common Pod commands
  • kubectl run <pod-name> --image=<image> [-n <namespace>]: create a Pod (namespace optional)
  • kubectl get pod -A: show all Pods
  • kubectl get pod -o wide: show all Pods with more detail
  • kubectl get pod -w: watch Pod status continuously
  • kubectl describe pod <pod-name>: show the detailed state of a Pod
  • kubectl logs <pod-name> [-c <container-name>]: show a Pod's logs
  • kubectl exec -it <pod-name> -c <container-name> -- /bin/bash: open a shell inside a container of the Pod
  • kubectl delete pod <pod-name>: delete a Pod
3.6 K8s workloads

In K8s we generally do not create Pods directly. Instead we create workloads such as Deployment, StatefulSet, DaemonSet, and Job / CronJob, which give Pods horizontal scaling, rolling updates, failure recovery, and similar capabilities.

A Deployment gives its Pods multiple replicas, self-healing, and scaling. Common deployment commands are listed below, followed by a short end-to-end example.

Create, scale, and delete:
  • kubectl create deploy <name> --image=<image>: create a deployment
  • kubectl create deploy <name> --image=<image> --replicas=3: create a deployment with 3 replicas
  • kubectl scale deploy/<name> --replicas=5: scale out to 5 replicas
  • kubectl scale deploy/<name> --replicas=2: scale in to 2 replicas
  • kubectl get deploy: list deployments
  • kubectl delete deploy <name>: delete a deployment

Updating an application usually means updating the image version:

Rolling updates and rollbacks:
  • kubectl set image deploy/<name> <container>=<image>:<tag> --record: roll out a new image version
  • kubectl get deploy/<name> -oyaml | grep image: check which image is currently in use
  • kubectl rollout history deploy/<name>: show the revision history
  • kubectl rollout undo deploy/<name> --to-revision=<revision>: roll back to a specific revision
  • kubectl rollout undo deploy/<name>: roll back to the previous revision
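A short end-to-end run of these commands, using a hypothetical deployment named my-nginx built from the public nginx image:

# create, inspect, and scale
kubectl create deploy my-nginx --image=nginx --replicas=3
kubectl get deploy my-nginx
kubectl scale deploy/my-nginx --replicas=5
# roll out a new image version, then roll it back
kubectl set image deploy/my-nginx nginx=nginx:1.25 --record
kubectl rollout history deploy/my-nginx
kubectl rollout undo deploy/my-nginx
# clean up
kubectl delete deploy my-nginx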
3.7 K8s Services
3.7.1 Service access

A Service is an abstraction that exposes a group of Pods as a network service, providing service discovery and load balancing for the Pods.

Common Service commands (an example follows the list):
  • kubectl get svc: list all Services
  • kubectl expose deploy <name> --port=<service-port> --target-port=<pod-port> --type=ClusterIP: expose the service inside the cluster only (ClusterIP)
  • kubectl expose deploy <name> --port=<service-port> --target-port=<pod-port> --type=NodePort: also expose the service outside the cluster (NodePort)
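For example, the hypothetical my-nginx deployment from the example above could be exposed both ways like this (the ports and Service names are illustrative):

# reachable only from inside the cluster
kubectl expose deploy my-nginx --port=8000 --target-port=80 --type=ClusterIP --name=my-nginx-internal
# also reachable from outside on a node port (30000-32767)
kubectl expose deploy my-nginx --port=8000 --target-port=80 --type=NodePort --name=my-nginx-external
kubectl get svc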
3.7.2 Ingress as the unified gateway

Ingress is the unified gateway in front of Services. Under the hood it is essentially a reverse-proxying nginx that routes to different Services according to domain-name and path rules:

Install Ingress on the master node:

vim ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
              protocol: TCP
            - name: https
              hostPort: 443
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
kubectl apply -f ingress.yaml
kubectl get pod -A

Get the ports the Ingress controller exposes:

kubectl get svc -A

An external browser can then reach the cluster through any node's IP, e.g. http://<ip>:30266 and https://<ip>:31192 (these NodePort values come from this example and will differ on your cluster).
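Once the controller is running, routing rules are declared as Ingress resources. A minimal hypothetical example, with a placeholder host name and the example Service from section 3.7.1 as the backend:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx        # matches the IngressClass installed above
  rules:
  - host: demo.example.com       # requests for this host...
    http:
      paths:
      - path: /                  # ...and this path prefix
        pathType: Prefix
        backend:
          service:
            name: my-nginx-external   # are forwarded to this Service
            port:
              number: 8000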

3.8 K8s storage abstraction
3.8.1 The nfs storage layer

In Docker, container data is persisted by mounting volumes. In K8s, however, if a Pod on machine 3 goes down, failover restarts it on machine 2, but the directory that was mounted on machine 3 does not move with it, so the stored data is lost.

K8s therefore manages all Pod-mounted directories through a storage layer. Taking nfs as an example, one machine exposes a /nfs/data directory for mounting data, and the other machines keep a backup directory such as /bak/data. As soon as a Pod writes data on one machine, that data is synchronized to the others; this is what is meant by the storage abstraction.

3.8.2 PV & PVC volume mounting

PV: Persistent Volume; stores the data an application needs to persist at a designated location

PVC: Persistent Volume Claim; declares the specification of the persistent volume that is needed

K8s mounts data through the PV & PVC mechanism. When a Pod needs a certain amount of space, it first requests PV space through a PVC. If the Pod is deleted, its PVC is deleted with it and the corresponding PV space is reclaimed.
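A minimal sketch of the two objects, assuming the nfs share used elsewhere in this document (192.168.234.141:/nfs/data) and purely illustrative sizes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-10g
spec:
  capacity:
    storage: 10Gi              # how much space this PV offers
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.234.141    # the nfs server (the master node in this setup)
    path: /nfs/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi             # the Pod asks for 5Gi; K8s binds a matching PV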

3.9 K8s ConfigMap

A ConfigMap extracts application configuration so that it can be updated automatically and shared by multiple Pods, which makes it well suited to mounting configuration-file data.

3.9.1 Turn a configuration file into a ConfigMap

Create a Redis configuration file on the master node:

vim redis.conf

The content of redis.conf is:

appendonly yes

Create a ConfigMap named redis-conf from that file; the original file can be deleted afterwards:

kubectl create cm redis-conf --from-file=redis.conf
kubectl get cm redis-conf -o yaml
rm -rf redis.conf
3.9.2 Create the Pod

Create redis.yaml on the master node with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
      - redis-server
      - "/redis-master/redis.conf"  
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data                 # mounts the data volume defined below
    - mountPath: /redis-master
      name: config               # mounts the config volume defined below
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-conf
        items:
        - key: redis.conf
          path: redis.conf


kubectl apply -f redis.yaml
kubectl get pod

Wait until the Pod named redis is Running, then open the redis container and check our setting:

kubectl exec -it redis -- redis-cli
127.0.0.1:6379> CONFIG GET appendonly
127.0.0.1:6379> exit

If we now modify the configuration:

kubectl edit cm redis-conf

After making changes, save and exit. Checking inside the redis container shows the value has not changed: the redis Pod must be restarted before it picks up the updated values from the associated ConfigMap, because the middleware deployed in our Pod has no hot-reload capability of its own.

3.10 Secrets for sensitive configuration

The Secret object type stores sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

Normally we cannot pull a private image from an image registry, but we can configure the registry credentials through a K8s Secret.

Create a Secret in K8s like this:

kubectl create secret docker-registry <secret-name> \
  --docker-server=<registry-server> \
  --docker-username=<registry-username> \
  --docker-password=<registry-password> \
  --docker-email=<registry-email>

kubectl get secret <secret-name> -o yaml

Then the Secret can be used to pull the image when creating a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
spec:
  containers:
  - name: <container-name>
    image: <registry>/<private-image>:<tag>
  imagePullSecrets:
  - name: <secret-name>

Chapter 4: Installing KubeSphere (v3.3.2) on a K8s Cluster

Installing KubeSphere on Kubernetes

  • The Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, or v1.24.x
  • Make sure your machines meet the minimum hardware requirements: master 4 cores / 8 GB, worker 8 cores / 16 GB
  • A default storage class must be configured in the Kubernetes cluster before installation
4.1 Install the default storage class

Install the nfs server package on every node:

sudo apt install -y nfs-kernel-server

Run the following on the master node:

sudo su
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
exportfs

The following output indicates success:

Run the following on the worker nodes (replace the IP with your own master's IP):

showmount -e 192.168.234.141
sudo mkdir -p /nfs/data
sudo mount -t nfs 192.168.234.141:/nfs/data /nfs/data

Create sc.yaml on the master node (edit the values marked by comments):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.234.141 ## your own nfs server address (master node IP)
            - name: NFS_PATH  
              value: /nfs/data       ## the directory shared by the nfs server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.234.141  ## your own nfs server address (master node IP)
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f sc.yaml
kubectl get sc
4.2 Deploy KubeSphere

On the master:

wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml
4.3 Modify cluster-configuration.yaml
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""        
  authentication:
    jwtSecret: ""           
  local_registry: ""        
  etcd:
    monitoring: true        
    endpointIps: 192.168.234.141  ## change to your own master's IP
    port: 2379             
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  
        port: 30880
        type: NodePort
    redis:
      enabled: true
      enableHA: false
      volumeSize: 2Gi   # Redis PVC size.
    openldap:
      enabled: true
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi  # Minio PVC size.
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 
      GPUMonitoring:     
        enabled: false
    gpu:               
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   
      logMaxAge: 7             
      elkPrefix: logstash     
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:               
    enabled: true 
  auditing:                
    enabled: true         
  devops:                  
    enabled: true 
    jenkinsMemoryLim: 2Gi      
    jenkinsMemoryReq: 1500Mi   
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g  
  events:                  
    enabled: true 
  logging:                
    enabled: true 
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:                   
    enabled: false                   
  monitoring:
    storageClass: ""              
    node_exporter:
      port: 9100
    gpu:                           
      nvidia_dcgm_exporter:        
        enabled: false            
  multicluster:
    clusterRole: none  
  network:
    networkpolicy: 
      enabled: true 
    ippool: 
      type: calico
    topology: 
      type: none 
  openpitrix: 
    store:
      enabled: true 
  servicemesh:        
    enabled: true 
    istio:  
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
        cni:
          enabled: true
  edgeruntime:          
    enabled: false
    kubeedge:       
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress: 
            - ""            
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
      iptables-manager:
        enabled: true 
        mode: "external"
  gatekeeper:       
    enabled: false   
  terminal:
    timeout: 600         
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        
  authentication:
    jwtSecret: ""           
  local_registry: ""       
  etcd:
    monitoring: true        
    endpointIps: 192.168.234.141   ## change to your own master's IP
    port: 2379              
    tlsEnable: true
  common:
    redis:
      enabled: true
    openldap:
      enabled: true
    minioVolumeSize: 20Gi 
    openldapVolumeSize: 2Gi   
    redisVolumSize: 2Gi 
    monitoring:
      # type: external   
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 
    es:   
      # elasticsearchMasterReplicas: 1   
      # elasticsearchDataReplicas: 1     
      elasticsearchMasterVolumeSize: 4Gi   
      elasticsearchDataVolumeSize: 20Gi    
      logMaxAge: 7                     
      elkPrefix: logstash              
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  
    port: 30880
  alerting:                
    enabled: true         
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                
    enabled: true         
  devops:                  
    enabled: true             
    jenkinsMemoryLim: 2Gi      
    jenkinsMemoryReq: 1500Mi   
    jenkinsVolumeSize: 8Gi     
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:                 
    enabled: true         
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:                    
    enabled: false                   
  monitoring:
    storageClass: ""                 
    # prometheusReplicas: 1          
    prometheusMemoryRequest: 400Mi   
    prometheusVolumeSize: 20Gi       
    # alertmanagerReplicas: 1          
  multicluster:
    clusterRole: none  
  network:
    networkpolicy: 
      enabled: true 
    ippool: 
      type: calico
    topology: 
      type: none 
  openpitrix: 
    store:
      enabled: true
  servicemesh:         
    enabled: true
  kubeedge:          
    enabled: false   
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: 
          - ""            
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
kubectl apply -f kubesphere-installer.yaml
# wait roughly 3 minutes for it to install
kubectl get pod -A


kubectl apply -f cluster-configuration.yaml
# watch the kubesphere-system installation progress (roughly 5-10 minutes)
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

Fix for Jenkins restarting repeatedly: https://ask.kubesphere.io/forum/d/9252-331-jenkins

# fix the missing etcd monitoring certificate
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

kubectl get pod -A

# inspect a specific Pod's state; if it is still pulling the image, just wait
kubectl describe pod -n [NAMESPACE] [NAME]
4.4 Access KubeSphere

Open <machine-public-IP>:30880 in a browser and log in to the KubeSphere web console with the default account and password:

4.5 Enable pluggable components after installation
  1. Click "Platform" in the top-left corner, then choose "Cluster Management".
  2. Click "CRDs" and search for clusterconfiguration.
  3. In the resource list, click the three dots to the right of ks-installer and choose "Edit YAML".
  4. In the configuration file, change enabled from false to true for the components you want to install, then click "Update".
  5. Wait for the components to install, then log in to the KubeSphere console and check their status under "System Components".
Configuration item, component, and description:
  • alerting: alerting system; lets users define alerting policies and sends alerts to receivers in time
  • auditing: audit log system; records the activities of different tenants on the platform
  • devops: DevOps system; out-of-the-box CI/CD based on Jenkins, a one-stop DevOps solution with built-in Jenkins pipelines and B2I & S2I
  • events: events system; exports, filters, and alerts on Kubernetes events in multi-tenant clusters
  • logging: logging system; flexible log query, collection, and management in a unified console
  • metrics_server: HPA; scales the number of Pods dynamically according to configured metrics, so running services can adapt to load changes
  • networkpolicy: network policy; lets you define network policies inside a cluster
  • notification: notification system; delivers alert notifications to email, WeCom, and Slack
  • openpitrix: app store; a Helm-based application store that lets users manage the full application lifecycle
  • servicemesh: service mesh (Istio-based); supports canary releases, traffic topology, traffic management, and tracing

Chapter 5: KubeSphere in Practice

5.1 Multi-tenant system in practice

5.1.1 Create a workspace administrator

Log in to KubeSphere as the platform administrator platform-admin. Click "Platform" -> "Access Control" -> "Users" -> "Create":

5.1.2 Create workspace members

Log in as the platform administrator platform-admin. Click "Platform" -> "Access Control" -> "Users" -> "Create" to create several users:

5.1.3 Create a workspace

Log in as the workspace administrator boss and create a workspace:

5.1.4 Invite members into the workspace

Log in as the workspace administrator boss, open the created workspace, then go to "Workspace Settings" -> "Workspace Members" -> "Invite" and invite the following members:

5.1.5 Create a project and invite members

Log in as the project director pm-he and click "Projects" -> "Create":

Open one of the created projects, then click "Project Settings" -> "Project Members" -> "Invite":

5.2 Add an application repository

Log in as the Shenzhen branch administrator shenzhen-boss and click "App Management" -> "App Repositories" -> "Add":

Wait for the application repository to finish syncing.

5.3 Deploy middleware

Middleware, in-cluster address, and external address:
  • MySQL: his-mysql.his:3306 / 192.168.234.141:32029
  • Redis: his-redis.his:6379 / 192.168.234.141:31325
  • RabbitMQ: his-rabbitmq.his:5672 / 192.168.234.141:30764
  • Sentinel: his-sentinel.his:8080 / 192.168.234.141:31402
  • MongoDB: his-mongodb.his:27017 / 192.168.234.141:30345
  • ElasticSearch: his-es.his:9200 / 192.168.234.141:31402
5.3.1 MySQL
  1. Create the configuration file

Open the project, then click "Configuration" -> "ConfigMaps" -> "Create":

  • Set "Name" to "mysql-conf"
  • Under "Add Data", set the key to "my.cnf" and the value to
[client]
default-character-set=utf8mb4
 
[mysql]
default-character-set=utf8mb4
 
[mysqld]
init_connect='SET collation_connection = utf8mb4_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
lower_case_table_names=1
  2. Create the StatefulSet

Click "Application Workloads" -> "Workloads" -> "StatefulSets" -> "Create":

  3. Create both a ClusterIP and a NodePort service
  • ClusterIP:

  • NodePort:

5.3.2 Redis
  1. Create the configuration file

Click "Configuration" -> "ConfigMaps" -> "Create":

  • Set "Name" to "redis-conf"
  • Under "Add Data", set the key to "redis.conf" and the value to
appendonly yes
port 6379
bind 0.0.0.0

  2. Create the StatefulSet

Click "Application Workloads" -> "Workloads" -> "StatefulSets" -> "Create":

  3. Create both a ClusterIP and a NodePort service, in the same way as for MySQL
5.3.3 Elasticsearch
  1. Create the configuration file

Open the project, then click "Configuration" -> "ConfigMaps" -> "Create":

  • Set "Name" to "es-conf"
  • Under "Add Data", set the key to "elasticsearch.yml" and the value to
cluster.name: "docker-cluster"
network.host: 0.0.0.0
  • Under "Add Data", set the key to "jvm.options" and the value to
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

14-:-XX:+UseG1GC

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
  2. Create the StatefulSet

Click "Application Workloads" -> "Workloads" -> "StatefulSets" -> "Create":

  3. Create both a ClusterIP and a NodePort service
  • ClusterIP:

  • NodePort:

5.3.4 RabbitMQ

Click "App Store" in the top-left corner -> choose "RabbitMQ" -> "Install" -> "Create":

5.3.5 MongoDB

Click "Application Workloads" -> "Apps" -> "Create" -> "From App Template" -> choose the "bitnami" repository -> search for "mongo" -> choose "mongodb" -> "Install" -> set "Name" to "mongodb" -> "Next" -> untick "Enable Authentication" -> "Install".

Create the ClusterIP and NodePort services for mongodb in the same way as above.
