Docker Kubernetes Monitoring: HPA and Helm

1. HPA Example

Official docs: https://kubernetes.io/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

1.1 Single metric
1.1.1 Run the php-apache server
## 1. Pull the image
[root@server1 harbor]# docker pull mirrorgooglecontainers/hpa-example   ## pull the test image
[root@server1 harbor]# docker tag mirrorgooglecontainers/hpa-example reg.westos.org/library/hpa-example
[root@server1 harbor]# docker push reg.westos.org/library/hpa-example

## 2. Deploy
[root@server2 ~]# mkdir hpa
[root@server2 ~]# cd hpa/
[root@server2 hpa]# vim hpa-apache.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: hpa-example
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache

[root@server2 hpa]# kubectl apply -f hpa-apache.yaml 
deployment.apps/php-apache created
service/php-apache created
[root@server2 hpa]# kubectl  get svc
[root@server2 hpa]# kubectl  get pod
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-6cc67f7957-fqnp6   1/1     Running   0          60s
[root@server2 hpa]# kubectl describe svc php-apache
[root@server2 hpa]# curl 10.103.202.209
OK![root@server2 hpa]# 


1.1.2 Create the Horizontal Pod Autoscaler
  • With the php-apache server running, we create a Horizontal Pod Autoscaler via the kubectl autoscale command. The following command creates an HPA that controls the Deployment created in the previous step, keeping the number of Pod replicas between 1 and 10. Roughly speaking, the HPA will (through the Deployment) increase or decrease the number of Pod replicas to keep the average CPU utilization across all Pods at about 50% (since each Pod requests 200 millicores of CPU, this means an average usage of 100 millicores). It may take a few minutes for the autoscaler to complete a change in replica count. The HPA dynamically adjusts the number of Pods based on their CPU utilization.
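The scaling rule can be sketched numerically. Below is a hedged illustration of the documented formula desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), written with integer arithmetic; it is not the controller's actual code:

```shell
# Integer ceiling of (current * usage / target), mirroring the HPA formula.
hpa_desired() {
  current=$1; usage=$2; target=$3
  echo $(( (current * usage + target - 1) / target ))
}

# One pod running at 250% of its CPU request against a 50% target:
hpa_desired 1 250 50   # -> 5
```

So a single overloaded pod at five times the target utilization is scaled out to five replicas (capped by --max in practice).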
[root@server2 hpa]# kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10     ## create the autoscaler
horizontalpodautoscaler.autoscaling/php-apache autoscaled         
[root@server2 hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        0          7s
[root@server2 hpa]# kubectl top pod
NAME                          CPU(cores)   MEMORY(bytes)   
php-apache-6cc67f7957-fqnp6   1m           6Mi             
[root@server2 hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          32s


1.1.3 Increase the load
[root@server2 hpa]# kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"   ## generate load

[root@server2 ~]# kubectl get pod  
[root@server2 ~]# kubectl top pod    ## check pod resource usage
[root@server2 ~]# kubectl describe svc php-apache  ## inspect the service
[root@server2 ~]# kubectl get hpa php-apache    ## watch the HPA targets



1.2 Autoscaling on multiple metrics
[root@server2 hpa]# vim hpa-v2.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-example
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-example
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        averageUtilization: 60
        type: Utilization
  - type: Resource
    resource:
      name: memory
      target:
        averageValue: 50Mi
        type: AverageValue
[root@server2 hpa]# kubectl apply -f hpa-v2.yaml 
horizontalpodautoscaler.autoscaling/php-example created
[root@server2 hpa]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-6cc67f7957-j6qzk   1/1     Running   0          37s
[root@server2 hpa]# kubectl get hpa
NAME          REFERENCE                TARGETS                         MINPODS   MAXPODS   REPLICAS   AGE
php-example   Deployment/php-example   <unknown>/50Mi, <unknown>/60%   1         10        0          11s
[root@server2 hpa]# kubectl top pod
NAME                          CPU(cores)   MEMORY(bytes)   
php-apache-6cc67f7957-j6qzk   1m           5Mi  
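When several metrics are configured as in hpa-v2.yaml, the HPA computes a desired replica count for each metric independently and scales to the largest of them. A minimal sketch of that selection rule (not controller source code):

```shell
# Pick the maximum of the per-metric desired replica counts.
max_replicas() { printf '%s\n' "$@" | sort -n | tail -n 1; }

# CPU wants 3 replicas, memory wants 7, a third metric wants 2:
max_replicas 3 7 2   # -> 7
```

Taking the maximum means the most constrained metric wins, so no single metric is ever left above its target.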


2. Helm

2.1 Introduction
  • Helm is the package manager for Kubernetes applications. It is mainly used to manage Charts, much like yum on a Linux system.

  • A Helm Chart is a set of YAML files that packages a native Kubernetes application. It lets you customize application metadata at deployment time, which makes the application easier to distribute.

  • For publishers, Helm provides a way to package an application, manage its dependencies, manage versions, and publish it to a chart repository.

  • For users, Helm removes the need to write complex deployment files: applications can be found, installed, upgraded, rolled back, and uninstalled on Kubernetes in a simple way.

  • The biggest difference between Helm v3 and v2 is that Tiller has been removed.

2.2 Initial setup

Helm docs: https://helm.sh/docs/intro/

[root@server2 ~]# mkdir helm
[root@server2 ~]# cd helm/
[root@server2 helm]# ls
helm-v3.4.1-linux-amd64.tar.gz
[root@server2 helm]# tar zxf helm-v3.4.1-linux-amd64.tar.gz  
[root@server2 helm]# ll
total 13012
-rwxr-xr-x 1 root root 13323294 Mar  3 16:09 helm-v3.4.1-linux-amd64.tar.gz
drwxr-xr-x 2 3434 3434       50 Nov 12 03:52 linux-amd64
[root@server2 helm]# cd linux-amd64/
[root@server2 linux-amd64]# ls
helm  LICENSE  README.md
[root@server2 linux-amd64]# cp helm /usr/local/bin/   ## put the binary on PATH
[root@server2 linux-amd64]# helm env     ## show the helm environment
[root@server2 linux-amd64]# helm list     ## list installed releases
helm search hub nginx 

[root@server2 ~]# echo "source <(helm completion bash)" >> ~/.bashrc  ## enable bash completion
[root@server2 ~]# cat .bashrc      ## verify .bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi
source <(kubectl completion bash)
source <(helm completion bash)
[root@server2 ~]# source .bashrc    ## reload the shell config
[root@server2 ~]# helm search hub redis-ha   ## search the official Artifact Hub
[root@server2 ~]# helm repo add stable http://mirror.azure.cn/kubernetes/charts/  ## add a repo for testing
[root@server2 ~]# helm repo list
[root@server2 ~]# helm search repo nginx 
[root@server2 ~]# helm repo remove stable    ## remove it once done testing


2.3 Experiment
2.3.1 Install a web server
## 1. Prepare the image
[root@server1 ~]# docker pull bitnami/nginx:1.19.7-debian-10-r1
[root@server1 ~]# docker tag bitnami/nginx:1.19.7-debian-10-r1 reg.westos.org/bitnami/nginx:1.19.7-debian-10-r1
[root@server1 ~]# docker push reg.westos.org/bitnami/nginx:1.19.7-debian-10-r1

## 2. Configure
[root@server2 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami ## search for nginx on the Bitnami site, then copy the repo URL
[root@server2 ~]# helm search repo nginx    ## search for nginx
[root@server2 ~]# helm pull bitnami/nginx    ## pull the nginx chart from the repo
[root@server2 ~]# helm pull bitnami/nginx --version 8.7.0
[root@server2 ~]# tar zxf nginx-8.7.0.tgz     ## unpack
[root@server2 ~]# cd nginx/       ## enter the chart directory
[root@server2 nginx]# ls
Chart.lock  charts  Chart.yaml  ci  README.md  templates  values.schema.json  values.yaml
[root@server2 nginx]# ls templates/

## Edit values.yaml
[root@server2 nginx]# vim values.yaml 
   5 global:
   6    imageRegistry: reg.westos.org
  16   tag: 1.19.7-debian-10-r1
 332   #type: LoadBalancer
 333   type: ClusterIP
## Install and test
[root@server2 nginx]# helm install webserver .  ## install

[root@server2 nginx]# cd 
[root@server2 ~]# cd hpa/
[root@server2 hpa]# kubectl delete -f hpa-v2.yaml
[root@server2 hpa]# kubectl delete -f hpa-apache.yaml
[root@server2 hpa]# kubectl get all
[root@server2 hpa]# helm list
[root@server2 hpa]# helm list -n kube-system 
[root@server2 hpa]# helm list --all-namespaces 
[root@server2 hpa]# kubectl get pod
[root@server2 hpa]# kubectl get svc
[root@server2 hpa]# curl 10.106.162.13
[root@server2 hpa]# helm status webserver

Edit the config file:

[root@server2 nginx]# vim values.yaml
163 replicaCount: 3 
[root@server2 nginx]# helm upgrade webserver .
[root@server2 nginx]# kubectl get pod
[root@server2 nginx]# kubectl describe svc


2.3.2 Rollback
[root@server2 nginx]# helm history webserver 
REVISION	UPDATED                 	STATUS    	CHART      	APP VERSION	DESCRIPTION     
1       	Fri Mar 26 19:19:44 2021	superseded	nginx-8.7.0	1.19.7     	Install complete
2       	Fri Mar 26 19:38:50 2021	deployed  	nginx-8.7.0	1.19.7     	Upgrade complete
[root@server2 nginx]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
webserver-nginx-6b797bf86c-dwhlx   1/1     Running   0          4m2s
webserver-nginx-6b797bf86c-jk54p   1/1     Running   0          23m
webserver-nginx-6b797bf86c-v8vs7   1/1     Running   0          4m2s
[root@server2 nginx]# helm rollback webserver 1
[root@server2 nginx]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
webserver-nginx-6b797bf86c-dwhlx   1/1     Running   0          4m41s
[root@server2 nginx]# helm history webserver 
REVISION	UPDATED                 	STATUS    	CHART      	APP VERSION	DESCRIPTION     
1       	Fri Mar 26 19:19:44 2021	superseded	nginx-8.7.0	1.19.7     	Install complete
2       	Fri Mar 26 19:38:50 2021	superseded	nginx-8.7.0	1.19.7     	Upgrade complete
3       	Fri Mar 26 19:43:02 2021	deployed  	nginx-8.7.0	1.19.7     	Rollback to 1   


2.3.3 Uninstall
[root@server2 nginx]# helm uninstall webserver 
release "webserver" uninstalled
[root@server2 nginx]# kubectl get pod
No resources found in default namespace.
[root@server2 nginx]# kubectl get all
[root@server2 nginx]# helm list
NAME	NAMESPACE	REVISION	UPDATED	STATUS	CHART	APP VERSION


2.4 Build a Helm Chart
2.4.1 Create a chart
[root@server2 ~]# cd helm/
[root@server2 helm]# helm create mychart   ## create the chart scaffold
[root@server2 helm]# tree mychart/    ## inspect the layout
mychart/
├── charts
├── Chart.yaml  # chart metadata for mychart
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml    # application deployment values
[root@server2 helm]# cd mychart/
[root@server2 mychart]# ls
charts  Chart.yaml  templates  values.yaml
[root@server2 mychart]# vim Chart.yaml 
appVersion: v1
[root@server2 mychart]# vim values.yaml
  8   repository: myapp
 11   tag: "v1"
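These values flow into the templates through Go templating. For reference, the chart scaffold generated by `helm create` builds the container image reference roughly like this (abridged excerpt; exact wording may differ slightly between Helm versions):

```yaml
# templates/deployment.yaml (excerpt): the tag falls back to Chart.appVersion,
# which is why setting appVersion: v1 and tag: "v1" above both work.
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
```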

[root@server2 mychart]# helm lint .   ## check that dependencies and templates are correct
[root@server2 helm]# helm package mychart/    ## package the chart
[root@server2 helm]# cd /etc/docker/
[root@server2 docker]# ls
ca.pem  certs.d  daemon.json  key.json  plugins  server-key.pem  server.pem
[root@server2 docker]# cd certs.d/reg.westos.org/
[root@server2 reg.westos.org]# ls
ca.crt    
[root@server2 reg.westos.org]# cp ca.crt /etc/pki/ca-trust/source/anchors/  ## trust the registry certificate
[root@server2 reg.westos.org]# update-ca-trust
[root@server2 reg.westos.org]# helm repo add mychart https://reg.westos.org/chartrepo/charts
"mychart" has been added to your repositories
[root@server2 reg.westos.org]# helm repo list  ## list repos
NAME   	URL                                    
bitnami	https://charts.bitnami.com/bitnami     
mychart	https://reg.westos.org/chartrepo/charts
[root@server2 reg.westos.org]# cd /root/helm/


2.4.2 Create a new project in the Harbor registry


2.4.3 Install the push plugin and upload
- Install the helm-push plugin:
   $ helm plugin install https://github.com/chartmuseum/helm-push	// online install
  Offline install:
	$ helm env	// find the plugin directory
	$ mkdir ~/.local/share/helm/plugins/push
	$ tar zxf helm-push_0.8.1_linux_amd64.tar.gz -C ~/.local/share/helm/plugins/push
	$ helm push --help
	
	$ helm  repo list          ## list repos
	mychart  	http://172.25.0.11:30002/chartrepo/charts 

	$ helm push  mychart-0.1.0.tgz mychart -u admin -p Harbor12345    ## push the chart
	Pushing mychart-0.1.0.tgz to mychart...
	Done.
	$ helm  repo  update 		       ## update the repo index
	$ helm  search repo mychart   ## search
[root@server2 helm]# ls
helm-push_0.9.0_linux_amd64.tar.gz  helm-v3.4.1-linux-amd64.tar.gz  linux-amd64  mychart  mychart-0.1.0.tgz
[root@server2 helm]# helm env       ## inspect the environment to locate the plugin directory
HELM_BIN="helm"
HELM_CACHE_HOME="/root/.cache/helm"
HELM_CONFIG_HOME="/root/.config/helm"
HELM_DATA_HOME="/root/.local/share/helm"
HELM_DEBUG="false"
HELM_KUBEAPISERVER=""
HELM_KUBEASGROUPS=""
HELM_KUBEASUSER=""
HELM_KUBECONTEXT=""
HELM_KUBETOKEN=""
HELM_MAX_HISTORY="10"
HELM_NAMESPACE="default"
HELM_PLUGINS="/root/.local/share/helm/plugins"    ## the plugin directory must be created manually
HELM_REGISTRY_CONFIG="/root/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/root/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/root/.config/helm/repositories.yaml"

[root@server2 helm]# mkdir -p /root/.local/share/helm/plugins  ## create the plugins directory
[root@server2 helm]# mkdir -p /root/.local/share/helm/plugins/push  ## create the push directory
[root@server2 helm]# cd /root/.local/share/helm/plugins/push
[root@server2 push]# cd /root/helm/
[root@server2 helm]# tar zxf helm-push_0.9.0_linux_amd64.tar.gz -C /root/.local/share/helm/plugins/push  ## unpack into the push directory
[root@server2 helm]# cd /root/.local/share/helm/plugins/push
[root@server2 push]# ls

[root@server2 ~]# cd helm/
[root@server2 helm]# helm repo list  ## list repos
NAME   	URL                                    
bitnami	https://charts.bitnami.com/bitnami     
mychart	https://reg.westos.org/chartrepo/charts
[root@server2 helm]# ls
[root@server2 helm]# helm push mychart-0.1.0.tgz mychart --insecure -u admin -p westos
Pushing mychart-0.1.0.tgz to mychart...
Done.
[root@server2 helm]# helm repo update     ## update the repo index
[root@server2 helm]# helm search repo mychart    ## search the repo
[root@server2 helm]# helm install webserver mychart/mychart 

[root@server2 helm]# helm list
NAME     	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART        	APP VERSION
webserver	default  	1       	2021-03-26 20:22:41.221285036 +0800 CST	deployed	mychart-0.1.0	v1      
[root@server2 helm]# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
webserver-mychart-7bbdf7d75f-qqnbv   1/1     Running   0          22s
[root@server2 helm]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
webserver-mychart-7bbdf7d75f-qqnbv   1/1     Running   0          35s   10.244.141.239   server3   <none>           <none>
[root@server2 helm]# curl 10.244.141.239
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>


2.4.4 Upgrade
[root@server2 helm]# cd mychart/
[root@server2 mychart]# vim values.yaml  ## set the image tag to v2
 11   tag: "v2"
[root@server2 mychart]# vim Chart.yaml
version: 0.2.0
appVersion: v2
[root@server2 mychart]# cd ..
[root@server2 helm]# helm package mychart
[root@server2 helm]# helm push mychart-0.2.0.tgz mychart --insecure -u admin -p westos
[root@server2 helm]# helm repo update
[root@server2 helm]# helm search repo mychart
NAME           	CHART VERSION	APP VERSION	DESCRIPTION                
mychart/mychart	0.2.0        	v2         	A Helm chart for Kubernetes
[root@server2 helm]# helm search repo mychart -l 
NAME           	CHART VERSION	APP VERSION	DESCRIPTION                
mychart/mychart	0.2.0        	v2         	A Helm chart for Kubernetes
mychart/mychart	0.1.0        	v1         	A Helm chart for Kubernetes
[root@server2 helm]# helm upgrade webserver mychart/mychart   ## upgrade the release
[root@server2 helm]# kubectl get pod
NAME                                 READY   STATUS        RESTARTS   AGE
webserver-mychart-665b959cb-zqfpp    1/1     Running       0          19s
webserver-mychart-7bbdf7d75f-qqnbv   0/1     Terminating   0          12m
[root@server2 helm]# kubectl get pod -o wide   ## check the assigned IP
NAME                                 READY   STATUS        RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
webserver-mychart-665b959cb-zqfpp    1/1     Running       0          28s   10.244.141.240   server3   <none>           <none>
webserver-mychart-7bbdf7d75f-qqnbv   0/1     Terminating   0          12m   <none>           server3   <none>           <none>
[root@server2 helm]# curl 10.244.141.240
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@server2 helm]# helm history webserver 
REVISION	UPDATED                 	STATUS    	CHART        	APP VERSION	DESCRIPTION     
1       	Fri Mar 26 20:22:41 2021	superseded	mychart-0.1.0	v1         	Install complete
2       	Fri Mar 26 20:34:38 2021	deployed  	mychart-0.2.0	v2         	Upgrade complete
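The manual edits to values.yaml and Chart.yaml in this upgrade can also be scripted. A minimal sketch with sed, assuming the fields sit at the start of their lines as in a default `helm create` chart (a throwaway file is used here for illustration):

```shell
# Create a throwaway Chart.yaml and bump version/appVersion with sed.
chart=$(mktemp)
printf 'version: 0.1.0\nappVersion: v1\n' > "$chart"
sed -i -e 's/^version: .*/version: 0.2.0/' \
       -e 's/^appVersion: .*/appVersion: v2/' "$chart"
cat "$chart"
# version: 0.2.0
# appVersion: v2
```

After the bump, `helm package` produces mychart-0.2.0.tgz, matching the transcript above.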


2.4.5 Rollback
[root@server2 helm]# helm rollback webserver 1
Rollback was a success! Happy Helming!
[root@server2 helm]# helm history webserver 
REVISION	UPDATED                 	STATUS    	CHART        	APP VERSION	DESCRIPTION     
1       	Fri Mar 26 20:22:41 2021	superseded	mychart-0.1.0	v1         	Install complete
2       	Fri Mar 26 20:34:38 2021	superseded	mychart-0.2.0	v2         	Upgrade complete
3       	Fri Mar 26 20:36:18 2021	deployed  	mychart-0.1.0	v1         	Rollback to 1   
[root@server2 helm]# kubectl get pod -o wide
NAME                                 READY   STATUS        RESTARTS   AGE    IP               NODE      NOMINATED NODE   READINESS GATES
webserver-mychart-665b959cb-zqfpp    0/1     Terminating   0          108s   10.244.141.240   server3   <none>           <none>
webserver-mychart-7bbdf7d75f-rt2rt   1/1     Running       0          10s    10.244.141.241   server3   <none>           <none>
[root@server2 helm]# curl 10.244.141.241
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server2 helm]# helm uninstall webserver    ## uninstall
release "webserver" uninstalled
[root@server2 helm]# 


2.5 Deploy nfs-client-provisioner with Helm
[root@foundation50 Desktop]# vim /etc/exports
[root@foundation50 Desktop]# cat /etc/exports
/nfsdata     *(rw,no_root_squash)
[root@foundation50 Desktop]# mkdir /nfsdata
[root@foundation50 Desktop]# cd /nfsdata/
[root@foundation50 nfsdata]# cd
[root@foundation50 ~]# chmod 777 -R /nfsdata
[root@foundation50 ~]# systemctl start nfs-server
[root@foundation50 ~]# showmount -e
Export list for foundation50.ilt.example.com:
/nfsdata *

[root@server3 ~]# yum install -y nfs-utils
[root@server4 ~]# yum install -y nfs-utils
[root@server3 ~]# showmount -e 192.168.0.100
Export list for 192.168.0.100:
/nfsdata *
[root@server4 ~]# showmount -e 192.168.0.100
Export list for 192.168.0.100:
/nfsdata *


[root@server2 ~]# helm search  hub nfs
[root@server2 ~]# helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
"nfs-subdir-external-provisioner" has been added to your repositories
[root@server2 ~]# helm repo list
NAME                           	URL                                                               
bitnami                        	https://charts.bitnami.com/bitnami                                
mychart                        	https://reg.westos.org/chartrepo/charts                           
nfs-subdir-external-provisioner	https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
[root@server2 ~]# helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner 
[root@server2 ~]# mv nfs-subdir-external-provisioner-4.0.6.tgz helm/
[root@server2 ~]# cd helm/
[root@server2 helm]# tar zxf nfs-subdir-external-provisioner-4.0.6.tgz
[root@server2 helm]# cd nfs-subdir-external-provisioner/
[root@server2 nfs-subdir-external-provisioner]# ls
Chart.yaml  ci  README.md  templates  values.yaml
[root@server2 nfs-subdir-external-provisioner]# vim values.yaml
 5   repository: reg.westos.org/library/nfs-subdir-external-provisioner
  6   tag: v4.0.0
 10   server: 192.168.0.100
 11   path: /nfsdata
 23   defaultClass: true
[root@server2 nfs-subdir-external-provisioner]# kubectl create namespace nfs-provisioner
namespace/nfs-provisioner created
[root@server2 nfs-subdir-external-provisioner]# helm install nfs-subdir-external-provisioner . -n nfs-provisioner
[root@server2 nfs-subdir-external-provisioner]# kubectl get sc
[root@server2 nfs-subdir-external-provisioner]# kubectl get all -n nfs-provisioner 


[root@server2 helm]# vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  #storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
[root@server2 helm]# kubectl apply -f pvc.yaml
persistentvolumeclaim/test-claim created
[root@server2 helm]# kubectl get pv
[root@server2 helm]# kubectl get pvc
[root@server2 helm]# kubectl delete -f pvc.yaml --force
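Before the claim is deleted, the provisioned volume can be exercised with a pod that mounts it. The manifest below is a hypothetical sketch: the claim name matches pvc.yaml above, while the pod and volume names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: busybox
    # Write a marker file onto the NFS-backed volume, then idle.
    command: ["sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
```

If provisioning works, the SUCCESS file appears under the dynamically created subdirectory of /nfsdata on the NFS server.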


2.6 Deploy nginx-ingress with Helm
[root@server2 metallb]# kubectl -n metallb-system get secrets
NAME                     TYPE                                  DATA   AGE
memberlist               Opaque                                1      22d

[root@server2 ~]# cd metallb/
[root@server2 metallb]# kubectl edit configmap -n kube-system kube-proxy
Edit cancelled, no changes made.
     37       strictARP: true
     44     mode: "ipvs"
[root@server2 metallb]# kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
[root@server2 metallb]# kubectl -n kube-system get pod
[root@server2 metallb]# ipvsadm -ln
[root@server2 metallb]# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
[root@server2 metallb]# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
[root@server2 metallb]# kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
[root@server2 metallb]# ls
config.yaml  demo-svc.yaml  metallb.yaml  namespace.yaml  nginx-svc.yml
[root@server2 metallb]# vim metallb.yaml 
[root@server2 metallb]# kubectl apply -f metallb.yaml
[root@server2 metallb]# kubectl -n metallb-system get secrets
[root@server2 metallb]# kubectl -n metallb-system get all
[root@server2 metallb]# vim config.yaml 
[root@server2 metallb]# kubectl apply -f config.yaml 
configmap/config created
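The contents of config.yaml are not shown above. A typical MetalLB v0.9 layer-2 address pool for this environment might look like the following; the address range is an assumption based on the EXTERNAL-IP 192.168.0.211 that appears later in this section:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    # Layer-2 pool; LoadBalancer services get addresses from this range.
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.211-192.168.0.220
```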
[root@server2 metallb]# cd 


### Pull the images
[root@server1 harbor]# docker pull bitnami/nginx:1.19.8-debian-10-r6
[root@server1 harbor]# docker pull bitnami/nginx-ingress-controller:0.44.0-debian-10-r28
[root@server1 harbor]# docker tag bitnami/nginx-ingress-controller:0.44.0-debian-10-r28 reg.westos.org/bitnami/nginx-ingress-controller:0.44.0-debian-10-r28
[root@server1 harbor]# docker tag bitnami/nginx:1.19.8-debian-10-r6 reg.westos.org/bitnami/nginx:1.19.8-debian-10-r6
[root@server1 harbor]# docker push reg.westos.org/bitnami/nginx-ingress-controller:0.44.0-debian-10-r28
[root@server1 harbor]# docker push reg.westos.org/bitnami/nginx:1.19.8-debian-10-r6

[root@server2 ~]# yum install -y ipvsadm
[root@server3 ~]# yum install -y ipvsadm
[root@server4 ~]# yum install -y ipvsadm

[root@server2 ~]# cd helm/
[root@server2 helm]# helm search repo nginx-ingress
NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                           
bitnami/nginx-ingress-controller	7.5.0        	0.44.0     	Chart for the nginx Ingress controller
[root@server2 helm]# helm pull bitnami/nginx-ingress-controller
[root@server2 helm]# ls
helm-push_0.9.0_linux_amd64.tar.gz  mychart            nfs-subdir-external-provisioner
helm-v3.4.1-linux-amd64.tar.gz      mychart-0.1.0.tgz  nfs-subdir-external-provisioner-4.0.6.tgz
linux-amd64                         mychart-0.2.0.tgz  nginx-ingress-controller-7.5.0.tgz
[root@server2 helm]# tar zxf nginx-ingress-controller-7.5.0.tgz
[root@server2 helm]# cd nginx-ingress-controller/
[root@server2 nginx-ingress-controller]# ls
Chart.lock  charts  Chart.yaml  ci  README.md  templates  values.yaml
[root@server2 nginx-ingress-controller]# vim values.yaml 
  5 global:
  6    imageRegistry: reg.westos.org
  
[root@server2 nginx-ingress-controller]# helm install nginx-ingress-controller . -n nginx-ingress-controller
[root@server2 nginx-ingress-controller]# kubectl -n nginx-ingress-controller get all


2.7 A web UI for managing Helm
- Deploy the kubeapps application to provide a web UI for managing Helm:
	$ helm repo add bitnami https://charts.bitnami.com/bitnami
	$ helm pull bitnami/kubeapps
	$ vim values.yaml
		global:
		  imageRegistry: reg.westos.org		
		useHelm3: true
		ingress:
		  enabled: true
		  hostname: kubeapps.westos.org
	
	$ kubectl create namespace kubeapps
	$ helm install kubeapps -n kubeapps .
	
	$ kubectl create serviceaccount kubeapps-operator -n kubeapps
	$ kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=kubeapps:kubeapps-operator
2.7.1 Pull the images
[root@server1 harbor]# docker pull bitnami/kubeapps-apprepository-controller:2.2.1-scratch-r0
[root@server1 harbor]# docker pull bitnami/kubectl:1.18.16-debian-10-r0
[root@server1 harbor]# docker pull bitnami/kubeapps-kubeops:2.2.1-scratch-r0
[root@server1 harbor]# docker pull bitnami/kubeapps-assetsvc:2.2.1-scratch-r0
[root@server1 harbor]# docker pull bitnami/kubeapps-dashboard:2.2.1-debian-10-r0
[root@server1 harbor]# docker pull bitnami/oauth2-proxy:7.0.1-debian-10-r5
[root@server1 harbor]# docker pull kubeapps/pinniped-proxy:latest
[root@server1 harbor]# docker pull bitnami/nginx:1.19.7-debian-10-r1
[root@server1 harbor]# docker pull bitnami/kubeapps-asset-syncer:2.2.1-scratch-r0
[root@server1 harbor]# docker pull bitnami/postgresql:11.11.0-debian-10-r0


2.7.2 Edit the values file
[root@server2 helm]# helm pull bitnami/kubeapps --version 5.2.2
[root@server2 helm]# tar zxf kubeapps-5.2.2.tgz
[root@server2 helm]# cd kubeapps/
[root@server2 kubeapps]# ls
Chart.lock  charts  Chart.yaml  crds  README.md  templates  values.schema.json  values.yaml
[root@server2 kubeapps]# vim values.yaml 
  5 global:
  6    imageRegistry: reg.westos.org
 49   enabled: true
 57   hostname: kubeapps.westos.org   ## this hostname must resolve to the ingress IP

[root@server2 kubeapps]# cd charts/postgresql/
[root@server2 postgresql]# ls
Chart.lock  charts  Chart.yaml  ci  files  README.md  templates  values.schema.json  values.yaml
[root@server2 postgresql]# vim values.yaml 
image:
  registry: docker.io
  repository: bitnami/postgresql
  tag: 11.11.0-debian-10-r0

[root@server2 kubeapps]# kubectl create namespace kubeapps
namespace/kubeapps created
[root@server2 kubeapps]# helm install kubeapps -n kubeapps .
[root@server2 kubeapps]# kubectl get pod -n kubeapps 


2.7.3 Configure name resolution on the host machine
[root@server2 kubeapps]# kubectl get ingress -n kubeapps
NAME       CLASS    HOSTS                 ADDRESS       PORTS   AGE
kubeapps   <none>   kubeapps.westos.org   192.168.0.3   80      45s
[root@server2 kubeapps]# kubectl describe ingress -n kubeapps

[root@server2 kubeapps]# kubectl -n nginx-ingress-controller get svc
NAME                                       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller                   LoadBalancer   10.98.204.47    192.168.0.211   80:31866/TCP,443:31391/TCP   2d13h
nginx-ingress-controller-default-backend   ClusterIP      10.102.34.108   <none>          80/TCP                       2d13h
[root@foundation50 Desktop]# vim /etc/hosts
[root@foundation50 Desktop]# cat /etc/hosts
192.168.0.211  kubeapps.westos.org


2.7.4 Test in the browser
[root@server2 kubeapps]# kubectl create serviceaccount kubeapps-operator -n kubeapps
serviceaccount/kubeapps-operator created
[root@server2 kubeapps]# kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=kubeapps:kubeapps-operator
clusterrolebinding.rbac.authorization.k8s.io/kubeapps-operator created
[root@server2 kubeapps]# kubectl -n kubeapps get secrets
[root@server2 kubeapps]# kubectl -n kubeapps describe secrets kubeapps-operator-token-ktxj5
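`kubectl describe secrets` prints the login token already decoded; when the secret is read with `-o yaml` instead, the token appears base64-encoded in the data field. A self-contained decoding sketch (sample string only; the real secret name, e.g. kubeapps-operator-token-ktxj5, varies per cluster):

```shell
# Round-trip a sample value through base64, the encoding used in Secret data.
encoded=$(printf 'sample-token' | base64)
echo "$encoded"
printf '%s' "$encoded" | base64 --decode   # -> sample-token
```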


[root@server2 kubeapps]# cd /etc/docker/certs.d/reg.westos.org/
[root@server2 reg.westos.org]# ls
ca.crt
[root@server2 reg.westos.org]# cat ca.crt ## view the certificate

[root@server2 reg.westos.org]# kubectl -n kube-system edit cm coredns
configmap/coredns edited
        hosts {
           192.168.0.1 reg.westos.org
           fallthrough
        }
[root@server2 reg.westos.org]# kubectl get pod -n kube-system |grep coredns | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Added successfully.
