Helm
1 Introduction to Helm
1.1 Why Helm?
To install a Kubernetes application by hand, we have to write separate Deployment and Service YAML files for each of its components. To deploy the same application to another cluster, we really only need to copy that same set of YAML files over, perhaps tweaking a few parameters. So why not package an application's YAML files in a repository for anyone to download and use? That is the original idea behind Helm.
Helm of course bundles many other features as well, such as one-command upgrades and rollbacks, and dynamically parameterized YAML files; we will get to these later.
Before actually using Helm, there are a few Helm-specific concepts to get familiar with.
Chart
A Chart is a set of files laid out in a prescribed directory structure that describes the Kubernetes resources of the application being deployed. Each time Helm installs a Chart, one instance of the application is deployed to the cluster; install the same Chart several times and the same application is deployed several times.
Usually you download an official Chart, tweak a few parameters, and use it. You can also follow the steps in the official documentation to create your own Chart. We will demonstrate both approaches in the hands-on sections below.
Release
If a Helm Chart is analogous to a Docker image, then a Release is analogous to a Docker container. One Chart can produce multiple Releases.
Repo
A Repo is a repository that stores Charts. Just as Docker has Docker Hub, Helm has its own Helm Hub, and you can of course run your own private repository as well.
- Before Helm, deploying an application to Kubernetes meant applying the Deployment, Service, ConfigMap, and so on one by one, which is tedious. And as many projects move to microservices, deploying and managing complex containerized applications gets even harder.
- By packaging applications, Helm supports versioned, controlled releases and greatly simplifies deploying and managing Kubernetes applications.
- In essence, Helm makes Kubernetes application definitions (Deployment, Service, etc.) configurable and dynamically generated: it renders the Kubernetes resource manifests (deployment.yaml, service.yaml) from templates and then applies them to the cluster automatically.
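The render-then-apply idea can be sketched with plain shell substitution, as a toy stand-in for Helm's Go templating (the template file and placeholder name here are made up for illustration):

```shell
# A tiny manifest "template" with a placeholder value.
cat > deployment.tpl <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: __REPLICAS__
EOF

REPLICAS=3

# "Render" the template by substituting the value, like `helm template` does.
sed "s/__REPLICAS__/${REPLICAS}/" deployment.tpl > deployment.yaml

grep 'replicas: 3' deployment.yaml
# In real life the next step would be: kubectl apply -f deployment.yaml
```

Helm does the same thing with a full template language (loops, defaults, helpers) and tracks what it applied as a release.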
1.2 Key Helm concepts
- Helm is a package manager similar in spirit to YUM: it encapsulates the deployment workflow. Helm has two important concepts: Chart and Release.
1) Chart: a Helm package, containing the images, dependencies, and resource definitions needed to run an application, possibly including service definitions for the Kubernetes cluster (think of it as a Docker image).
2) Release: one running instance of a Chart in a Kubernetes cluster. The same Chart can be installed many times in one cluster, and every install creates a new release (think of it as a Docker container).
- Helm 2 consisted of two components, the Helm client and the Tiller server, as shown in the figure below.
- The Helm client creates and manages charts and releases and talks to Tiller. The Tiller server runs inside the Kubernetes cluster, handles requests from the Helm client, and interacts with the Kubernetes API server. (Helm 3 removed Tiller; see below.)
1.3 Installing Helm
https://github.com/helm/helm/releases
1. Installing Helm is very simple: download the helm CLI binary to the master node.
2. Releases are on GitHub: https://github.com/helm/helm/releases/tag/v3.2.4
3. We install version 3.2.4 here; after installing, check it with helm version.
4. Alternatively, download the Linux amd64 tarball directly:
# If you don't need a different version, download directly
wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
# Unpack
tar -zxvf helm-v3.2.4-linux-amd64.tar.gz
# Enter the unpacked directory
cd linux-amd64/
cp helm /usr/local/bin/
# Make it executable
chmod a+x /usr/local/bin/helm
# Check the version
helm version
- Kubernetes 1.6 enabled RBAC, which made permission control straightforward, so Helm no longer needed to duplicate what Kubernetes already does. Helm 3 therefore removed Tiller entirely. Without Tiller, Helm's security model is also much simpler (using RBAC to control Tiller's permissions in production was very hard to manage). Helm 3 authenticates with your kubeconfig, so cluster administrators can grant exactly the level of permission each application needs, while everything else keeps working as before.
- Official hub: https://hub.helm.sh/ hosts official and community charts.
2 Custom charts
2.1 Chart directory structure
- The charts in the hub above were made by other people. How do we make our own?
- The directory structure looks like this:
.
├── Chart.yaml
├── templates
|   ├── deployment.yaml
|   └── service.yaml
└── values.yaml
- A basic custom chart's directory layout is roughly as above:
1) Chart.yaml: defines the chart's basic metadata, such as its name and version.
2) templates: this folder holds the YAML resource manifests the chart needs;
   the manifests support template/variable syntax.
3) values.yaml: defines variables that the manifests under templates/ can reference.
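To make the values/templates relationship concrete, here is a minimal sketch (the file names follow the layout above; the image values are illustrative, not from any real chart):

```yaml
# values.yaml - defines the variables
image:
  repository: nginx
  tag: "1.19"

# templates/deployment.yaml (fragment) - consumes them via Go template syntax:
#   image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
# Rendering (e.g. with `helm template .`) would produce:
#   image: nginx:1.19
```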
2.2 A custom chart example
# 1. Create a folder named demo to hold the chart
mkdir demo && cd demo && mkdir templates
# 2. Create Chart.yaml; under Helm 3 it must contain apiVersion, name and version
cat << EOF > Chart.yaml
apiVersion: v1
name: hello-world
version: 1.0.0
EOF
# 3. Create ./templates/deployment.yaml
# The template directory must be named templates. The image field below is
# hard-coded, but it could use template syntax to be filled in dynamically.
cat << EOF > ./templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: hub.qnhyn.com/library/myapp:v1
        ports:
        - containerPort: 80
          protocol: TCP
EOF
# 4. Create ./templates/service.yaml
cat << EOF > ./templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-world
EOF
# 5. (Optional) Create values.yaml, used for dynamic imports such as
#    image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
# The benefit: to reconfigure the chart you only edit values.yaml.
#cat << EOF > values.yaml
#image:
#  repository: lzw5399/tocgenerator
#  tag: '951'
#EOF
# Instantiate the chart as a release
# Syntax: helm install [RELEASE-NAME] [CHART-PATH]
helm install testname .
# List releases
helm ls
# Show history
helm history <RELEASE_NAME>
# Installed successfully!
kubectl get pod
2.3 Basic Helm operations
- Inspect releases
# List deployed releases
helm ls
# Show the status of a specific release
helm status <RELEASE_NAME>
# List releases that were uninstalled but kept their history
helm ls --uninstalled
- Install a release
# Install
helm install <RELEASE-NAME> <CHART-PATH>
# Override values on the command line
helm install --set image.tag=233 <RELEASE-NAME> <CHART-PATH>
- Upgrade a release
# Upgrade; flags are optional
helm upgrade [FLAG] <RELEASE> <CHART-PATH>
# Upgrade with values files (later files override earlier ones)
helm upgrade -f myvalues.yaml -f override.yaml <RELEASE-NAME> <CHART-PATH>
# Override values on the command line (for the same key, the last --set wins)
helm upgrade --set foo=bar --set foo=newbar redis ./redis
- Uninstall a release
# Remove a release
helm uninstall <RELEASE_NAME>
# Remove a release but keep its history
# List afterwards with:  helm ls --uninstalled
# Roll back with:        helm rollback <RELEASE> [REVISION]
helm uninstall <RELEASE_NAME> --keep-history
- Roll back a release
# Roll back; omit REVISION to return to the previous revision
helm rollback <RELEASE> [REVISION]
- Dry run: render and validate without actually installing, useful for finding bugs
helm install --dry-run <RELEASE-NAME> <CHART-PATH>
# Add repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
# Refresh the local chart index
helm repo update
# List configured repositories
helm repo list
# Remove a configured repository
helm repo remove stable
Configuring new sources (mirrors of the old stable repo, plus others):
helm repo add stable https://burdenbear.github.io/kube-charts-mirror/
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add google https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io
Example output of helm repo list:
NAME                  URL
k8s-dashboard         https://kubernetes.github.io/dashboard
kubernetes-dashboard  https://kubernetes.github.io/dashboard/
stable                https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
jetstack              https://charts.jetstack.io
bitnami               https://charts.bitnami.com/bitnami
# Search the Helm Hub
helm search hub airflow
# Search the configured repos
helm search repo kubernetes-dashboard
# Download a chart
helm pull stable/kubernetes-dashboard
3 Creating an HTTPS certificate secret
Upload your domain's nginx certificate to the server.
# Assume the certificate files aaa.key and bbb.crt are already at /usr/local/cert
cd /usr/local/cert
# Create the secret in the kube-system namespace.
# The dashboard will be created in this namespace later and depends on this
# secret, so we create it in advance.
kubectl create secret tls dashboard-tls --key aaa.key --cert bbb.crt -n kube-system
4 Deploying the dashboard
4.1 Pulling the dashboard chart
# Add the official dashboard repo from the Helm hub
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# Refresh the repo index
helm repo update
# Confirm the repo was added
helm repo list
# Find the dashboard chart
helm search repo kubernetes-dashboard
# Create a folder to hold the chart
mkdir dashboard-chart && cd dashboard-chart
# Pull the chart
helm pull kubernetes-dashboard/kubernetes-dashboard
# This downloads a tarball; unpack it
tar -zxvf kubernetes-dashboard-5.4.1.tgz
# Enter the unpacked folder
cd kubernetes-dashboard
4.2 Configuring the chart
Note: the new-values.yaml created below is a modified copy of the chart's values.yaml; if you need further customization, adapt values.yaml in the same way.
Create a file named new-values.yaml with the following content.
Note: replace host with your own domain, and secretName must match the secret created earlier.
image:
  repository: kubernetesui/dashboard
  tag: v2.0.3
  pullPolicy: IfNotPresent
  pullSecrets: []
replicaCount: 1
annotations: {}
labels: {}
extraEnv: []
podAnnotations:
  seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
nodeSelector: {}
tolerations: []
affinity: {}
resources:
  requests:
    cpu: 100m
    memory: 200Mi
  limits:
    cpu: 2
    memory: 200Mi
protocolHttp: false
service:
  type: ClusterIP
  externalPort: 443
  annotations: {}
  labels: {}
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  paths:
  - /
  customPaths: []
  hosts:
  - xxx.xxx.com # your domain
  tls:
  # note: this name must match the secret created earlier
  - secretName: dashboard-tls
    hosts:
    - xxx.xxx.com # your domain
metricsScraper:
  enabled: false
  image:
    repository: kubernetesui/metrics-scraper
    tag: v1.0.4
  resources: {}
  containerSecurityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsUser: 1001
    runAsGroup: 2001
metrics-server:
  enabled: false
rbac:
  create: true
  clusterRoleMetrics: true
  clusterReadOnlyRole: false
serviceAccount:
  create: true
  name:
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 30
podDisruptionBudget:
  enabled: false
  minAvailable:
  maxUnavailable:
containerSecurityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsUser: 1001
  runAsGroup: 2001
networkPolicy:
  enabled: false
4.3 Installing the dashboard release
# Run from the directory containing new-values.yaml.
# In Helm 3 the release name is a positional argument and -n is short for
# --namespace, so name the release and pass the values file like this:
helm install kubernetes-dashboard . \
  --namespace kube-system \
  -f new-values.yaml
4.4 Granting the dashboard ServiceAccount permissions
Without this, the UI shows no resources at all after we log in, because the dashboard's default ServiceAccount has no permissions, so we need to grant it some.
For simplicity we directly bind the built-in cluster-admin ClusterRole to it here.
For details, see the dashboard access-control docs: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md
Create an rbac-config.yaml file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Apply the resource file:
kubectl apply -f rbac-config.yaml
Get the login token:
kubectl get secret -n kube-system | grep kubernetes-dashboard-token
kubectl describe secret kubernetes-dashboard-token-vgp9w -n kube-system
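`describe` prints the token in plain text; it can also be read from the secret's `data` field, where Kubernetes stores values base64-encoded. A minimal local sketch of the decoding step (the token string here is made up):

```shell
# Secret values under .data are base64-encoded; decoding is plain base64.
token='eyJhbGciOiJSUzI1NiJ9.demo'           # a made-up token value
encoded=$(printf '%s' "$token" | base64)    # how it would appear in .data.token
printf '%s' "$encoded" | base64 -d          # decode, recovering the token

# Against a real cluster, the equivalent one-liner is:
# kubectl get secret <token-secret-name> -n kube-system \
#   -o jsonpath='{.data.token}' | base64 -d
```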
5 Direct installation
5.1 Install
helm repo add k8s-dashboard https://kubernetes.github.io/dashboard
# --generate-name names the release automatically, so no release name is passed
helm install k8s-dashboard/kubernetes-dashboard --version 2.6.0 --namespace kube-system --generate-name
The output shows that this creates ClusterRole, ClusterRoleBinding, ConfigMap, Deployment, Pod (related), Role, RoleBinding, Secret, Service, and ServiceAccount resources in the kube-system namespace.
For details, see the chart page on the official hub: https://hub.helm.sh/charts/k8s-dashboard/kubernetes-dashboard
Configuration
Change the Service type to NodePort so it can be accessed externally:
kubectl get svc -n kube-system
kubectl edit svc kubernetes-dashboard-1652107044 -n kube-system
Access
https://192.168.66.22:30073
The login page asks whether to connect with a Token or a Kubeconfig.
We use a Token here; to find it:
kubectl get secret -n kube-system |grep dashboard
kubectl describe secret kubernetes-dashboard-1652107044-token-j4dr5 -n kube-system | grep token
5.2 RBAC binding
By default, however, the dashboard itself has no permission to access the whole cluster, so we first need a ClusterRoleBinding for the dashboard's ServiceAccount:
vim dashbindins.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-1
subjects:
- kind: ServiceAccount
  name: k8s-dashboard-kubernetes-dashboard # the dashboard's ServiceAccount
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
This grants the cluster-admin role to the ServiceAccount named k8s-dashboard-kubernetes-dashboard. cluster-admin is a built-in cluster role with full permissions over the entire cluster; if you have more specific needs, you can define your own ClusterRole instead.
Apply the binding:
kubectl create -f dashbindins.yaml
Once bound, refresh the dashboard page and you can see the whole cluster's resources.
Alternatively, run:
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
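If cluster-admin is too broad, a more restrictive binding is possible. As a sketch, this binds Kubernetes' built-in read-only `view` ClusterRole to the same ServiceAccount instead (SA name and namespace taken from the example above; `view` covers most namespaced resources but notably excludes Secrets):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-view-only
subjects:
- kind: ServiceAccount
  name: k8s-dashboard-kubernetes-dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: view   # built-in aggregate role: read-only access
  apiGroup: rbac.authorization.k8s.io
```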
5.3 Customizing parameters
The dashboard is served over HTTPS by default, and HTTPS requires a certificate. Installed directly via helm as above, it automatically uses the client certificate from your kubeconfig:
# client certificate
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
# client key
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
But if we want to use our own HTTPS certificate, we can pass variables when creating the dashboard.
First download the chart:
helm fetch k8s-dashboard/kubernetes-dashboard
tar zxvf kubernetes-dashboard-2.6.0.tgz
ls
charts Chart.yaml README.md requirements.lock requirements.yaml templates values.yaml
Create a values file:
vim dashboardvaluse.yaml
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64 # image repository
  tag: v1.10.1 # image version
ingress:
  enabled: true # enable ingress
  hosts:
  - k8s.vfancloud.com # your domain
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls: # the secret holding your certificate
  - secretName: repository-ssl
    hosts:
    - k8s.vfancloud.com
rbac:
  clusterAdminRole: true
Create the TLS secret: kubectl create secret tls repository-ssl --key server.key --cert server.crt
Then install from the unpacked chart directory, pointing -f at the values file:
helm install k8s-dashboard . --namespace kube-system -f dashboardvaluse.yaml
Reference: https://www.cnblogs.com/pinghengxing/p/14674060.html
5.4 dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
#spec:
#  ports:
#    - port: 443
#      targetPort: 8443
spec:
  type: NodePort # changed to NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001 # pinned nodePort
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
kubectl apply -f dashboard.yaml
Access https://<node-ip>:31001. As usual it cannot be opened in Chrome; only Firefox will open it.
The reason is that the default certificate shipped in the UI image is not trusted.
Reference: https://blog.csdn.net/aa18855953229/article/details/108046619?spm=1001.2014.3001.5502
5.5 Configuring an SSL certificate
Without a domain, self-sign a certificate:
(umask 077; openssl genrsa -out dashboard.key 2048)
openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=mango/CN=192.168.0.240"
openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.crt -days 365
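The signing command above assumes an existing CA (ca.crt/ca.key, e.g. your cluster's CA). For a fully self-contained run, a throwaway CA can be generated first; all subjects and file names here are illustrative:

```shell
# 1. Generate a throwaway CA key and self-signed CA certificate.
(umask 077; openssl genrsa -out ca.key 2048)
openssl req -new -x509 -key ca.key -out ca.crt -days 365 -subj "/CN=demo-ca"

# 2. Generate the dashboard key and a certificate signing request.
(umask 077; openssl genrsa -out dashboard.key 2048)
openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=mango/CN=192.168.0.240"

# 3. Sign the CSR with the CA, producing the serving certificate.
openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.crt -days 365

# 4. Verify the chain.
openssl verify -CAfile ca.crt dashboard.crt
```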
With a domain:
Search for an SSL certificate provider.
For example, FreeSSL.cn is a site offering free HTTPS certificate issuance.
Issuing process: register and log in, add the required TXT DNS record, then wait roughly 10 minutes for issuance.
Once issued, download the nginx bundle and unpack it to get the key and crt files.
Edit the kubernetes-dashboard.yml file.
Add the following (around line 200 of that file):
args:
  - --auto-generate-certificates
  - --namespace=kubernetes-dashboard
  # add the following two lines:
  - --tls-key-file=tls.key
  - --tls-cert-file=tls.crt
Re-apply the manifest: kubectl apply -f kubernetes-dashboard.yml
Create the secret
Replace the original kubernetes-dashboard-certs secret and restart the pod:
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
# note: the key names inside the secret must match the --tls-key-file/--tls-cert-file
# arguments above, e.g. --from-file=tls.key=kubernetes.sumengnan.com.key
kubectl create secret generic kubernetes-dashboard-certs --from-file=kubernetes.sumengnan.com.key --from-file=kubernetes.sumengnan.com.crt -n kubernetes-dashboard
kubectl get pod -n kubernetes-dashboard | grep kubernetes-dashboard
kubectl delete pod kubernetes-dashboard-576cb95f94-xl959 -n kubernetes-dashboard
Done.
1. NodePort's default port range is 30000-32767. What if you want a different range?
Solution: vim /etc/kubernetes/manifests/kube-apiserver.yaml
Add:
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=1-65535
The change takes effect immediately (kubelet restarts the static apiserver pod when its manifest changes).
helm install stable/mysql --generate-name
helm list
Note, though, that deploying a chart directly usually runs into all sorts of issues, so the common practice is to download the chart locally, edit it, and deploy from the local copy.
helm pull stable/mysql
tar -zxvf mysql-*.tgz
cd mysql
First remove the release we just deployed:
helm uninstall mysql-1589684111
Creating a local Chart
To create your own chart, just lay out the directory in the prescribed format and fill in the files.
File structure
The directory name is the chart name.
For example, create a folder called xiaofu:
[root@k8s-master helm]# mkdir xiaofu
[root@k8s-master helm]# cd xiaofu
Directory structure
A complete layout includes the following files and directories:
Chart.yaml # A YAML file containing information about the chart
LICENSE # OPTIONAL: A plain text file containing the license for the chart
README.md # OPTIONAL: A human-readable README file
values.yaml # The default configuration values for this chart
values.schema.json # OPTIONAL: A JSON Schema for imposing a structure on the values.yaml file
charts/ # A directory containing any charts upon which this chart depends.
crds/ # Custom Resource Definitions
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
Below we create a chart using just the required files; the optional fields are documented in the official docs.
Chart.yaml
This file holds the chart's basic information; the fields are:
apiVersion: The chart API version (required)
name: The name of the chart (required)
version: A SemVer 2 version (required)
kubeVersion: A SemVer range of compatible Kubernetes versions (optional)
description: A single-sentence description of this project (optional)
type: The type of chart (optional)
keywords:
  - A list of keywords about this project (optional)
home: The URL of this project's home page (optional)
sources:
  - A list of URLs to source code for this project (optional)
dependencies: # A list of the chart requirements (optional)
  - name: The name of the chart (nginx)
    version: The version of the chart ("1.2.3")
    repository: The repository URL ("https://example.com/charts") or alias ("@repo-name")
    condition: (optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled)
    tags: # (optional) Tags can be used to group charts for enabling/disabling together
    enabled: (optional) Enabled bool determines if chart should be loaded
    import-values: # (optional) ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
    alias: (optional) Alias to be used for the chart. Useful when you have to add the same chart multiple times
maintainers: # (optional)
  - name: The maintainer's name (required for each maintainer)
    email: The maintainer's email (optional for each maintainer)
    url: A URL for the maintainer (optional for each maintainer)
icon: A URL to an SVG or PNG image to be used as an icon (optional)
appVersion: The version of the app that this contains (optional). This needn't be SemVer.
deprecated: Whether this chart is deprecated (optional, boolean)
annotations:
  example: A list of annotations keyed by name (optional)
Only the first three are required:
apiVersion: v2 for charts that require Helm 3 and above, otherwise v1
name: should match the directory name
version: a version following the SemVer 2 standard, e.g. 1.0.0
Create a Chart.yaml file like this:
apiVersion: v1
name: xiaofu
version: 1.0.0
charts
This directory holds the charts this chart depends on; we have no dependencies, so it stays empty.
templates and values.yaml
The templates directory holds the YAML files used to create the k8s resources. To generate them dynamically, Go template syntax is used to read variable values from values.yaml into these YAML files.
Not knowing Go templates is no big deal; they work much like Jinja in Python.
I create a simple Deployment and a matching Service:
# note: extensions/v1beta1 Deployment was removed in Kubernetes 1.16;
# on newer clusters use apps/v1 and add a matching spec.selector
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mynginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: mynginx
        version: v2
    spec:
      containers:
      - name: mynginx
        image: {{ .Values.image }}:{{ .Values.imageTag }}
        ports:
        - containerPort: 80
apiVersion: v1
kind: Service
metadata:
  name: mynginx-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: mynginx
    version: v2
  ports:
  - name: http
    port: 8080
    targetPort: 80
    nodePort: 30000
Note how the Deployment reads variable values with the dot notation, then create values.yaml as follows:
image: mynginx
imageTag: v2
That's more or less the basic setup; let's try installing it:
[root@k8s-master helm]# helm install xiaofu --generate-name
NAME: xiaofu-1589688249
LAST DEPLOYED: Sun May 17 12:04:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Check the result:
[root@k8s-master helm]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mynginx-deployment-b66f59f66-kncc6 1/1 Running 0 44s
mynginx-deployment-b66f59f66-wrtz6 1/1 Running 0 44s
mynginx-deployment-b66f59f66-xbmhz 1/1 Running 0 44s
[root@k8s-master helm]# kubectl get svc
NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP  10.96.0.1        <none>        443/TCP          18d
mynginx-service   NodePort   10.111.111.217   <none>        8080:30000/TCP   48s
Then visiting port 30000 on any node of the k8s cluster returns:
this is mynginx v2
To scale out and update the image version, just modify the template:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mynginx-deployment
spec:
  replicas: {{ .Values.replica }}
  template:
    metadata:
      labels:
        app: mynginx
        version: v2
    spec:
      containers:
      - name: mynginx
        image: {{ .Values.image }}:{{ .Values.imageTag }}
        ports:
        - containerPort: 80
and values.yaml:
image: mynginx
imageTag: v1
replica: 5
Then upgrade the release:
[root@k8s-master xiaofu]# helm upgrade xiaofu-1589688249 .
Release "xiaofu-1589688249" has been upgraded. Happy Helming!
NAME: xiaofu-1589688249
LAST DEPLOYED: Sun May 17 14:08:28 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
Now there are more pods:
[root@k8s-master xiaofu]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mynginx-deployment-7f66686d6c-9slrh 1/1 Running 0 74s
mynginx-deployment-7f66686d6c-bmdc7 1/1 Running 0 72s
mynginx-deployment-7f66686d6c-lmhlr 1/1 Running 0 69s
mynginx-deployment-7f66686d6c-qcqcv 1/1 Running 0 72s
mynginx-deployment-7f66686d6c-rjww5 1/1 Running 0 74s
[root@k8s-master xiaofu]# kubectl get svc
NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP  10.96.0.1        <none>        443/TCP          18d
mynginx-service   NodePort   10.111.111.217   <none>        8080:30000/TCP   125m
Visiting port 30000 on a node now returns the v1 image's content.
View the change history:
[root@k8s-master xiaofu]# helm history xiaofu-1589688249
REVISION  UPDATED                   STATUS      CHART         APP VERSION  DESCRIPTION
1         Sun May 17 12:04:09 2020  superseded  xiaofu-1.0.0               Install complete
2         Sun May 17 14:08:28 2020  deployed    xiaofu-1.0.0               Upgrade complete
To roll back:
[root@k8s-master xiaofu]# helm rollback xiaofu-1589688249
Rollback was a success! Happy Helming!
Just like rolling back a Deployment, this is fast because the old pods' definitions are not fully deleted, only retired.
This was only a simple demo; for more features see the official documentation and the helm help output.
Summary of common Helm commands
A recap of the commands used above:
Command                                      Description
helm search hub xxx                          search for charts on the Helm Hub
helm search repo repo_name                   search for charts in locally configured repos
helm install release_name chart_reference    install a chart (5 kinds of chart reference)
helm list                                    list deployed releases
helm status release_name                     show release info
helm upgrade release_name chart_reference    upgrade a release after modifying the chart
helm history release_name                    show a release's revision history
helm rollback release_name revision          roll back a release
helm uninstall release_name                  uninstall a release