1. Create an NFS dynamic storage volume
Create the NFS server
NFS allows a system to share its directories and files with other systems over a network. With NFS, users and applications can access files on remote systems as if they were local files.
Install nfs-utils
- Install nfs-utils on every node of the cluster:
sudo yum install -y nfs-utils
Configure the nfs-server
- Create the shared directory:
mkdir -p /u01/prod
- Edit the /etc/exports file and add the directories to be shared, one entry per line, in the following format: <NFS shared directory path> <client IP range>(option1,option2,...,optionN)
- For example:
/u01 192.168.1.1/16(rw,sync,insecure,no_subtree_check,no_root_squash)
Export options:
- ro: read-only access
- rw: read-write access
- sync: all data is written to the share at the time of the request
- async: NFS may respond to requests before the data has been written
- secure: NFS sends over secure TCP/IP ports below 1024
- insecure: NFS may send over ports above 1024
- wdelay: group writes together when multiple users write to the NFS directory (default)
- no_wdelay: write immediately when multiple users write to the NFS directory; not needed when async is used
- hide: do not share subdirectories of the NFS share
- no_hide: share subdirectories of the NFS share
- subtree_check: when sharing a subdirectory such as /usr/bin, force NFS to check the parent directory's permissions (default)
- no_subtree_check: do not check parent directory permissions
- all_squash: map the UID and GID of shared files to the anonymous user; suitable for public directories
- no_all_squash: preserve the UID and GID of shared files (default)
- root_squash: map all requests from root to the same privileges as the anonymous user (default)
- no_root_squash: the root user has full administrative access to the share
- anonuid=xxx: specify the UID of the anonymous user in the NFS server's /etc/passwd
- anongid=xxx: specify the GID of the anonymous user in the NFS server's /etc/passwd
- Note 1: keep the client IP range as narrow as possible to minimize which clients can mount the NFS export.
- Note 2: in testing, the insecure option proved mandatory; without it the client mount fails with "mount.nfs: access denied by server while mounting".
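If the NFS service is already running when /etc/exports is edited, the new entries can be re-exported without a restart using the standard nfs-utils commands:
sudo exportfs -ra    # re-export everything listed in /etc/exports
sudo exportfs -v     # show what is currently being exported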
Start the NFS service
- Once configuration is complete, run the following commands to start the NFS server:
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
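Optionally, confirm the service came up cleanly before continuing:
sudo systemctl status nfs-server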
Check that the NFS service is available
- On a client machine, run showmount to check:
$ showmount -e <NFS server IP>
Exports list on <NFS server IP>:
/u01
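Optionally, a manual mount from a client verifies read/write access end to end. A minimal sketch; the mount point /mnt/nfs-test is arbitrary, and <NFS server IP> should be replaced with the real address:
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs <NFS server IP>:/u01/prod /mnt/nfs-test
sudo touch /mnt/nfs-test/hello && ls -l /mnt/nfs-test
sudo umount /mnt/nfs-test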
Install nfs-client-provisioner
Add the Choerodon chart repository:
helm repo add c7n https://openchart.choerodon.com.cn/choerodon/c7n/
helm repo update
- Install nfs-utils on every node of the cluster:
sudo yum install -y nfs-utils
- On any master node, run the following helm command to install nfs-client-provisioner:
helm install c7n/nfs-client-provisioner \
    --set rbac.create=true \
    --set persistence.enabled=true \
    --set storageClass.name=nfs-provisioner \
    --set persistence.nfsServer=127.0.0.1 \
    --set persistence.nfsPath=/u01/prod \
    --version 0.1.1 \
    --name nfs-client-provisioner \
    --namespace kube-system
- persistence.nfsServer: the IP address or domain name of the host providing the NFS service
- persistence.nfsPath: the directory shared by the NFS service
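Before moving on, it can be confirmed that the provisioner is running and that the storage class exists; the deployment name and namespace match the listings later in this article, and the class name comes from --set storageClass.name=nfs-provisioner:
kubectl -n kube-system get deployment nfs-client-provisioner
kubectl get storageclass nfs-provisioner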
Verify the installation
- Create a file named write-pod.yaml and paste in the following content:
kind: Pod
apiVersion: v1
metadata:
  name: write-pod
spec:
  containers:
  - name: write-pod
    image: busybox
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: myclaim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs-provisioner
  resources:
    requests:
      storage: 1Mi
- Deploy the test case:
kubectl apply -f write-pod.yaml
- Verify that it works:
$ kubectl get po
NAME        READY   STATUS      RESTARTS   AGE
write-pod   0/1     Completed   0          8s
If the pod status is Completed, the provisioner is working. If the pod stays in the ContainerCreating state for a long time, something is wrong; double-check that the installation steps above were followed correctly.
- Clean up the test case:
kubectl delete -f write-pod.yaml
2. View and create a PV
[root@register ~]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
custompro-pv   8Gi        RWX            Retain           Available                                   8m2s
We can see that custompro-pv has been created successfully. Its status is Available, which means the PV is ready and can be claimed by a PVC.
During its lifecycle, a PV can be in one of four different phases:
- Available: the PV is available and has not yet been bound to any PVC
- Bound: the PV has been bound to a PVC
- Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation of the PV failed
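The manifest that produced custompro-pv is not shown above; the following is a minimal sketch of a statically provisioned NFS PV consistent with the kubectl get pv output, where the NFS server address and path are assumptions carried over from section 1:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: custompro-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany                        # shown as RWX by kubectl get pv
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 127.0.0.1                    # assumption: the NFS server from section 1
    path: /u01/prod                      # assumption: the exported directory from section 1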
3. View and create a PVC
The PVC creation failed here.
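The claim manifest itself is not shown here; the following is a minimal sketch of what it might look like, assuming the customs-pvc name referenced by values.yaml in section 4, the ja-proc namespace used by the services, and an explicit binding to the static custompro-pv from section 2:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: customs-pvc          # assumption: the claim name referenced in section 4
  namespace: ja-proc         # assumption: the namespace the services run in
spec:
  accessModes:
  - ReadWriteMany            # matches the RWX access mode of custompro-pv
  storageClassName: ""       # empty string so the dynamic nfs-provisioner class is not used
  volumeName: custompro-pv   # assumption: bind explicitly to the static PV from section 2
  resources:
    requests:
      storage: 8Gi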
Delete the PVC and restart the agent:
[root@register ~]# kubectl get deployment --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
choerodon choerodon-cluster-agent-ja-pro 1/1 1 1 11d
ingress-controller nginx-ingress-controller 2/2 2 2 11d
ja-proc bjja-front-integration-74a24 2/2 2 2 21h
ja-proc hzero-admin-3e7e8 2/2 2 2 22h
ja-proc hzero-asgard-66974 2/2 2 2 17h
ja-proc hzero-config-49906 2/2 2 2 41h
ja-proc hzero-file-bad34 2/2 2 2 20h
ja-proc hzero-gateaway-6aa6d 2/2 2 2 29h
ja-proc hzero-iam-d7ef7 2/2 2 2 27h
ja-proc hzero-import-a948b 2/2 2 2 17h
ja-proc hzero-interface-974e1 2/2 2 2 17h
ja-proc hzero-message-1a3d4 2/2 2 2 4h47m
ja-proc hzero-oauth-6db46 2/2 2 2 28h
ja-proc hzero-platform-5df08 2/2 2 2 22h
ja-proc hzero-register-fa59f 1/1 1 1 2d4h
ja-proc hzero-report-b58be 2/2 2 2 3h46m
ja-proc hzero-scheduler-d1475 2/2 2 2 124m
ja-proc hzero-swagger-1022c 2/2 2 2 22h
ja-proc hzero-workflow-plus-b8a9d 2/2 2 2 4h59m
ja-proc register-backup-be763 1/1 1 1 24h
ja-proc wms-mdm-7bf92 2/2 2 2 89m
kube-system coredns 2/2 2 2 11d
kube-system metrics-server 1/1 1 1 11d
kube-system nfs-client-provisioner 1/1 1 1 11d
kube-system tiller-deploy 1/1 1 1 11d
kubernetes-dashboard dashboard-metrics-scraper 1/1 1 1 11d
kubernetes-dashboard kubernetes-dashboard 1/1 1 1 11d
[root@register ~]# kubectl scale deployment -n choerodon --replicas=0 choerodon-cluster-agent-ja-pro
deployment.apps/choerodon-cluster-agent-ja-pro scaled
[root@register ~]# kubectl get deployment --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
choerodon choerodon-cluster-agent-ja-pro 0/0 0 0 11d
ingress-controller nginx-ingress-controller 2/2 2 2 11d
ja-proc bjja-front-integration-74a24 2/2 2 2 21h
ja-proc hzero-admin-3e7e8 2/2 2 2 22h
ja-proc hzero-asgard-66974 2/2 2 2 17h
ja-proc hzero-config-49906 2/2 2 2 41h
ja-proc hzero-file-bad34 2/2 2 2 20h
ja-proc hzero-gateaway-6aa6d 2/2 2 2 29h
ja-proc hzero-iam-d7ef7 2/2 2 2 27h
ja-proc hzero-import-a948b 2/2 2 2 17h
ja-proc hzero-interface-974e1 2/2 2 2 17h
ja-proc hzero-message-1a3d4 2/2 2 2 4h48m
ja-proc hzero-oauth-6db46 2/2 2 2 28h
ja-proc hzero-platform-5df08 2/2 2 2 22h
ja-proc hzero-register-fa59f 1/1 1 1 2d4h
ja-proc hzero-report-b58be 2/2 2 2 3h47m
ja-proc hzero-scheduler-d1475 2/2 2 2 125m
ja-proc hzero-swagger-1022c 2/2 2 2 22h
ja-proc hzero-workflow-plus-b8a9d 2/2 2 2 5h
ja-proc register-backup-be763 1/1 1 1 24h
ja-proc wms-mdm-7bf92 2/2 2 2 90m
kube-system coredns 2/2 2 2 11d
kube-system metrics-server 1/1 1 1 11d
kube-system nfs-client-provisioner 1/1 1 1 11d
kube-system tiller-deploy 1/1 1 1 11d
kubernetes-dashboard dashboard-metrics-scraper 1/1 1 1 11d
kubernetes-dashboard kubernetes-dashboard 1/1 1 1 11d
[root@register ~]# kubectl scale deployment -n choerodon --replicas=1 choerodon-cluster-agent-ja-pro
deployment.apps/choerodon-cluster-agent-ja-pro scaled
[root@register ~]# kubectl get deployment --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
choerodon choerodon-cluster-agent-ja-pro 0/1 1 0 11d
ingress-controller nginx-ingress-controller 2/2 2 2 11d
ja-proc bjja-front-integration-74a24 2/2 2 2 21h
ja-proc hzero-admin-3e7e8 2/2 2 2 22h
ja-proc hzero-asgard-66974 2/2 2 2 17h
ja-proc hzero-config-49906 2/2 2 2 41h
ja-proc hzero-file-bad34 2/2 2 2 20h
ja-proc hzero-gateaway-6aa6d 2/2 2 2 29h
ja-proc hzero-iam-d7ef7 2/2 2 2 27h
ja-proc hzero-import-a948b 2/2 2 2 17h
ja-proc hzero-interface-974e1 2/2 2 2 17h
ja-proc hzero-message-1a3d4 2/2 2 2 4h48m
ja-proc hzero-oauth-6db46 2/2 2 2 28h
ja-proc hzero-platform-5df08 2/2 2 2 22h
ja-proc hzero-register-fa59f 1/1 1 1 2d4h
ja-proc hzero-report-b58be 2/2 2 2 3h47m
ja-proc hzero-scheduler-d1475 2/2 2 2 125m
ja-proc hzero-swagger-1022c 2/2 2 2 22h
ja-proc hzero-workflow-plus-b8a9d 2/2 2 2 5h
ja-proc register-backup-be763 1/1 1 1 24h
ja-proc wms-mdm-7bf92 2/2 2 2 90m
kube-system coredns 2/2 2 2 11d
kube-system metrics-server 1/1 1 1 11d
kube-system nfs-client-provisioner 1/1 1 1 11d
kube-system tiller-deploy 1/1 1 1 11d
kubernetes-dashboard dashboard-metrics-scraper 1/1 1 1 11d
kubernetes-dashboard kubernetes-dashboard 1/1 1 1 11d
After rebinding the PVC, it succeeded.
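A quick way to confirm the claim is now bound (assuming, as above, a customs-pvc claim in the ja-proc namespace):
kubectl -n ja-proc get pvc customs-pvc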
4. Configure a PVC shared directory for a microservice pod
Modify deployment.yaml and values.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
{{ include "service.labels.standard" . | indent 4 }}
{{ include "service.logging.deployment.label" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
{{ include "service.labels.standard" . | indent 6 }}
  template:
    metadata:
      labels:
{{ include "service.labels.standard" . | indent 8 }}
{{ include "service.microservice.labels" . | indent 8 }}
      annotations:
{{ include "service.monitoring.pod.annotations" . | indent 8 }}
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - customs
            topologyKey: "kubernetes.io/hostname"
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Chart.Version }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
          {{- range $name, $value := .Values.env.open }}
          {{- if not (empty $value) }}
            - name: {{ $name | quote }}
              value: {{ $value | quote }}
          {{- end }}
          {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          {{- if .Values.volumes.enabled }}
          volumeMounts:
            - name: {{ .Values.volumes.name }}            # volume name
              mountPath: {{ .Values.volumes.mountPath }}  # mount path inside the container
          {{- end }}
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c",
                "curl -s localhost:{{ .Values.deployment.managementPort }}/actuator/health"]
            failureThreshold: 3
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- if .Values.volumes.enabled }}
      volumes:
        - name: {{ .Values.volumes.name }}                # volume name
          persistentVolumeClaim:
            claimName: {{ .Values.volumes.claimName }}    # PVC name
      {{- end }}
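For reference, with the volumes section in values.yaml below set to enabled: true, the template above renders roughly the following into the Deployment (a sketch of the rendered output, not part of the chart source):
          volumeMounts:
            - name: customsv1
              mountPath: /home/hctm/myshare
      volumes:
        - name: customsv1
          persistentVolumeClaim:
            claimName: customs-pvc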
values.yaml
# Default values for manager-service.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: registry.cn-hangzhou.aliyuncs.com/operation-ja0146/hzero-customs
  pullPolicy: Always

preJob:
  preConfig:
    enable: false
  preInitDB:
    enable: false

deployment:
  managementPort: 8181

volumes:
  enabled: true
  name: customsv1
  claimName: customs-pvc
  mountPath: /home/hctm/myshare

env:
  open:
    ## log
    LOG_LEVEL: debug
    ## register-server
    EUREKA_DEFAULT_ZONE: http://register.hzero.org/eureka
    ## redis
    SPRING_REDIS_HOST: 172.28.8.110
    SPRING_REDIS_PORT: 6379
    SPRING_REDIS_DATABASE: 1
    ## mysql
    SPRING_DATASOURCE_URL: jdbc:mysql://172.28.8.110:3306/hzero_customs?useUnicode=true&characterEncoding=utf-8&useSSL=false
    SPRING_DATASOURCE_USERNAME: hzero
    SPRING_DATASOURCE_PASSWORD: hzero
    ## config
    SPRING_PROFILES_ACTIVE: default
    SPRING_CLOUD_CONFIG_URI: http://config.hzero.org
    ## oauth && gateway
    ## other

metrics:
  path: /prometheus
  group: spring-boot

logs:
  parser: spring-boot

persistence:
  enabled: false
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:
  # subPath:

service:
  name: hzero-customs
  enabled: true
  type: ClusterIP
  port: 8180

ingress:
  enabled: false

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    # cpu: 100m
    memory: 2Gi
  requests:
    # cpu: 100m
    memory: 1.5Gi
Pay particular attention to the volumes block:
volumes:
  enabled: true                  # whether to enable the volume
  name: customsv1                # volume name
  claimName: customs-pvc         # PVC name
  mountPath: /home/hctm/myshare  # mount path inside the container
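After redeploying the chart with these values, it is worth confirming that the claim is actually mounted inside the running pod. A minimal sketch, assuming the service runs in the ja-proc namespace seen in the earlier listings; substitute a real pod name from the first command:
kubectl -n ja-proc get pods
kubectl -n ja-proc exec -it <pod-name> -- df -h /home/hctm/myshare
# Files written here should appear on the NFS export configured earlier
kubectl -n ja-proc exec -it <pod-name> -- touch /home/hctm/myshare/mount-test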