ConfigMap Usage Guide
This article shares how, in a containerized application, a ConfigMap can store the application's configuration files, which are then mounted into the container, decoupling the program from its configuration.
Before diving in, you may want to read https://www.ytg2097.com/more/spring-boot-k8s-configmap.html
I. Procedure
1. Create a ConfigMap in ACK
Write the two Spring Boot configuration files into a ConfigMap named myconfig; note that the value of mykey differs between the two files.
apiVersion: v1
data:
  application-test.yml: 'mykey: test22'
  application.yml: |-
    server:
      port: 10085
    spring:
      profiles:
        active: test
      application:
        name: eureka-server
    eureka:
      client:
        service-url:
          defaultZone: http://127.0.0.1:10085/eureka
        register-with-eureka: false
        fetch-registry: false
    mykey: main
kind: ConfigMap
metadata:
  creationTimestamp: '2024-06-17T09:09:46Z'
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      'f:data':
        .: {}
        'f:application-test.yml': {}
        'f:application.yml': {}
    manager: ACK-Console Apache-HttpClient
    operation: Update
    time: '2024-06-19T01:31:55Z'
  name: myconfig
  namespace: dc
  resourceVersion: '71997954'
  uid: 58c79492-c10e-42c0-9d6d-7a619723e624
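The manifest above includes server-managed metadata (creationTimestamp, managedFields, resourceVersion, uid) that the cluster fills in on its own. A minimal equivalent that you could apply yourself needs only the data section:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: dc
data:
  application.yml: |-
    server:
      port: 10085
    spring:
      profiles:
        active: test
      application:
        name: eureka-server
    eureka:
      client:
        service-url:
          defaultZone: http://127.0.0.1:10085/eureka
        register-with-eureka: false
        fetch-registry: false
    mykey: main
  application-test.yml: 'mykey: test22'
```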
2. Create the Deployment
When creating the Deployment, be sure to select the ConfigMap to mount. Here the container path is the config subdirectory of the directory containing the jar: the jar lives at /web/app.jar, so the mount path is /web/config.
The Dockerfile is as follows:
#FROM ubuntu22-jdk17
FROM harbor.ovopark.com/library/jdk:1.8.0_311
ADD target/EurekaServer10085-1.0-SNAPSHOT.jar /web/app.jar
# Startup command
# ENTRYPOINT ["sh","-c","exec java -jar -Xms2048m -Xmx2048m -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=512M /web/app.jar"]
WORKDIR /web
ENTRYPOINT ["sh","-c","exec java -jar -Xms1024m -Xmx1024m -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=512M -XX:+UseG1GC /web/app.jar"]
The Deployment manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '11'
  creationTimestamp: '2024-06-18T06:12:50Z'
  generation: 11
  labels:
    app: zk-eureka3
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      'f:metadata':
        'f:labels':
          .: {}
          'f:app': {}
      'f:spec':
        'f:progressDeadlineSeconds': {}
        'f:replicas': {}
        'f:revisionHistoryLimit': {}
        'f:selector': {}
        'f:strategy':
          'f:rollingUpdate':
            .: {}
            'f:maxSurge': {}
            'f:maxUnavailable': {}
          'f:type': {}
        'f:template':
          'f:metadata':
            'f:annotations':
              .: {}
              'f:redeploy-timestamp': {}
            'f:labels':
              .: {}
              'f:app': {}
          'f:spec':
            'f:containers':
              'k:{"name":"zk-eureka3"}':
                .: {}
                'f:image': {}
                'f:imagePullPolicy': {}
                'f:name': {}
                'f:ports':
                  .: {}
                  'k:{"containerPort":10085,"protocol":"TCP"}':
                    .: {}
                    'f:containerPort': {}
                    'f:name': {}
                    'f:protocol': {}
                'f:resources':
                  .: {}
                  'f:limits':
                    .: {}
                    'f:cpu': {}
                    'f:memory': {}
                'f:terminationMessagePath': {}
                'f:terminationMessagePolicy': {}
                'f:volumeMounts':
                  .: {}
                  'k:{"mountPath":"/web/config"}':
                    .: {}
                    'f:mountPath': {}
                    'f:name': {}
            'f:dnsPolicy': {}
            'f:imagePullSecrets':
              .: {}
              'k:{"name":"miyao"}': {}
            'f:restartPolicy': {}
            'f:schedulerName': {}
            'f:securityContext': {}
            'f:terminationGracePeriodSeconds': {}
            'f:volumes':
              .: {}
              'k:{"name":"volume-1718691135676"}':
                .: {}
                'f:configMap':
                  .: {}
                  'f:defaultMode': {}
                  'f:name': {}
                'f:name': {}
    manager: ACK-Console Apache-HttpClient
    operation: Update
    time: '2024-06-19T01:34:38Z'
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      'f:metadata':
        'f:annotations':
          .: {}
          'f:deployment.kubernetes.io/revision': {}
      'f:status':
        'f:availableReplicas': {}
        'f:conditions':
          .: {}
          'k:{"type":"Available"}':
            .: {}
            'f:lastTransitionTime': {}
            'f:lastUpdateTime': {}
            'f:message': {}
            'f:reason': {}
            'f:status': {}
            'f:type': {}
          'k:{"type":"Progressing"}':
            .: {}
            'f:lastTransitionTime': {}
            'f:lastUpdateTime': {}
            'f:message': {}
            'f:reason': {}
            'f:status': {}
            'f:type': {}
        'f:observedGeneration': {}
        'f:readyReplicas': {}
        'f:replicas': {}
        'f:updatedReplicas': {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: '2024-06-19T01:34:42Z'
  name: zk-eureka3
  namespace: dc
  resourceVersion: '71998971'
  uid: e609019e-6ca0-42b4-b6cb-879f770e3b64
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zk-eureka3
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        redeploy-timestamp: '1718760878627'
      labels:
        app: zk-eureka3
    spec:
      containers:
      - image: 'ovo-registry.cn-hangzhou.cr.aliyuncs.com/ovopark/zk-eureka:v2'
        imagePullPolicy: Always
        name: zk-eureka3
        ports:
        - containerPort: 10085
          name: p
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 1Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /web/config
          name: volume-1718691135676
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: miyao
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: myconfig
        name: volume-1718691135676
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: '2024-06-18T06:12:52Z'
    lastUpdateTime: '2024-06-18T06:12:52Z'
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: 'True'
    type: Available
  - lastTransitionTime: '2024-06-18T06:12:50Z'
    lastUpdateTime: '2024-06-19T01:34:42Z'
    message: ReplicaSet "zk-eureka3-788b485557" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: 'True'
    type: Progressing
  observedGeneration: 11
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Let's analyze this YAML. Under spec.template.spec.volumes, a volume named volume-1718691135676 is declared, built from the myconfig ConfigMap.
Under the container's volumeMounts, that volume is mounted at /web/config inside the container.
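Stripped of the managed fields and status, the pieces of the Deployment that actually implement the mount are just this excerpt:

```yaml
spec:
  template:
    spec:
      containers:
      - name: zk-eureka3
        volumeMounts:
        - mountPath: /web/config        # where the files appear in the container
          name: volume-1718691135676
      volumes:
      - name: volume-1718691135676
        configMap:
          name: myconfig                # each data key becomes a file
          defaultMode: 420              # 420 decimal = 0644 octal
```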
3. Create the Service
Select the CLB load balancer test2-116.62.101.233.
II. Testing and Verification
Enter the container and confirm that the configuration files have been mounted.
Then fetch the value of mykey through the API; the correct value is returned.
Next, switch the environment to the default application.yml: update the ConfigMap, then restart the Deployment.
Enter the container again to check the ConfigMap files; the update took effect.
Call the API again to fetch mykey; the new value is returned. Test passed!
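The verification steps above can also be done from the command line; a sketch, assuming kubectl access to the dc namespace (the exact pod chosen by deploy/zk-eureka3 is picked automatically):

```shell
# Confirm the ConfigMap files are mounted inside the container
kubectl -n dc exec deploy/zk-eureka3 -- ls /web/config

# Check the effective content of the mounted application.yml
kubectl -n dc exec deploy/zk-eureka3 -- cat /web/config/application.yml

# After editing the ConfigMap, restart the Deployment so pods pick up the change
kubectl -n dc rollout restart deployment/zk-eureka3
kubectl -n dc rollout status deployment/zk-eureka3
```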
III. Summary and Reflections
I hit one big pitfall during this experiment: the ConfigMap was indeed mounted into the container, yet the application still read the configuration file packaged inside the jar. The root cause was that the Dockerfile did not set the working directory with WORKDIR: it must switch to the directory containing the jar (see the WORKDIR instruction in the Dockerfile above), because Spring Boot resolves the external config/ directory relative to the process's working directory. Thanks to https://segmentfault.com/q/1010000041759388
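The pitfall can be reproduced outside Kubernetes. A minimal Python sketch (find_external_config and the temp-directory layout are illustrative stand-ins, not Spring Boot itself) mimicking how the external config/ directory is resolved against the current working directory:

```python
import os
import tempfile

def find_external_config(workdir):
    """Mimic Spring Boot's lookup of ./config/application.yml, which is
    resolved relative to the process's current working directory."""
    candidate = os.path.join(workdir, "config", "application.yml")
    return candidate if os.path.isfile(candidate) else None

# Illustrative layout: /web/app.jar with the ConfigMap mounted at /web/config
root = tempfile.mkdtemp()            # stands in for the container's filesystem root
web = os.path.join(root, "web")
os.makedirs(os.path.join(web, "config"))
with open(os.path.join(web, "config", "application.yml"), "w") as f:
    f.write("mykey: main\n")

# With WORKDIR /web the lookup succeeds; with the working directory anywhere
# else, the mounted file is never seen and the jar-internal config wins.
found_with_workdir = find_external_config(web)
found_without_workdir = find_external_config(root)
print(found_with_workdir)      # path under .../web/config
print(found_without_workdir)   # None
```

This is exactly why adding WORKDIR /web to the Dockerfile fixed the problem: it makes /web/config a ./config directory from the JVM's point of view.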
Earlier, while writing "CI/CD Automated Deployment with Jenkins + GitLab", the test project was the 鹰眼查 project, and I ran into a problem: a pod kept restarting. The logs showed the MySQL connection was wrong; it should have used the internal RDS endpoint.
I had to modify the configuration, commit code to trigger the pipeline, push a new image to the registry, and redeploy. Going through all that just for a configuration change is overkill. Now, with a ConfigMap, you only need to modify the ConfigMap and restart the Deployment.
ConfigMap is a built-in Kubernetes resource object and one way to decouple configuration from the program across multiple environments. Of course, distributed configuration centers such as Nacos and Apollo are also good choices.