1. Introduction
1.1 What CoreDNS does
CoreDNS acts as the cluster's DNS server, providing service discovery inside the cluster, that is, the process by which services locate one another. It watches for creation and deletion of Services and Pods in the cluster and records the corresponding resolution entries as they appear. When another pod accesses an in-cluster Service or Pod by domain name, it queries CoreDNS for the record and then connects to the resolved IP address.
1.2 Characteristics of CoreDNS
Thanks to its efficiency and low resource footprint, CoreDNS has replaced kube-dns as the default DNS service of Kubernetes clusters.
二、部署yaml
Review and analyze the YAML manifests.
#################################################
#################################################
#### Deployment manifest
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-dns
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: CoreDNS
  name: coredns
  namespace: kube-system
  resourceVersion: '23065516'
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values:
                  - kube-dns
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - '-conf'
        - /etc/coredns/Corefile
        image: 'easzlab.io.local:5000/coredns/coredns:1.9.3'
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
```
#################################################
#################################################
#### Service manifest
```yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: '9153'
    prometheus.io/scrape: 'true'
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-dns
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: '2954'
spec:
  clusterIP: 10.68.0.2
  clusterIPs:
  - 10.68.0.2
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
```
#################################################
#################################################
#### ConfigMap manifest
```yaml
---
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf {\n max_concurrent 1000\n }\n cache 30\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2023-07-31T07:27:02Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: coredns
  namespace: kube-system
  resourceVersion: "2950"
  uid: d03d439c-0bd2-4818-af1b-0f84ec6b6c15
```
Note: the CoreDNS Service must have its metadata.name set to kube-dns. This preserves interoperability with workloads that resolve in-cluster addresses through the legacy kube-dns service name; using kube-dns as the Service name abstracts away the implementation detail of which DNS provider is actually running behind it.
You also need to modify the kubelet configuration as shown below. The clusterDNS address here is a link-local IP on each node, because node-local DNS cache will be enabled later (covered in detail in a later section). clusterDomain is the cluster's primary domain suffix.
```yaml
clusterDNS:
- 169.254.20.10
clusterDomain: cluster.local
```
3. How cluster DNS resolution works
The DNS resolution configuration file inside a pod is /etc/resolv.conf; its contents look like this:
```
search jws2-42.svc.cluster.local svc.cluster.local cluster.local
nameserver 169.254.20.10
options ndots:5
```
Parameter | Description |
---|---|
nameserver | Address of the DNS server. |
search | Search-suffix rules for domain lookups: the more suffixes configured, the more candidate queries are tried per resolution. There are typically three suffixes, which can mean up to 8 queries before the correct answer is found, because the cluster issues four IPv4 and four IPv6 lookups. |
options | Resolver options, given as key:value pairs such as ndots:5: if the name being looked up contains at least ndots dots, it is treated as a fully qualified name and resolved directly; otherwise the search suffixes are appended in turn before querying. |
Given the pod configuration above, the cluster sends domain lookups to this DNS server to obtain results.
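The ndots behavior above can be sketched as follows. This is an illustrative model of the resolver's search logic, not the actual libc code; the search list is taken from the resolv.conf shown earlier:

```python
# Sketch of how the resolver applies the ndots rule from /etc/resolv.conf.
NDOTS = 5
SEARCH = ["jws2-42.svc.cluster.local", "svc.cluster.local", "cluster.local"]

def candidate_queries(name: str) -> list[str]:
    """Return the FQDN candidates tried, in order, for a given name."""
    if name.endswith("."):           # trailing dot: already absolute
        return [name]
    if name.count(".") >= NDOTS:     # "complete" name: try as-is first
        return [name + "."] + [f"{name}.{s}." for s in SEARCH]
    # otherwise the search suffixes are tried before the bare name
    return [f"{name}.{s}." for s in SEARCH] + [name + "."]

# A short service name triggers the search list first:
print(candidate_queries("kube-dns.kube-system"))
```

This is why a lookup of a short name like `kube-dns.kube-system` can cost several queries: each candidate is tried for both A and AAAA records until one answers.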
4. Cluster dnsPolicy settings and scenarios
The cluster lets you configure a per-pod DNS strategy through the dnsPolicy field; four policies are currently supported:
- ClusterFirst: the default; names are resolved through CoreDNS, and the DNS server IP inside the pod is the cluster's kube-dns Service address.
- None: ignores the cluster's DNS settings; you must supply a dnsConfig field with the DNS configuration.
- Default: the pod directly inherits the node's name-resolution configuration, i.e. it uses the DNS servers configured on the node.
- ClusterFirstWithHostNet: forces the ClusterFirst policy for pods running in hostNetwork mode (which would otherwise use Default).
Examples for the scenarios above (the default policy needs no example):
Scenario 1: custom pod DNS configuration
When a pod should use DNS servers outside the cluster for resolution, use the dnsPolicy: None policy, configured as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - image: alpine
    command:
    - sleep
    - "10000"
    imagePullPolicy: Always
    name: alpine
  dnsPolicy: None
  dnsConfig:
    nameservers: ["169.254.xx.xx"]
    # at most three addresses; at least one is required
    searches:
    # at most six search domains
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "2"
```
Scenario 2: reusing the node's DNS configuration
When a pod does not need to reach in-cluster services, you may not want its lookups to go through CoreDNS; it only needs a DNS server to resolve addresses outside the cluster. In that case, use the dnsPolicy: Default policy, configured as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - image: alpine
    command:
    - sleep
    - "10000"
    imagePullPolicy: Always
    name: alpine
  dnsPolicy: Default
```
Scenario 3: reaching cluster services in hostNetwork mode
If a pod is configured with hostNetwork, the application inside it can use the host's ports directly, and its DNS policy defaults to Default, so it cannot resolve in-cluster services. If you want to reach cluster services in this network mode, use the dnsPolicy: ClusterFirstWithHostNet policy, configured as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - image: alpine
    command:
    - sleep
    - "10000"
    imagePullPolicy: Always
    name: alpine
```
5. CoreDNS configuration file explained
In the kube-system namespace the cluster has a coredns ConfigMap that holds the CoreDNS configuration file and the plugins it starts (it is the same ConfigMap shown in section 2 above).
The Corefile it carries:
```
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    reload
    loadbalance
}
```
Parameter | Description |
---|---|
errors | Prints errors to standard output. |
health | CoreDNS's own health status report, listening on port 8080 by default; typically used for the liveness check. |
ready | Plugin readiness report, listening on port 8181 by default; typically used for the readiness check. Once all plugins are running, /ready returns status code 200. |
kubernetes | Enables the Kubernetes plugin, which serves in-cluster name resolution. |
prometheus | Exposes Prometheus-scrapable metrics on port 9153. |
forward | Forwards domain queries to predefined upstream DNS servers. |
cache | DNS caching. |
loop | Loop detection; CoreDNS halts if a forwarding loop is found. |
reload | Automatically reloads a changed configuration file: after editing the ConfigMap there is no need to restart CoreDNS; the change takes effect automatically after about two minutes. |
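Because the reload plugin watches the Corefile for changes, editing the ConfigMap is enough to, for example, route a private zone to its own upstream resolver. A minimal sketch of such an extra server block, added alongside the default `.:53` block (the zone corp.example.com and the upstream 10.0.0.53 are hypothetical placeholders, not values from this cluster):

```
corp.example.com:53 {
    errors
    cache 30
    forward . 10.0.0.53
}
```

Queries under corp.example.com would then bypass /etc/resolv.conf and go straight to the dedicated upstream, while everything else continues through the default block.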
Reference: https://lion-wu.blog.csdn.net/article/details/127027461