1. Install CoreDNS
This component is responsible only for mapping service names to cluster network IPs (service IPs), e.g. resolving nginx-dp.kube-public.svc.cluster.local to its cluster IP.
Target host: h136
1.1 Configure nginx
This provides a unified access point for the k8s resource manifests.
vim /etc/nginx/conf.d/k8s-yaml.od.com.conf
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
Create the directories:
mkdir /data/k8s-yaml
mkdir /data/k8s-yaml/coredns
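Routine follow-up steps not shown in the original: check the config syntax, reload nginx, and confirm the directory index is served. This assumes k8s-yaml.od.com already resolves to h136 (e.g. via an A record on the bind server used earlier in this series).
nginx -t
nginx -s reload
curl http://k8s-yaml.od.com/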
1.2 Pull the CoreDNS Docker image and push it to the private registry
docker pull docker.io/coredns/coredns:1.6.5
docker tag [coredns image id] harbor.od.com/public/coredns:v1.6.5
docker push harbor.od.com/public/coredns:v1.6.5
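If you prefer not to look up the image ID, you can tag by name instead; this is equivalent to the tag command above:
docker tag docker.io/coredns/coredns:1.6.5 harbor.od.com/public/coredns:v1.6.5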
2 Deploy CoreDNS as containers
2.1 Prepare the resource manifests
Target host: h136
The official sample configuration for the four YAML files below lives in the kubernetes/kubernetes repo on GitHub, under cluster/addons/dns/coredns/coredns.yaml.base.
Place all four files under /data/k8s-yaml/coredns on h136.
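If h136 has internet access, you could fetch the upstream template for reference (URL assumed from the repo layout described above; verify it before relying on it):
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base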
2.1.1 RBAC
vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
2.1.2 ConfigMap
vim cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        # serve the cluster.local zone; 192.168.0.0/16 is the service CIDR
        kubernetes cluster.local 192.168.0.0/16
        # forward everything else to the upstream DNS server
        forward . 192.168.146.132
        cache 30
        loop
        reload
        loadbalance
    }
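Before deploying, it is worth confirming that the upstream resolver named in the forward line actually answers (a sanity check, not in the original; harbor.od.com is a name served by the bind host in this series):
dig @192.168.146.132 harbor.od.com +short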
2.1.3 Deployment
vim dp.yaml
Because the pod controller has to pull its image from Harbor, and this CoreDNS deployment lives in the kube-system namespace, the regcred secret created in the earlier 【k8s】集群8 article cannot be reused (secrets are namespaced). Create a secret scoped to kube-system:
kubectl create secret docker-registry regcred-kube-system --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 --namespace=kube-system
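You can confirm the secret landed in the right namespace (an optional check, not in the original):
kubectl get secret regcred-kube-system -n kube-system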
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.5
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          # port 8080 is served by the health plugin enabled in the Corefile
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
      imagePullSecrets:
      - name: regcred-kube-system
2.1.4 Service
vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  # must match the --cluster-dns address passed to kubelet (see section 3)
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
2.2 Apply the resource manifests
Target host: h134
Apply the four manifests to create the resources:
[root@h134 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
[root@h134 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
configmap/coredns created
[root@h134 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
deployment.apps/coredns created
[root@h134 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
service/coredns created
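Optionally, wait for the rollout to complete before moving on to verification (an extra step, not in the original):
kubectl -n kube-system rollout status deployment/coredns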
3 Verification
Note: the service IP 192.168.0.2 is the same address that is passed to the --cluster-dns parameter in /opt/kubernetes/server/bin/kubelet.sh.
[root@h134 ~]# kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-65cc6c5f86-dlxqf   1/1     Running   0          21m

NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   79m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1/1     1            1           21m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-65cc6c5f86   1         1         1       21m
Check the mapping from service name to service IP. Note that the queries below only resolve for deployments whose service has a cluster IP.
[root@h134 ~]# dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short
192.168.37.77
[root@h134 ~]# dig -t A nginx-ds.default.svc.cluster.local. @192.168.0.2 +short
192.168.194.149
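You can also verify resolution from inside a pod, which exercises the kubelet --cluster-dns path as well (an extra check, not in the original; assumes a busybox image is pullable from your nodes, e.g. after pushing it to harbor.od.com/public; busybox:1.28 is used because nslookup in newer busybox builds is unreliable):
kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local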
4 A pitfall
The kubernetes service in my cluster was auto-generated with cluster IP 192.168.0.1. This is the front-end IPVS address for the kube-apiservers on h134 and h135, but my apiserver-csr.json certificate request did not include 192.168.0.1, so after starting CoreDNS its logs showed errors. I worked around it by force-replacing (--force) the kubernetes service's IP with 192.168.146.150, which resolved the problem.
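The exact --force invocation isn't shown above; since spec.clusterIP cannot be edited in place, one way to do it is a delete-and-recreate replace (a sketch, not the original commands):
kubectl get svc kubernetes -o yaml > kubernetes-svc.yaml
# edit spec.clusterIP to 192.168.146.150 in the file, then:
kubectl replace --force -f kubernetes-svc.yaml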
The cleaner fix for this pitfall is to list every IP you will need, including 192.168.0.1, in the hosts section of apiserver-csr.json when you first write the certificate request.
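A sketch of what that hosts list might look like (the node IPs below are assumptions inferred from the host names in this series; adjust to your environment):
"hosts": [
    "127.0.0.1",
    "192.168.0.1",
    "192.168.146.134",
    "192.168.146.135"
],
After regenerating the certificate (e.g. with cfssl, as is typical for this kind of setup) and restarting kube-apiserver, the default kubernetes service IP works without needing to be replaced.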