[K8s Ops Notes] Day 3, Part 6: Installing and Deploying CoreDNS


On the ops host (10.4.7.200), prepare the CoreDNS image and deploy it into the Kubernetes cluster as a Docker image.

Download the CoreDNS image

[root@hdss7-200 ~]# docker pull coredns/coredns:1.6.9
[root@hdss7-200 ~]# docker images
REPOSITORY                      TAG                        IMAGE ID            CREATED             SIZE
coredns/coredns                 1.6.9                      faac9e62c0d6        3 months ago        43.2MB
……

[root@hdss7-200 ~]# docker tag faac9e62c0d6 harbor.od.com/public/coredns:v1.6.9

Push the tagged CoreDNS image to the private registry

[root@hdss7-200 ~]# docker login harbor.od.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@hdss7-200 ~]# docker push harbor.od.com/public/coredns:v1.6.9
The push refers to repository [harbor.od.com/public/coredns]
8762ba1e4767: Pushed 
225df95e717c: Pushed 
v1.6.9: digest: sha256:2044ffefe18e2dd3d6781e532119603ee4e8622b6ba38884dc7ab53325435151 size: 739

Log in to harbor.od.com to confirm the upload

Log in to https://harbor.od.com with username admin and password Harbor12345 to check whether the coredns image was uploaded successfully.


Serve the CoreDNS YAML files over HTTP (add an nginx configuration for them)

Run on the ops host (10.4.7.200):

[root@hdss7-200 ~]# cat > /etc/nginx/conf.d/k8s-yaml.od.com.conf << EOF
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
EOF

Create the k8s-yaml directory and reload nginx

Run the following on the ops host (10.4.7.200).
From now on, all resource manifests live under /data/k8s-yaml on the ops host.

[root@hdss7-200 ~]# mkdir -p /data/k8s-yaml/coredns
[root@hdss7-200 ~]# /usr/sbin/nginx -s reload

Add a record to the od.com zone file on the DNS server

Run the following on the DNS server (10.4.7.11):

[root@hdss7-11 ~]# vi /var/named/od.com.zone 
[root@hdss7-11 ~]# cat !$
cat /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600	; 10 minutes
@   		IN SOA	dns.od.com. dnsadmin.od.com. (
				2020062503 ; serial
				10800      ; refresh (3 hours)
				900        ; retry (15 minutes)
				604800     ; expire (1 week)
				86400      ; minimum (1 day)
				)
				NS   dns.od.com.
$TTL 60	; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200


# Restart the DNS service and verify the new record resolves
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# ping k8s-yaml.od.com
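Whenever the zone file changes, the serial (2020062503 above) must be bumped, or named will keep serving the old zone to any secondaries. The serial here follows the common YYYYMMDDNN convention; a small sketch of computing the next value under that assumption (the helper name is illustrative, not part of any tool):

```python
from datetime import date

def next_serial(current: int, today: date) -> int:
    """Compute the next DNS zone serial in YYYYMMDDNN form.

    If the current serial is from an earlier day, start today's
    two-digit counter at 01; otherwise just increment it.
    """
    today_base = int(today.strftime("%Y%m%d")) * 100
    if current < today_base:
        return today_base + 1
    return current + 1

# The zone above carries serial 2020062503; editing it again on
# 2020-06-25 yields 2020062504.
print(next_serial(2020062503, date(2020, 6, 25)))  # → 2020062504
```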

Browse to http://k8s-yaml.od.com/coredns to confirm the directory listing is served.


Provide the YAML files for CoreDNS

Run on the ops host (10.4.7.200):

The rbac.yaml file:

[root@hdss7-200 ~]# vi /data/k8s-yaml/coredns/rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
 

The configMap.yaml file:

[root@hdss7-200 ~]# vi /data/k8s-yaml/coredns/configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16
        forward . 10.4.7.11
        cache 30
        loop
        reload
        loadbalance
    }

 

Note: `forward . 10.4.7.11` points queries outside the cluster domain to the upstream DNS server.
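The `kubernetes cluster.local` line makes CoreDNS authoritative for the cluster domain: it answers queries of the form `<service>.<namespace>.svc.cluster.local` with the Service's ClusterIP, and forwards everything else upstream. A minimal sketch of that name-to-ClusterIP mapping (the service table is hypothetical sample data, not the real plugin or a live cluster):

```python
# Hypothetical (service, namespace) -> ClusterIP table, standing in for
# what the `kubernetes` plugin reads from the API server.
services = {
    ("coredns", "kube-system"): "192.168.0.2",
    ("nginx-dp", "kube-public"): "192.168.103.156",
}

def resolve(fqdn, zone="cluster.local."):
    """Return the ClusterIP for <svc>.<ns>.svc.<zone>, else None."""
    name = fqdn.rstrip(".")
    suffix = ".svc." + zone.rstrip(".")
    if not name.endswith(suffix):
        return None  # outside the cluster zone: would be forwarded upstream
    svc, _, ns = name[: -len(suffix)].partition(".")
    # Anything beyond a plain <svc>.<ns> pair is not a Service A record.
    if not svc or not ns or "." in ns:
        return None
    return services.get((svc, ns))

print(resolve("nginx-dp.kube-public.svc.cluster.local."))  # → 192.168.103.156
print(resolve("www.baidu.com."))                           # → None (forwarded)
```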

The deployment.yaml file:

[root@hdss7-200 ~]# vi /data/k8s-yaml/coredns/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.9
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
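The config-volume above projects the ConfigMap key Corefile into the file /etc/coredns/Corefile, which is exactly the path handed to the container via -conf. A minimal sketch of how a configMap volume's `items` list maps keys to files (the Corefile content is shortened to a placeholder):

```python
import os

# Stand-ins for the ConfigMap data and the volume spec above.
configmap_data = {"Corefile": ".:53 { ... }"}      # ConfigMap `coredns`
items = [{"key": "Corefile", "path": "Corefile"}]  # volume `items` list
mount_path = "/etc/coredns"                        # container mountPath

# Each item projects one key of the ConfigMap to mountPath/<path>.
files = {
    os.path.join(mount_path, item["path"]): configmap_data[item["key"]]
    for item in items
}
print(files)  # → {'/etc/coredns/Corefile': '.:53 { ... }'}
```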

The svc.yaml file:

[root@hdss7-200 ~]# vi /data/k8s-yaml/coredns/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
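The clusterIP 192.168.0.2 is not arbitrary: it must fall inside the cluster's service CIDR (192.168.0.0/16 here, matching the `kubernetes` line in the Corefile) and must agree with the --cluster-dns flag every kubelet was started with. A quick sanity check with Python's ipaddress module:

```python
import ipaddress

service_cidr = ipaddress.ip_network("192.168.0.0/16")  # cluster service CIDR
dns_ip = ipaddress.ip_address("192.168.0.2")           # clusterIP in svc.yaml

# The DNS Service IP must sit inside the service CIDR; by convention it is
# often the second usable address of the range.
assert dns_ip in service_cidr
print(service_cidr.network_address + 2)  # → 192.168.0.2
```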

Apply the CoreDNS YAML files

Run on either master host (10.4.7.21 or 10.4.7.22):

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/configMap.yaml
configmap/coredns created

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/deployment.yaml
deployment.apps/coredns created

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
service/coredns created


# Verify
[root@hdss7-21 ~]# kubectl get all -n kube-system -o wide
NAME                           READY   STATUS             RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
pod/coredns-58f8966f84-8wzdl   0/1     ImagePullBackOff   0          5m12s   172.7.21.2   hdss7-21.host.com   <none>           <none>


NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   4m58s   k8s-app=coredns


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                SELECTOR
deployment.apps/coredns   0/1     1            0           5m12s   coredns      harbor.od.com/public/coredns:v1.6.9   k8s-app=coredns

NAME                                 DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                SELECTOR
replicaset.apps/coredns-58f8966f84   1         1         0       5m12s   coredns      harbor.od.com/public/coredns:v1.6.9   k8s-app=coredns,pod-template-hash=58f8966f84






Verify cluster DNS from the host with dig

Run on either master host (10.4.7.21 or 10.4.7.22):

[root@hdss7-21 ~]# cat /opt/kubernetes/server/bin/kubelet.sh 
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
……
[root@hdss7-21 ~]# dig -t A www.baidu.com @192.168.0.2 +short
[root@hdss7-21 ~]# dig -t A hdss7-21.host.com @192.168.0.2 +short # forwarded automatically to the upstream DNS

# Verify in-cluster DNS, using the Service named nginx-dp as an example
[root@hdss7-21 ~]# kubectl get all -n kube-public -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP           NODE                NOMINATED NODE   READINESS GATES
pod/nginx-dp-5dfc689474-bqk8w   1/1     Running   0          7d2h   172.7.21.3   hdss7-21.host.com   <none>           <none>
pod/nginx-dp-5dfc689474-fc6z9   1/1     Running   0          7d2h   172.7.22.3   hdss7-22.host.com   <none>           <none>


NAME               TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/nginx-dp   ClusterIP   192.168.103.156   <none>        80/TCP    7d10h   app=nginx-dp


NAME                       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                              SELECTOR
deployment.apps/nginx-dp   2/2     2            2           7d2h   nginx        harbor.od.com/public/nginx:v1.7.9   app=nginx-dp

NAME                                  DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES                              SELECTOR
replicaset.apps/nginx-dp-5dfc689474   2         2         2       7d2h   nginx        harbor.od.com/public/nginx:v1.7.9   app=nginx-dp,pod-template-hash=5dfc689474

[root@hdss7-21 ~]# dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short
192.168.103.156 

# Resolution succeeded: the Service name is now bound to its cluster virtual IP, so later access can use the Service name directly.

Access the Service with curl

Run on either master host (10.4.7.21 or 10.4.7.22):

curl from outside the cluster (on the host machine) cannot reach the Service. From inside a container, however, curl succeeds, and both the fully qualified name (nginx-dp.kube-public.svc.cluster.local) and the short form (nginx-dp.kube-public) work.

The short form curl nginx-dp.kube-public succeeds because the domain suffix may be omitted.

Summary: a Service cannot be reached with curl from outside the cluster, but inside a container curl by Service name works fine. Running cat /etc/resolv.conf in any container shows a search line, which lists the default domain suffixes appended to short names.
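The search behavior can be sketched: with the typical ndots:5 setting, a short name like nginx-dp.kube-public (one dot, fewer than ndots) is first tried with each search suffix appended, which is why the short form resolves inside a pod. A sketch of the candidate-name expansion, assuming the search list kubelet writes for a pod in kube-public:

```python
def candidate_names(name, search, ndots=5):
    """List the FQDNs a resolver tries, per resolv.conf search/ndots rules."""
    if name.endswith("."):          # already absolute: tried as-is only
        return [name]
    tries = [name + "." + s + "." for s in search]
    absolute = name + "."
    # Fewer dots than ndots: search suffixes are tried before the bare name.
    return tries + [absolute] if name.count(".") < ndots else [absolute] + tries

# Assumed search list for a pod in the kube-public namespace.
search = ["kube-public.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for fqdn in candidate_names("nginx-dp.kube-public", search):
    print(fqdn)
# nginx-dp.kube-public.kube-public.svc.cluster.local.
# nginx-dp.kube-public.svc.cluster.local.   <- this one matches the Service
# nginx-dp.kube-public.cluster.local.
# nginx-dp.kube-public.
```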

CoreDNS reference:
http://ccnuo.com/
