k8s Beginner Tutorial 2 [Dashboard, kubectl usage, Service usage, kube-dns usage]

Table of Contents

Installing the Dashboard

Using kubectl

Building a service image

Service operations

Automatic service discovery (kube-dns)


Installing the Dashboard

The Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot applications, and manage cluster resources. It gives an overview of the applications running in the cluster, lets you create or modify Kubernetes resources (such as Deployments, Jobs, DaemonSets, and so on), and shows the status of the cluster's resources and any errors.

[root@registry ~]# docker images 

REPOSITORY                                                              TAG                 IMAGE ID            CREATED             SIZE

192.168.1.100:5000/kubernetes-dashboard-amd64    v1.8.3              0c60bcf89900        2 years ago         102.3 MB

[root@registry ~]# curl http://192.168.1.100:5000/v2/_catalog
{"repositories":["kubernetes-dashboard-amd64","myos","pod-infrastructure"]}

 

Access flow: the user talks to the master node; the master finds the appropriate node; the kubelet on that node drives Docker and starts the pod's containers; inside each pod the containers are managed through the pause (pod-infrastructure) container.
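
A quick way to see the pause containers on a node is to list the running containers there (a sketch; the exact image name depends on the pod-infrastructure image configured for the kubelet):

[root@kubenode1 ~]# docker ps | grep pod-infrastructure      # one infrastructure ("pause") container per running pod
[root@kubenode1 ~]# docker ps | grep -c pod-infrastructure   # roughly the number of pods on this node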


 

[root@kubenode1 ~]# docker pull pod-infrastructure:latest
Trying to pull repository 192.168.1.100:5000/pod-infrastructure ... 
latest: Pulling from 192.168.1.100:5000/pod-infrastructure
Digest: sha256:60b52a2ba3d2d11e6639d747dc9799e88cd2a6df1a0eab0fcbd6e63aa27e554e

 


[root@kubemaseter ~]# cat kube-dashboard.yaml 
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.1.100:5000/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          - --apiserver-host=http://192.168.1.20:8080          ### change this to your master's IP
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090
  selector:
    k8s-app: kubernetes-dashboard
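
Note that the Deployment above references serviceAccountName: kubernetes-dashboard, but this file does not define that ServiceAccount. If it does not already exist in the kube-system namespace, the ReplicaSet cannot create the pods ("serviceaccount ... not found" in its events). A minimal way to check and create it (a sketch, assuming no extra RBAC bindings are needed in this test environment):

[root@kubemaseter ~]# kubectl -n kube-system get serviceaccount kubernetes-dashboard      # does it exist already?
[root@kubemaseter ~]# kubectl -n kube-system create serviceaccount kubernetes-dashboard   # create it if it is missing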
 

Create the dashboard

[root@kubemaseter ~]# kubectl create -f kube-dashboard.yaml 
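
Because the Service above is of type NodePort with nodePort 30090, the dashboard should now be reachable on port 30090 of any node. A quick check (a sketch; 192.168.1.21 is kubenode1's address in this environment, substitute any node IP, or open the URL in a browser):

[root@kubemaseter ~]# kubectl -n kube-system get service kubernetes-dashboard   # confirm the 80:30090/TCP mapping
[root@kubemaseter ~]# curl -I http://192.168.1.21:30090                         # should return the dashboard page headers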
 

 

 

Using kubectl

kubectl is the command-line tool for controlling a Kubernetes cluster.

Syntax: kubectl [command] [TYPE] [NAME] [flags]

command: the subcommand, such as create, get, describe, delete

TYPE: the resource type; it can be written in singular, plural, or abbreviated form

NAME: the name of a resource; if omitted, information for all resources of that type is shown

flags: optional flags or additional parameters (see the examples after this list)
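
For example, the following are equivalent ways of listing pods, together with some typical flags (pod/po, deployment/deploy and service/svc are the standard singular and abbreviated forms):

[root@kubemaseter ~]# kubectl get pods                  # plural form
[root@kubemaseter ~]# kubectl get pod                   # singular form
[root@kubemaseter ~]# kubectl get po                    # abbreviation
[root@kubemaseter ~]# kubectl get pod -o wide           # flag: extra columns such as IP and node
[root@kubemaseter ~]# kubectl get pod -n kube-system    # flag: query a different namespace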

 

[root@kubemaseter ~]# kubectl get node   # check node status
NAME           STATUS     ROLES     AGE       VERSION
kubenode1      Ready      <none>    19d       v1.10.3
kubenode2      Ready      <none>    19d       v1.10.3
kubenode3      Ready      <none>    19d       v1.10.3
[root@kubemaseter ~]# kubectl get pod   # list pod resources
NAME                        READY     STATUS              RESTARTS   AGE
my-phpfpm-9ffb646f4-77djw   0/1       ContainerCreating   0          12d
[root@kubemaseter ~]# kubectl get deployment   # list deployment resources
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-phpfpm   3         3         3            0           12d

If the Dashboard is not working, you can check it with the following commands:

[root@kubemaseter ~]# kubectl -n kube-system get deployment 
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns               1         1         1            1           17d
kubernetes-dashboard   1         1         1            1           22m
[root@kubemaseter ~]# kubectl -n kube-system get pod
NAME                                    READY     STATUS    RESTARTS   AGE
kube-dns-5bdcdf5b45-g5526               3/3       Running   12         17d
kubernetes-dashboard-55c5648f88-rf7nk   1/1       Running   0          22m

 

[root@kubemaseter ~]# kubectl describe deployment my-phpfpm   # show detailed information about a resource
Name:                   my-phpfpm
Namespace:              default
CreationTimestamp:      Fri, 29 May 2020 10:16:13 +0800
Labels:                 app=my-phpfpm
Annotations:            deployment.kubernetes.io/revision=1
 

[root@kubemaseter ~]# kubectl -n kube-system describe pod kubernetes-dashboard-55c5648f88-rf7nk  # show information about a pod resource
Name:           kubernetes-dashboard-55c5648f88-rf7nk
Namespace:      kube-system
Node:           kubenode1/192.168.1.21
Start Time:     Wed, 10 Jun 2020 10:40:12 +0800
Labels:         k8s-app=kubernetes-dashboard
 

 

 

The Deployment controls a ReplicaSet, and the ReplicaSet creates the corresponding pods.
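
Once the test1 Deployment from the example below exists, the whole chain can be listed (a sketch; the ReplicaSet name is the Deployment name plus a hash, which also prefixes the pod names such as test1-67b44cc6bf-9l7jw):

[root@kubemaseter ~]# kubectl get deployment test1   # the Deployment
[root@kubemaseter ~]# kubectl get rs                 # the ReplicaSet it manages (test1-<hash>)
[root@kubemaseter ~]# kubectl get pod                # the pods created by that ReplicaSet (test1-<hash>-<suffix>)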

 

 

 

[root@kubemaseter ~]# kubectl run test1 -i -t --image=192.168.1.100:5000/myos:latest   # create a container (this actually creates a Deployment named test1)
If you don't see a command prompt, try pressing enter. 
[root@test1-67b44cc6bf-9l7jw /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.254.45.2  netmask 255.255.255.0  broadcast 0.0.0.0

[root@kubemaseter ~]# kubectl attach test1-67b44cc6bf-9l7jw  -c test1 -i -t     # attach to a running container
[root@test1-67b44cc6bf-9l7jw /]# 

[root@kubemaseter ~]# kubectl logs  test1-67b44cc6bf-9l7jw  # show the terminal output previously produced inside the container

 

 

Deleting containers

Method 1: (deleting the pod by itself does not really remove it, because the controlling resource automatically starts another copy)

[root@kubemaseter ~]# kubectl delete pod test1-67b44cc6bf-9l7jw 
pod "test1-67b44cc6bf-9l7jw" deleted
[root@kubemaseter ~]# kubectl get pod
NAME                        READY     STATUS              RESTARTS   AGE
test1-67b44cc6bf-9l7jw      1/1       Terminating         3          2h
test1-67b44cc6bf-bdrlq      1/1       Running             0          13s
 

Method 2: (delete the resource that controls the container first, then delete the pod; this removes the container completely)

[root@kubemaseter ~]# kubectl delete deployment test1
deployment.extensions "test1" deleted
[root@kubemaseter ~]# kubectl get deployment 
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-phpfpm   3         3         3            0           12d
[root@kubemaseter ~]# kubectl get pod
NAME                        READY     STATUS              RESTARTS   AGE
test1-67b44cc6bf-bdrlq      1/1       Terminating         0          2m
[root@kubemaseter ~]# kubectl delete pod test1-67b44cc6bf-bdrlq  
pod "test1-67b44cc6bf-bdrlq" deleted
[root@kubemaseter ~]# kubectl get pod

NAME                        READY     STATUS              RESTARTS   AGE
 

 

Building a service image

Build an Apache image that contains httpd and php and exposes port 80. This image will be used later when creating containers with kubectl.

[root@registry ~]# mkdir apache
[root@registry ~]# cd apache/
[root@registry apache]# vim Dockerfile 
FROM myos:latest
RUN yum install -y httpd php
ENV LANG=C
WORKDIR /var/www/html
EXPOSE 80
CMD ["httpd","-DFOREGROUND"]
[root@registry apache]# docker build -t myos:httpd .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM myos:latest
 ---> a9bf132858d5
Step 2 : RUN yum install -y httpd php
 ---> Using cache
 ---> 2b42fa67bf61
Step 3 : ENV LANG C
 ---> Using cache
 ---> 2069f61b112f
Step 4 : WORKDIR /var/www/html
 ---> Using cache
 ---> c5a6ed1f9382
Step 5 : EXPOSE 80
 ---> Using cache
 ---> cc11719efe84
Step 6 : CMD httpd -DFOREGROUND
 ---> Using cache
 ---> 43d9dece2466
Successfully built 43d9dece2466

[root@registry apache]# docker run -itd myos:httpd
d5d1c501a22f7626e1b8ceefbaeae066e2da829977986a387c31d7c6c52acc19
 

[root@registry apache]# curl http://10.254.56.2  # access the container's IP

[root@registry ~]# docker tag myos:httpd 192.168.1.100:5000/myos:httpd   # tag the image for the private registry before pushing
[root@registry ~]# docker push 192.168.1.100:5000/myos:httpd

Start a web service with 2 replicas (multiple replicas are automatically scheduled onto different machines)

# kubectl run <name> -r <replica count> --image=<image name>:<tag>

[root@kubemaseter ~]# kubectl run -r 2 apache --image=192.168.1.100:5000/myos:httpd
deployment.apps "apache" created
[root@kubemaseter ~]# kubectl get deployment 
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
apache      2         2         2            0           8s
[root@kubemaseter ~]# kubectl get pod -o wide
NAME                        READY     STATUS              RESTARTS   AGE       IP            NODE
apache-df885559b-rrsc7      1/1       Running             0          20s       10.254.95.5   kubenode1
apache-df885559b-zspkz      1/1       Running             0          20s       10.254.45.2   kubenode2

 

 

Service operations

 

Changing resources: when a pod becomes unusable, the ReplicaSet creates an identical pod (with its containers) on another machine. This provides high availability, but the container IP changes.

[root@kubemaseter ~]# kubectl get pod -o wide
NAME                        READY     STATUS              RESTARTS   AGE       IP            NODE
apache-df885559b-rrsc7      1/1       Running             0          29m       10.254.95.5   kubenode1
apache-df885559b-zspkz      1/1       Running             0          29m       10.254.45.2   kubenode2
[root@kubemaseter ~]# kubectl delete pod apache-df885559b-rrsc7
pod "apache-df885559b-rrsc7" deleted
[root@kubemaseter ~]# kubectl get pod -o wide
NAME                        READY     STATUS              RESTARTS   AGE       IP            NODE
apache-df885559b-hz8g4      0/1       ContainerCreating   0          4s        <none>        kubenode3
apache-df885559b-rrsc7      0/1       Terminating         0          29m       <none>        kubenode1
apache-df885559b-zspkz      1/1       Running             0          29m       10.254.45.2   kubenode2

The changing pod IPs make access inconvenient, and the Service exists to solve this. A Service gets a cluster IP, a stable address that maps to the backing resources: no matter how the pods change, the Service always finds the corresponding pods and the cluster IP stays the same. If the Service fronts several pods/containers, it automatically load-balances between them. Through port, nodePort and targetPort, the Service maps incoming requests all the way to the service running inside the pod's containers.

How does the Service know about the pods? The pods are created by a Deployment, and the Service is bound to that Deployment, so it knows on which nodes pods have been created. This is also why, as mentioned earlier, you must delete the Deployment before a pod can be removed for good. Once the Service is bound to the Deployment, it exposes a cluster IP; clients only need to access the cluster IP to reach the corresponding pods.
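
A quick way to see this binding is to look at a Service's endpoints, which list the pod IPs currently matched by its selector (a sketch; my-httpd is the service created in the next subsection):

[root@kubemaseter ~]# kubectl describe service my-httpd   # shows the selector and the endpoints (pod IP:port pairs)
[root@kubemaseter ~]# kubectl get endpoints my-httpd      # the backend pod addresses behind the cluster IP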

 

A Service involves three kinds of ports (see the sketch after this list):

port: the port the Service exposes on its cluster IP; this is the entry point for clients inside the cluster to reach the Service.

nodePort: an entry point for clients outside the cluster; it is the way to reach in-cluster resources from outside the cluster.

targetPort: the port the container inside the pod listens on; traffic arriving at port or nodePort is finally forwarded by kube-proxy to the targetPort of a backend pod, i.e. to the service running inside the container.
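
As a sketch of how the three ports fit together, the apache Deployment could also be exposed with a NodePort service (apache-np is a hypothetical name; with --type=NodePort kubectl picks a nodePort automatically, or it can be fixed in a YAML file as in the dashboard Service above):

[root@kubemaseter ~]# kubectl expose deployment apache --type=NodePort --port=80 --target-port=80 --name=apache-np
[root@kubemaseter ~]# kubectl get service apache-np   # PORT(S) shows port:nodePort, e.g. 80:3xxxx/TCP
# inside the cluster:   curl http://<cluster-ip>:80
# outside the cluster:  curl http://<any-node-ip>:<nodePort>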

 

 

 

 

 

Accessing the service from inside the cluster

The cluster IP is a service IP allocated by the cluster for access from inside the cluster's containers; it cannot be reached from outside them.

Syntax:

kubectl expose <resource type> <resource name> --port=<service port> --target-port=<container port> --name=<service name>

[root@kubemaseter ~]# kubectl get deployment apache 
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
apache    2         2         2            2           1h
[root@kubemaseter ~]# kubectl expose deployment apache --port=80 --target-port=80 --name=my-httpd
service "my-httpd" exposed
[root@kubemaseter ~]# kubectl get service
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
my-httpd       ClusterIP   10.254.87.62    <none>        80/TCP     22s

[root@kubemaseter ~]# curl http://10.254.87.62
curl: (7) Failed connect to 10.254.87.62:80; No route to host
The correct way is to access it from inside the cluster:
[root@kubemaseter ~]# kubectl run test -i -t --image=192.168.1.100:5000/myos:latest
If you don't see a command prompt, try pressing enter.
[root@test-69bf8fdc5-2vgzn /]# curl http://10.254.87.62

 

(How do you tell which containers belong to the same pod group?)

Whenever the service misbehaves, be sure to check the components; on the machines we start the services in the order flannel, docker, kubelet, kube-proxy.
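
If in doubt, restarting the components in that order on the affected node usually helps (a sketch; the flannel unit is typically called flanneld, adjust the names to your installation):

[root@kubenode1 ~]# systemctl restart flanneld     # overlay network first
[root@kubenode1 ~]# systemctl restart docker       # container runtime next
[root@kubenode1 ~]# systemctl restart kubelet      # node agent
[root@kubenode1 ~]# systemctl restart kube-proxy   # service / cluster-IP forwarding
[root@kubenode1 ~]# systemctl status flanneld docker kubelet kube-proxy   # verify everything is active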

 

Automatic service discovery (kube-dns)

Kubernetes provides the Service concept so that the services offered by pods can be reached through a VIP (the cluster IP), but how do you find out the VIP of a given application?

For example, suppose we have two applications, a web app and a db, each providing its service through a port exposed by a Service. The web app needs to connect to the db, but it only knows the application's name, not its VIP. The best way to find it is a DNS lookup, and kube-dns exists exactly to solve this problem. In Kubernetes, DNS is installed as an add-on.
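
Once kube-dns is running (set up below), every Service name becomes resolvable from inside pods, both as a short name and as <service>.<namespace>.svc.<cluster-domain>. A sketch (db is a hypothetical service name, and nslookup must be available in the container image):

[root@kubemaseter ~]# kubectl run testdns -i -t --image=192.168.1.100:5000/myos:latest   # throwaway test pod
# inside the pod:
nslookup db        # short name, resolved through the cluster search domains
curl http://db     # connect by service name instead of by VIP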

The following three kube-dns images need to be pushed to the private registry first (they show up in its catalog):

[root@registry ~]# curl http://192.168.1.100:5000/v2/_catalog
{"repositories":["k8s-dns-dnsmasq-nanny-amd64","k8s-dns-kube-dns-amd64","k8s-dns-sidecar-amd64","kubernetes-dashboard-amd64","myos","pod-infrastructure"]}

[root@kubemaseter ~]# vim kube-dns.yaml 

 33   clusterIP: 10.254.254.253   # the cluster IP of the DNS service

 98         image: 192.168.1.100:5000/k8s-dns-kube-dns-amd64:1.14.10

128         - --domain=tedu.local.  # cluster domain; choose your own (e.g. baidu.com), but keep it consistent everywhere (the full file below uses tedu.local.)

131         - --kube-master-url=http://192.168.1.20:8080

150         image: 192.168.1.100:5000/k8s-dns-dnsmasq-nanny-amd64:1.14.10

170         - --server=/tedu.local./127.0.0.1#10053

189         image: 192.168.1.100:5000/k8s-dns-sidecar-amd64:1.14.10
 

202         - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.tedu.local.,5,SRV

203         - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.tedu.local.,5,SRV

The complete kube-dns.yaml file is as follows:

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.254.253
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: 192.168.1.100:5000/k8s-dns-kube-dns-amd64:1.14.10
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=tedu.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --kube-master-url=http://192.168.1.20:8080
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: 192.168.1.100:5000/k8s-dns-dnsmasq-nanny-amd64:1.14.10
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/tedu.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: 192.168.1.100:5000/k8s-dns-sidecar-amd64:1.14.10
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.tedu.local.,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.tedu.local.,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
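
The manifest is applied the same way as the dashboard one, and the result can be checked in the kube-system namespace (the same checks with real output appear further below):

[root@kubemaseter ~]# kubectl create -f kube-dns.yaml
[root@kubemaseter ~]# kubectl -n kube-system get deployment,service,pod   # kube-dns should appear and become Running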

 

Configure the DNS IP for the kubelet: edit /etc/kubernetes/kubelet on every kube-node to tell the kubelet the DNS server IP and the cluster domain. The --cluster-domain value must match the --domain configured in kube-dns.yaml (tedu.local. here).

[root@kubenode1 ~]# vim /etc/kubernetes/kubelet

 14 KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --pod-infra-container-image=pod-infrastructure:latest --cluster-dns=10.254.254.253 --cluster-domain=tedu.local."

[root@kubenode1 ~]# systemctl restart kubelet
 


Note: pods created before kube-dns was set up must be deleted (and recreated), otherwise those old pods will not get DNS name resolution.
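
A newly created pod should now have the DNS server and the cluster search domains injected into its /etc/resolv.conf (a sketch; substitute a real pod name from kubectl get pod):

[root@kubemaseter ~]# kubectl exec -it <pod-name> -- cat /etc/resolv.conf
# expected: nameserver 10.254.254.253 and search domains such as default.svc.tedu.local svc.tedu.local tedu.local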

[root@kubemaseter kube-dns]# kubectl -n kube-system get deployment 
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns               1         1         1            1           18d
kubernetes-dashboard   1         1         1            1           1d
 

[root@kubemaseter kube-dns]# kubectl -n kube-system get service
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.254.254.253   <none>        53/UDP,53/TCP   18d
kubernetes-dashboard   NodePort    10.254.108.208   <none>        80:30090/TCP    11h
 

[root@kubemaseter kube-dns]# kubectl -n kube-system get pod
NAME                                    READY     STATUS    RESTARTS   AGE
kube-dns-5bdcdf5b45-g5526               3/3       Running   12         18d
kubernetes-dashboard-55c5648f88-rf7nk   1/1       Running   0          11h

 

[root@kubemaseter ~]# kubectl apply -f kube-dashboard.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps "kubernetes-dashboard" configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service "kubernetes-dashboard" configured
 

Test: create a group of containers and bind the corresponding service and DNS. The cluster IP cannot be pinged even inside the cluster, and it can only be accessed from inside the cluster.

[root@kubemaseter ~]# kubectl run -r 2 my-apache --image=192.168.1.100:5000/myos:httpd # start the Apache application with 2 replicas
deployment.apps "my-apache" created
[root@kubemaseter ~]# kubectl expose deployment my-apache --port=80 --target-port=80 --name=apche # create the service
service "apche" exposed
[root@kubemaseter ~]# kubectl get service   # check the service status
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
apche          ClusterIP   10.254.18.254   <none>        80/TCP     11s

[root@kubemaseter ~]# kubectl delete service my-httpd # delete the my-httpd service created earlier for the myos:httpd containers
service "my-httpd" deleted

[root@kubemaseter ~]# kubectl run test1  -i -t --image=192.168.1.100:5000/myos:latest  # start a container for testing

[root@test1-67b44cc6bf-rcmz2 /]# curl http://apche
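
Because the lookup goes through kube-dns, the fully qualified service name should work as well (a sketch, assuming the tedu.local. cluster domain configured above):

[root@test1-67b44cc6bf-rcmz2 /]# curl http://apche.default.svc.tedu.local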
 

[root@kubemaseter ~]# kubectl describe pod my-apache-7c49dcfd54-cl8bd  # check the pod status; the DNS warning is gone