Build a 1-master, 2-worker k8s cluster on GKE, no VPN required

1. Install Docker

Reference: Installing Docker on CentOS 7
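If that article is not at hand, a minimal install sketch for CentOS 7, assuming the Aliyun docker-ce mirror, looks like this:

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
systemctl enable docker && systemctl start docker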

2. Build the k8s cluster

Use kubeadm on GKE to build a 1-master, 2-worker k8s cluster: one master node and two worker nodes.

2.1 Component versions

Docker       18.09.0
---
kubeadm-1.17.4-0 
kubelet-1.17.4-0 
kubectl-1.17.4-0
---
k8s.gcr.io/kube-apiserver:v1.17.15
k8s.gcr.io/kube-controller-manager:v1.17.15
k8s.gcr.io/kube-scheduler:v1.17.15
k8s.gcr.io/kube-proxy:v1.17.15
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
---
calico:v3.9

2.2 Set hostnames and the hosts file

  • master
    Set the master's hostname and edit the hosts file
sudo hostnamectl set-hostname uat-master

[root@uat-master ~]# cat /etc/hosts
172.x.x.215 uat-master
172.x.x.216 uat-w1
172.x.x.217 uat-w2
  • worker
    Set each worker's hostname and edit its hosts file (shown for uat-w1; use set-hostname uat-w2 on the other worker)
sudo hostnamectl set-hostname uat-w1
[root@uat-w1 ~]# vi /etc/hosts
172.x.x.215 uat-master
172.x.x.216 uat-w1
172.x.x.217 uat-w2
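A quick sanity check that every hostname resolves (run on each node):

for h in uat-master uat-w1 uat-w2; do ping -c 1 $h; done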

2.3 Prerequisite configuration before deployment

# (1) Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# (2) Put SELinux into permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# (3) Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

# (4) Reset iptables and default the FORWARD chain to ACCEPT
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# (5) Set kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
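If sysctl --system cannot find the net.bridge.* keys, the br_netfilter kernel module is likely not loaded yet; loading it first fixes this:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # reload on boot
sysctl --system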

2.4 Install the kubeadm, kubelet, and kubectl components

  • 1 Configure the yum repo (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • 2 Install kubeadm, kubelet, and kubectl
yum install -y kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0
  • 3 Make Docker and kubelet use the same cgroup driver
# docker: mind the JSON format and indentation; after editing, be sure to restart docker and then kubelet
vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker
# kubelet: if the sed reports that the file or line does not exist, that is fine too (kubeadm 1.17 sets the kubelet cgroup driver itself during init)
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet
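To confirm Docker really switched to the systemd cgroup driver:

docker info | grep -i cgroup
# expect: Cgroup Driver: systemd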
  • 4 Fetch the images kubeadm needs from a mirror inside China
  • 4.1 List the required image versions
[root@uat-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.17.15
k8s.gcr.io/kube-controller-manager:v1.17.15
k8s.gcr.io/kube-scheduler:v1.17.15
k8s.gcr.io/kube-proxy:v1.17.15
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
  • 4.2 Pull the matching images from the Aliyun mirror registry and re-tag them with the names kubeadm expects
    Create a kubeadm.sh script:
#!/bin/bash
set -e
KUBE_VERSION=v1.17.15
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.4.3-0
CORE_DNS_VERSION=1.6.5

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-chengdu.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
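Make the script executable and run it (the master needs all of these images; workers strictly only need kube-proxy and pause, but running the full script on every node is simplest):

chmod +x kubeadm.sh && ./kubeadm.sh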

Check the images:

[root@uat-master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.17.15            483fea38f633        5 days ago          117MB
k8s.gcr.io/kube-apiserver            v1.17.15            d354db900d2e        5 days ago          171MB
k8s.gcr.io/kube-controller-manager   v1.17.15            d3f0dfc74e3f        5 days ago          161MB
k8s.gcr.io/kube-scheduler            v1.17.15            e848b6f39abb        5 days ago          94.4MB
k8s.gcr.io/coredns                   1.6.5               70f311871ae1        13 months ago       41.6MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        13 months ago       288MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
[root@uat-master ~]#

2.5 Initialize the master with kubeadm init

  • The kubeadm init flow
01 - Run a series of preflight checks to verify this machine can run Kubernetes.

02 - Generate the certificates Kubernetes needs to serve, stored under:
/etc/kubernetes/pki/*

03 - Generate the kubeconfig files other components need to reach the kube-apiserver:
    ls /etc/kubernetes/
    admin.conf  controller-manager.conf  kubelet.conf  scheduler.conf

04 - Generate the static Pod manifests for the master components:
    ls /etc/kubernetes/manifests/*.yaml
    kube-apiserver.yaml
    kube-controller-manager.yaml
    kube-scheduler.yaml

05 - Generate the static Pod manifest for etcd:
    ls /etc/kubernetes/manifests/*.yaml
    kube-apiserver.yaml
    kube-controller-manager.yaml
    kube-scheduler.yaml
    etcd.yaml

06 - As soon as these YAML files appear under /etc/kubernetes/manifests/, a directory the kubelet watches, the kubelet automatically creates the pods they define, i.e. the master component containers. Once the containers start, kubeadm polls the master health-check URL localhost:6443/healthz and waits until the master components are fully up.

07 - Generate a bootstrap token for the cluster.

08 - Save important master data such as ca.crt as a ConfigMap (persisted in etcd), for later use when worker nodes join.

09 - Finally, install the default add-ons; kube-proxy and DNS are the two mandatory ones.

Run it on the master:
kubeadm init --kubernetes-version=1.17.15 --apiserver-advertise-address=172.x.x.215 --pod-network-cidr=10.244.0.0/16
  • After init reports success, create the kubectl config file as prompted, and be sure to save the printed kubeadm join command
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check with kubectl cluster-info whether the control plane is up
[root@uat-master ~]# kubectl cluster-info
Kubernetes master is running at https://172.x.x.215:6443
KubeDNS is running at https://172.x.x.215:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  • The system components are now running as pods
  • Note: coredns has not started yet; it needs a network plugin first
[root@uat-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
······
etcd-uat-master                           1/1     Running   1          2d1h
kube-apiserver-uat-master                 1/1     Running   1          2d1h
kube-controller-manager-uat-master        1/1     Running   1          2d1h
kube-proxy-48sr7                          1/1     Running   1          2d1h
kube-proxy-wxqsj                          1/1     Running   1          2d
kube-proxy-aeasd                          1/1     Running   1          2d
kube-scheduler-uat-master                 1/1     Running   1          2d1h
[root@uat-master ~]# 

  • Health check
[root@uat-master ~]# curl -k https://localhost:6443/healthz
ok[root@uat-master ~]# 
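Another quick view of control-plane health (the componentstatuses API is deprecated in later releases but still works in 1.17):

kubectl get componentstatuses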

2.6 Deploy the Calico network plugin

Pick a network plugin: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Calico network plugin docs: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/

Calico is likewise installed from the master node.

  • Install Calico into k8s
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
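One caveat: the Calico v3.9 manifest defaults its IP pool to 192.168.0.0/16, which is why the pod IPs later in this article look like 192.168.60.x rather than falling inside the 10.244.0.0/16 range passed to kubeadm init. If you want the two to match, download the manifest and set CALICO_IPV4_POOL_CIDR before applying:

wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
# in calico.yaml, uncomment and edit the env var on the calico-node container:
#   - name: CALICO_IPV4_POOL_CIDR
#     value: "10.244.0.0/16"
kubectl apply -f calico.yaml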
  • Verify that Calico installed successfully
[root@uat-master ~]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
kube-system     calico-kube-controllers-7cc97544d-f4dw2     1/1     Running   1          2d1h
kube-system     calico-node-6chxj                           1/1     Running   7          2d1h
kube-system     calico-node-wxrrv                           1/1     Running   1          2d1h
kube-system     calico-node-aster                           1/1     Running   1          2d1h
kube-system     coredns-6955765f44-4g7pv                    1/1     Running   1          2d1h
kube-system     coredns-6955765f44-g2xbj                    1/1     Running   1          2d1h
kube-system     coredns-6955765f44-radfe                    1/1     Running   1          2d1h
kube-system     etcd-uat-master                             1/1     Running   1          2d1h
kube-system     kube-apiserver-uat-master                   1/1     Running   1          2d1h
kube-system     kube-controller-manager-uat-master          1/1     Running   1          2d1h
kube-system     kube-proxy-48sr7                            1/1     Running   1          2d1h
kube-system     kube-proxy-wxqsj                            1/1     Running   1          2d1h
kube-system     kube-proxy-ertye                            1/1     Running   1          2d1h
kube-system     kube-scheduler-uat-master                   1/1     Running   1          2d1h
[root@uat-master ~]# 

2.7 Add the worker nodes

  • Run the following on each host to be added to the cluster (it is the last thing kubeadm init printed on the master)
kubeadm join 172.x.x.215:6443 --token v5w25t.92s9rsst7bapvit9     --discovery-token-ca-cert-hash sha256:0c646df2915d1ca055aeeb88770a52813869a725c1aed28b729d1250ac4e0d6f
  • After the join command completes, list the cluster nodes on the master
[root@uat-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
uat-master   Ready    master   2d2h   v1.17.4
uat-w1       Ready    <none>   2d1h   v1.17.4
uat-w2       Ready    <none>   2d1h   v1.17.4
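If a node joins later and the original token has expired (the default TTL is 24 hours), print a fresh join command on the master:

kubeadm token create --print-join-command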

2.8 Try it out

  • Deploy the whoami-deployment application
[root@uat-master k8s]# cat whoami-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: jwilder/whoami
        ports:
        - containerPort: 8000
[root@uat-master k8s]# kubectl get pods -o wide
NAME                                 READY   STATUS              RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
whoami-deployment-5ff8cd9445-g85sn   0/1     ContainerCreating   0          26s   <none>           uat-w1   <none>           <none>
whoami-deployment-5ff8cd9445-tcjx6   1/1     Running             0          26s   192.168.60.225   uat-w1   <none>           <none>
whoami-deployment-5ff8cd9445-vwtjr   0/1     ContainerCreating   0          26s   <none>           uat-w1   <none>           <none>
[root@uat-master k8s]# kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
whoami-deployment-5ff8cd9445-g85sn   1/1     Running   0          50s   192.168.60.226   uat-w1   <none>           <none>
whoami-deployment-5ff8cd9445-tcjx6   1/1     Running   0          50s   192.168.60.225   uat-w1   <none>           <none>
whoami-deployment-5ff8cd9445-vwtjr   1/1     Running   0          50s   192.168.60.227   uat-w2   <none>           <none>
[root@uat-master k8s]# curl 192.168.60.227:8000
I'm whoami-deployment-5ff8cd9445-vwtjr

  • Scale out and in
[root@uat-master k8s]# kubectl scale deployment whoami-deployment --replicas=1
deployment.apps/whoami-deployment scaled
[root@uat-master k8s]# kubectl get pods -o wide
NAME                                 READY   STATUS        RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
whoami-deployment-5ff8cd9445-g85sn   0/1     Terminating   0          4m49s   192.168.60.226   uat-w1   <none>           <none>
whoami-deployment-5ff8cd9445-tcjx6   1/1     Running       0          4m49s   192.168.60.225   uat-w1   <none>           <none>
[root@uat-master k8s]# 

  • Delete the whoami-deployment application
[root@uat-master k8s]# kubectl delete -f whoami-deployment.yaml 
deployment.apps "whoami-deployment" deleted
[root@uat-master k8s]# kubectl get pods -o wide
NAME                                 READY   STATUS        RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
whoami-deployment-5ff8cd9445-tcjx6   0/1     Terminating   0          8m15s   192.168.60.225   uat-w1   <none>           <none>
[root@uat-master k8s]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
[root@uat-master k8s]# 

2.9 The Ingress component

Ingress provides external access to services inside the cluster.
Official Ingress docs: https://kubernetes.io/docs/concepts/services-networking/ingress/

GitHub Ingress Nginx:https://github.com/kubernetes/ingress-nginx

Nginx Ingress Controller:https://kubernetes.github.io/ingress-nginx

  • Run the Ingress Nginx Controller as a Pod via a Deployment. To expose it externally you can use a Service of type NodePort or use HostPort. The official default, NodePort, wastes cluster node ports, so here we choose HostPort and pin the controller to the uat-w1 node

Create the mandatory.yaml file; it can also be fetched from the official site https://kubernetes.github.io/ingress-nginx

[root@uat-master k8s]# cat mandatory.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        name: ingress
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

To make sure the nginx-controller lands on uat-w1, label that node:

 kubectl label node uat-w1 name=ingress
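A quick check that the label is in place:

kubectl get nodes -l name=ingress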

Running in HostPort mode requires the following setting in the Deployment spec of mandatory.yaml (already present in the manifest above):
hostNetwork: true

  • Make sure ports 80 and 443 on uat-w1 are not already in use
lsof -i tcp:80
lsof -i tcp:443
  • Deploy the ingress controller
[root@uat-master k8s]# kubectl apply -f mandatory.yaml 
······
[root@uat-master k8s]# kubectl get all -n ingress-nginx
NAME                                            READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-64f4897cbd-rvgj5   1/1     Running   1          2d1h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller   1/1     1            1           2d1h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-controller-64f4897cbd   1         1         1       2d1h
[root@uat-master k8s]# 
  • Try out Ingress
    Create whoami-service.yaml and the Service (note: the whoami Deployment was deleted in section 2.8, so re-apply whoami-deployment.yaml first so the Service has endpoints)
[root@uat-master k8s]# vi whoami-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  ports:
  - port: 80   
    protocol: TCP
    targetPort: 8000
  selector:
    app: whoami
  type: NodePort  
[root@uat-master k8s]# kubectl apply -f whoami-service.yaml 
service/whoami-service created
[root@uat-master k8s]# kubectl get pods |grep whoami
[root@uat-master k8s]# kubectl get svc |grep whoami
whoami-service                             NodePort    10.97.146.171    <none>        80:32743/TCP     29s
[root@uat-master k8s]# curl 10.97.146.171:80 
I'm whoami-deployment-5ff8cd9445-j2l5q
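Since the Service is of type NodePort, it should also answer on any node IP at the mapped port shown above (32743 here):

curl 172.x.x.216:32743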
  • Create whoami-ingress.yaml and the Ingress resource
[root@uat-master k8s]# cat whoami-ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  rules:
  - host: i.bingo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: whoami-service
          servicePort: 80
[root@uat-master k8s]# kubectl apply -f whoami-ingress.yaml 
ingress.extensions/whoami-ingress created
[root@uat-master k8s]# kubectl get ingress
NAME             HOSTS               ADDRESS   PORTS   AGE
whoami-ingress   i.bingo.com                   80      8s
[root@uat-master k8s]# 
  • Configure the hosts file so the i.bingo.com domain resolves
[root@uat-master k8s]# vi /etc/hosts
172.x.x.215 uat-master
172.x.x.216 uat-w1
172.x.x.217 uat-w2
## public IP of the node running nginx-ingress-controller: 120.x.x.227
120.x.x.227 i.bingo.com  
[root@uat-master k8s]# curl i.bingo.com
I'm whoami-deployment-5ff8cd9445-95c5m
[root@uat-master k8s]# curl i.bingo.com
I'm whoami-deployment-5ff8cd9445-j2l5q
[root@uat-master k8s]# curl i.bingo.com
I'm whoami-deployment-5ff8cd9445-wr4pz
[root@uat-master k8s]# 
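If editing hosts files on every client is inconvenient, the same test works from anywhere by passing the Host header explicitly:

curl -H 'Host: i.bingo.com' http://120.x.x.227/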
  • If you have a registered domain of your own, just point its DNS record at the public IP of the node where nginx-ingress-controller runs