Several ways to access pod services in a k8s cluster from outside the cluster

1. hostPort or hostNetwork

This approach maps a port inside the pod directly onto the host that runs the pod; external clients reach the pod via the host IP plus that port.

  • hostPort
vim nginx-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.name: nginx-test
  template:
    metadata:
      labels:
        app.name: nginx-test
    spec:
      containers:
      - image: nginx:1.21.1
        imagePullPolicy: IfNotPresent
        name: nginx-test
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
          hostPort: 40080                        # the key setting: expose this port on the host
kubectl apply -f nginx-test.yaml

Test access (on the node where the pod is running):

curl http://127.0.0.1:40080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

  • hostNetwork
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.name: nginx-test
  template:
    metadata:
      labels:
        app.name: nginx-test
    spec:
      hostNetwork: true                # use the host's network namespace
      containers:
      - image: nginx:1.21.1
        imagePullPolicy: IfNotPresent
        name: nginx-test
        ports:
        - containerPort: 80
          name: http
          protocol: TCP

Access the pod via the host IP + port.

Note

        Both approaches require pinning the pod to specific nodes with labels, and you must watch out for pod rescheduling (the pod can move to another node). If you also need load balancing across multiple pods, combine this with an external load balancer.
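The node pinning mentioned above can be done with a nodeSelector in the Deployment's pod template. A minimal sketch (the label key/value `app-node=nginx` is a hypothetical example; adjust it to your cluster):

```yaml
# First label the target node:
#   kubectl label node <node-name> app-node=nginx
# Then restrict scheduling in the Deployment's pod template:
spec:
  template:
    spec:
      nodeSelector:
        app-node: nginx      # the pod is only scheduled onto nodes carrying this label
```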

2. NodePort

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.name: nginx-test
  template:
    metadata:
      labels:
        app.name: nginx-test
    spec:
      containers:
      - image: nginx:1.21.1
        imagePullPolicy: IfNotPresent
        name: nginx-test
        ports:
        - containerPort: 80
          name: http
          protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-nodeport
  labels:
    app: nginx-test-nodeport
spec:
  type: NodePort  # Service type; other options are ExternalName, ClusterIP, LoadBalancer
  ports:
  - port: 80      # the Service's port inside the cluster
    targetPort: 80 # the port the pod exposes
    nodePort: 31080 # the port exposed outside the cluster; allowed range 30000-32767
  selector:
    app.name: nginx-test    # must match the pod's labels

With NodePort, the service is reachable from outside on any node's IP plus the NodePort; with the manifest above, that is <nodeIP>:31080.

This approach already load-balances across the pods behind the Service.
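As the manifest comment notes, nodePort values must fall inside the cluster's service node port range (30000-32767 by default, configurable via the kube-apiserver's --service-node-port-range flag). A small hypothetical helper, not part of any Kubernetes library, to sanity-check a port before applying a manifest:

```python
# Validate that a chosen nodePort lies in the cluster's allowed range.
# The default range below matches the kube-apiserver default of 30000-32767.
DEFAULT_NODE_PORT_RANGE = range(30000, 32768)

def node_port_valid(port: int, allowed: range = DEFAULT_NODE_PORT_RANGE) -> bool:
    """Return True if `port` may be used as a Service nodePort."""
    return port in allowed

print(node_port_valid(31080))  # True  - the port used in the manifest above
print(node_port_valid(80))     # False - the API server would reject this
```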

3. Cloud provider scenario (LoadBalancer)

        Kubernetes Services expose workloads through the ClusterIP, NodePort, LoadBalancer, and ExternalName types. A Service by itself only provides layer-4 load balancing; combined with Ingress you gain layer-7 load balancing as well.

        A LoadBalancer Service builds on NodePort: the cloud provider provisions an external load balancer and forwards requests to <NodeIP>:<NodePort>. This type only works on clusters backed by a cloud provider integration.

        Services are implemented by the kube-proxy component together with iptables. In iptables mode, kube-proxy has to maintain a large number of iptables rules on each host; with many Services and Pods, constantly refreshing those rules consumes significant CPU. kube-proxy's IPVS mode can be enabled to support a much larger number of Pods.
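Enabling the IPVS mode mentioned above is done in kube-proxy's configuration. A minimal sketch for a kubeadm-managed cluster (assumes the `ip_vs` kernel modules are loaded on every node; after editing, restart the kube-proxy pods so the new mode takes effect):

```yaml
# kubectl edit configmap kube-proxy -n kube-system
# Inside config.conf, set the proxy mode:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # an empty string or "iptables" is the default
```

Deleting the kube-proxy pods (e.g. `kubectl -n kube-system delete pod -l k8s-app=kube-proxy`) lets the DaemonSet recreate them with the new mode.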

apiVersion: v1
kind: Service
metadata:
  name: nginx-test-loadbalancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80           # nodePort is omitted; the API server allocates one from 30000-32767
    protocol: TCP
    targetPort: 80
  selector:
    app.name: nginx-test
  sessionAffinity: None
  type: LoadBalancer
#  externalIPs:                # an external IP can also be specified
#  - 192.168.0.6

4. Ingress

        Ingress is the recommended approach for production. It behaves much like nginx: you deploy an ingress-controller service, expose that controller using one of the methods above, and then configure routing rules so requests reach the other services inside the cluster.

1) Install the ingress controller

Download the two files provided upstream:

https://github.com/kubernetes/ingress-nginx/blob/nginx-0.30.0/deploy/static/mandatory.yaml
https://github.com/kubernetes/ingress-nginx/blob/nginx-0.30.0/deploy/baremetal/service-nodeport.yaml

mandatory.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container

service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:                    # required so the Service routes to the controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  externalTrafficPolicy: Cluster

Adjust the images in the manifests above to your environment, e.g. when pulling from a private registry.

Run on the cluster master:

kubectl apply -f mandatory.yaml
kubectl apply -f service-nodeport.yaml

Once installation completes there will be a new namespace, ingress-nginx; verify that the pods and the Service in that namespace are running correctly.

2) Deploy a test application

Deploy the NodePort test application from the section above: create the Deployment nginx-test and the Service nginx-test-nodeport.

The Service type can be changed to ClusterIP, since external traffic now enters through the ingress controller.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.name: nginx-test
  template:
    metadata:
      labels:
        app.name: nginx-test
    spec:
      containers:
      - image: nginx:1.21.1
        imagePullPolicy: IfNotPresent
        name: nginx-test
        ports:
        - containerPort: 80
          name: http
          protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-nodeport
  labels:
    app: nginx-test-nodeport
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app.name: nginx-test

3) Add a routing rule

Create the file ing-test.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 50m   # request body size limit (an annotation, not a label)
  name: nginx-test-ing
  namespace: default
spec:
  rules:
  - host: xxx.xxx.xxx                    # the domain to bind
    http:
      paths:
      - backend:
          serviceName: nginx-test-nodeport       # Service name
          servicePort: 80                        # Service port
        path: /test                              # URL path
        pathType: ImplementationSpecific

Point the domain at the master IP (or any node IP) and access the service at http://xxx.xxx.xxx/test/ through the ingress Service's HTTP NodePort. Without DNS, you can test with an explicit Host header, e.g. `curl -H 'Host: xxx.xxx.xxx' http://<nodeIP>:<httpNodePort>/test/`.
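The manifest above targets the extensions/v1beta1 API, which matches the nginx-0.30.0 controller era but was removed in Kubernetes 1.22. On newer clusters the same rule is written against networking.k8s.io/v1 (a sketch using the same names as above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test-ing
  namespace: default
spec:
  rules:
  - host: xxx.xxx.xxx                  # the domain to bind
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: nginx-test-nodeport  # Service name
            port:
              number: 80               # Service port
```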
