Building K8S from Scratch -- Setting Up K8S Ingress


What exactly is an Ingress? There is plenty of material about it online (the official docs are recommended), so feel free to study it yourself. In short, it is a load-balancing component whose main purpose is to solve the problem that, when a Service is exposed via NodePort, the Node IP that clients rely on may drift. Moreover, if you expose a large number of host ports through NodePort, management quickly becomes chaotic.

A better solution is to let the outside world access a Service through a domain name, without caring about its Node IP or port. So why not simply use Nginx? Because in a K8S cluster, every time a new service was added we would have to add another entry to the Nginx configuration by hand. That is repetitive manual work, and repetitive manual work is exactly what we should eliminate with technology.

Ingress solves the problems above. It consists of two components, the Ingress and the Ingress Controller:

  • Ingress
    Abstracts the Nginx configuration into an Ingress object; every time a new service is added, you only need to write a new Ingress YAML file
  • Ingress Controller
    Converts newly added Ingress objects into Nginx configuration and makes it take effect

Alright, enough talk, let's get going.

Preparation

Official documentation

Life is short, so let's not reinvent the wheel: this article builds on the official standard manifests; see the official documentation for details. The official docs ask you to run the following commands in order:


 
 
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
        | kubectl apply -f -
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
        | kubectl apply -f -
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
        | kubectl apply -f -
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
        | kubectl apply -f -
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
        | kubectl apply -f -

The YAML files above create the Namespace and ConfigMaps used by Ingress, as well as the default backend default-backend. Most importantly, because we built our K8S cluster with Kubeadm earlier, we must also run:


 
 
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \
        | kubectl apply -f -
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \
        | kubectl apply -f -

This is because clusters created by Kubeadm have RBAC enabled by default, so the corresponding RBAC rules must also be created for Ingress.

Importing the images

However, if we run things exactly as above, our Ingress will most likely not work. Instead, we first wget all of the YAML files above, make a few modifications, and only then run kubectl apply -f to create the resources. Also note that some of the images referenced in these YAML files cannot currently be pulled from inside China, for example:


 
 
    gcr.io/google_containers/defaultbackend:1.4
    quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0

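If you have access to a machine that can reach gcr.io and quay.io, one way to produce such image tarballs yourself is to pull and save the images there and copy the tar files over (a sketch; the tar file names are arbitrary):

    docker pull gcr.io/google_containers/defaultbackend:1.4
    docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
    # save each image to a tar file that can be copied to the cluster nodes
    docker save -o defaultbackend_1.4.tar gcr.io/google_containers/defaultbackend:1.4
    docker save -o nginx-ingress-controller_0.14.0.tar quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0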
I have downloaded them in advance; you can grab them here:


 
 
    URL:      https://pan.baidu.com/s/1N-bK9hI7JTZZB6AzmaT8PA
    Password: 1a8a

Once you have the images, run the following commands on every node to import them:


 
 
    # the image ID passed to "docker tag" is whatever "docker images" reports after the load
    docker load < quay.io#kubernetes-ingress-controller#nginx-ingress-controller_0.14.0.tar
    docker tag 452a96d81c30 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
    docker load < gcr.io#google_containers#defaultbackend.tar
    docker tag 452a96d81c30 gcr.io/google_containers/defaultbackend:1.4

As shown above, after loading the images, don't forget to tag them; otherwise the image name will show up as <none>.
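A quick way to confirm the tags took effect (an optional check):

    docker images | grep -E 'defaultbackend|nginx-ingress-controller'

Both images should now be listed with their full repository names and tags.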


Overview of the key files

Before going further, here is a brief introduction to some of the important files.

default-backend.yaml

The role of default-backend is to be the fallback: if the requested domain does not match any rule, the request is forwarded to the default-http-backend Service, which simply returns 404:


 
 
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: default-http-backend
      labels:
        app: default-http-backend
      namespace: ingress-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: default-http-backend
      template:
        metadata:
          labels:
            app: default-http-backend
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - name: default-http-backend
            # Any image is permissible as long as:
            # 1. It serves a 404 page at /
            # 2. It serves 200 on a /healthz endpoint
            image: gcr.io/google_containers/defaultbackend:1.4
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 30
              timeoutSeconds: 5
            ports:
            - containerPort: 8080
            resources:
              limits:
                cpu: 10m
                memory: 20Mi
              requests:
                cpu: 10m
                memory: 20Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: ingress-nginx
      labels:
        app: default-http-backend
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: default-http-backend

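Once this Deployment is running, a minimal sanity check looks roughly like this (optional; <pod-ip> stands for the Pod IP that kubectl get pod -n ingress-nginx -o wide reports):

    curl http://<pod-ip>:8080/healthz    # expect HTTP 200
    curl http://<pod-ip>:8080/           # expect "default backend - 404"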
rbac.yaml

rbac.yaml takes care of the RBAC authorization for Ingress. It creates the ServiceAccount, ClusterRole, Role, RoleBinding and ClusterRoleBinding that Ingress uses. These concepts were briefly introduced in the earlier post "Building a Kubernetes Cluster from Scratch".


 
 
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nginx-ingress-serviceaccount
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: nginx-ingress-clusterrole
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - endpoints
          - nodes
          - pods
          - secrets
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - services
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - create
          - patch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses/status
        verbs:
          - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: nginx-ingress-role
      namespace: ingress-nginx
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - pods
          - secrets
          - namespaces
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - configmaps
        resourceNames:
          # Defaults to "<election-id>-<ingress-class>"
          # Here: "<ingress-controller-leader>-<nginx>"
          # This has to be adapted if you change either parameter
          # when launching the nginx-ingress-controller.
          - "ingress-controller-leader-nginx"
        verbs:
          - get
          - update
      - apiGroups:
          - ""
        resources:
          - configmaps
        verbs:
          - create
      - apiGroups:
          - ""
        resources:
          - endpoints
        verbs:
          - get
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: nginx-ingress-role-nisa-binding
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: nginx-ingress-role
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
        namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: nginx-ingress-clusterrole-nisa-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: nginx-ingress-clusterrole
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
        namespace: ingress-nginx
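After rbac.yaml has been applied, you can spot-check that the ServiceAccount really received the expected permissions with kubectl auth can-i (an optional check; both commands should print "yes"):

    kubectl auth can-i list ingresses \
        --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
    kubectl auth can-i get configmaps -n ingress-nginx \
        --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount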

with-rbac.yaml

with-rbac.yaml is the core of the Ingress setup: it creates the ingress-controller. As mentioned earlier, the ingress-controller's job is to convert newly added Ingress objects into Nginx configuration.


 
 
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ingress-nginx
      template:
        metadata:
          labels:
            app: ingress-nginx
          annotations:
            prometheus.io/port: '10254'
            prometheus.io/scrape: 'true'
        spec:
          serviceAccountName: nginx-ingress-serviceaccount
          containers:
            - name: nginx-ingress-controller
              image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
              args:
                - /nginx-ingress-controller
                - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
                - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
                - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
                - --annotations-prefix=nginx.ingress.kubernetes.io
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              ports:
                - name: http
                  containerPort: 80
                - name: https
                  containerPort: 443
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              securityContext:
                runAsNonRoot: false

As shown above, nginx-ingress-controller is started with arguments pointing at the default-backend-service created earlier as well as at several ConfigMaps.
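Before creating the controller it is worth confirming that the resources these arguments refer to already exist (an optional check; the ConfigMaps come from configmap.yaml, tcp-services-configmap.yaml and udp-services-configmap.yaml):

    kubectl get svc default-http-backend -n ingress-nginx
    kubectl get configmap -n ingress-nginx
    # expect nginx-configuration, tcp-services and udp-services to be listed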

Creating the Ingress

1. Create the Ingress-controller

Note that the official with-rbac.yaml cannot be used as-is; we have to change two things:

Add the hostNetwork setting

As shown below, add hostNetwork: true right above serviceAccountName:


 
 
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io

Setting hostNetwork: true is a way of putting the Pod directly on the host's network. With it in place, the Ingress-controller gets the same IP as its host k8s-node1 (192.168.56.101), and port 80 is a port on the host itself. That way we can reach the Ingress-controller (which is really just nginx) directly at 192.168.56.101:80, and the Ingress-controller then forwards our requests to the appropriate backend.
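Once the controller is running (see the creation step below), a quick way to verify the hostNetwork setup is to hit the host directly; with no matching Ingress rule yet, the default backend should answer:

    curl -i http://192.168.56.101/
    # expect an HTTP 404 response ("default backend - 404")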

Add an environment variable

Add the following environment variable to its env section:


 
 
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: KUBERNETES_MASTER
        value: http://192.168.56.101:8080

Otherwise, after creation the Pod fails with errors like these:


 
 
    [root@k8s-node1 ingress]# kubectl describe pod nginx-ingress-controller-9fbd7596d-rt9sf -n ingress-nginx
    ... (earlier output omitted) ...
    Events:
      Type     Reason                 Age                From                Message
      ----     ------                 ----               ----                -------
      Normal   Scheduled              30s                default-scheduler   Successfully assigned nginx-ingress-controller-9fbd7596d-rt9sf to k8s-node1
      Normal   SuccessfulMountVolume  30s                kubelet, k8s-node1  MountVolume.SetUp succeeded for volume "nginx-ingress-serviceaccount-token-lq2dt"
      Warning  BackOff                21s                kubelet, k8s-node1  Back-off restarting failed container
      Normal   Pulled                 11s (x3 over 29s)  kubelet, k8s-node1  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0" already present on machine
      Normal   Created                11s (x3 over 29s)  kubelet, k8s-node1  Created container
      Warning  Failed                 10s (x3 over 28s)  kubelet, k8s-node1  Error: failed to start container "nginx-ingress-controller": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/nginx-ingress-controller\": stat /nginx-ingress-controller: no such file or directory": unknown

After modifying with-rbac.yaml, run kubectl create -f on the following YAML files one by one to create the Ingress-controller:

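Assuming the official files were downloaded into the current directory under their original names (and with-rbac.yaml edited as described above), the sequence looks like this:

    kubectl create -f namespace.yaml
    kubectl create -f default-backend.yaml
    kubectl create -f configmap.yaml
    kubectl create -f tcp-services-configmap.yaml
    kubectl create -f udp-services-configmap.yaml
    kubectl create -f rbac.yaml
    kubectl create -f with-rbac.yaml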

Once everything has been created successfully, it looks like this:


 
 
    [root@k8s-node1 ingress]# kubectl get pod -n ingress-nginx -o wide
    NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE
    default-http-backend-5c6d95c48-pdjn9        1/1     Running   0          23s   192.168.36.81   k8s-node1
    nginx-ingress-controller-547cd7d9cb-jmvpn   1/1     Running   0          8s    192.168.36.82   k8s-node1
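If the controller Pod does not reach Running, its logs are the first thing to check (use whatever Pod name kubectl get pod reports on your cluster):

    kubectl logs -n ingress-nginx nginx-ingress-controller-547cd7d9cb-jmvpn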

2. Create a custom Ingress

With the ingress-controller in place, we can now create our own Ingress. I have already set up a Kibana service in advance, so let's create an Ingress for Kibana:


 
 
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: kibana-ingress
      namespace: default
    spec:
      rules:
        - host: myk8s.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: kibana
                  servicePort: 5601

Where:

  • The host under rules must be a domain name, not an IP. It is the domain of the host where the Ingress-controller Pod runs, i.e. the domain that corresponds to the Ingress-controller's IP.
  • The path under paths is the URL path being mapped. Mapping / means that a request to myk8s.com is forwarded to the kibana Service on port 5601.
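Assuming the manifest above is saved as kibana-ingress.yaml (the file name is arbitrary), creating it is a single command, and the rule can be exercised even before touching any hosts file by sending the Host header explicitly:

    kubectl create -f kibana-ingress.yaml
    curl -H 'Host: myk8s.com' http://192.168.56.101/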

After it has been created, check it:


 
 
    [root@k8s-node1 ingress]# kubectl get ingress -o wide
    NAME             HOSTS       ADDRESS   PORTS     AGE
    kibana-ingress   myk8s.com             80        6s

Now run kubectl exec -it nginx-ingress-controller-5b79cbb5c6-2zr7f -n ingress-nginx -- cat /etc/nginx/nginx.conf and you can see the generated nginx configuration. It is fairly long, so pick out the relevant parts yourself:


 
 
    ## start server myk8s.com
    server {
        server_name myk8s.com;
        listen 80;
        listen [::]:80;
        set $proxy_upstream_name "-";

        location /kibana {
            log_by_lua_block {
            }
            port_in_redirect off;
            set $proxy_upstream_name "";
            set $namespace      "kube-system";
            set $ingress_name   "dashboard-ingress";
            set $service_name   "kibana";
            client_max_body_size "1m";
            proxy_set_header Host $best_http_host;
            # Pass the extracted client certificate to the backend
            # Allow websocket connections
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header X-Real-IP $the_real_ip;
            proxy_set_header X-Forwarded-For $the_real_ip;
            proxy_set_header X-Forwarded-Host $best_http_host;
            proxy_set_header X-Forwarded-Port $pass_port;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
            proxy_set_header X-Original-URI $request_uri;
            proxy_set_header X-Scheme $pass_access_scheme;
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";
            # Custom headers to proxied server
            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
            proxy_buffering "off";
            proxy_buffer_size "4k";
            proxy_buffers 4 "4k";
            proxy_request_buffering "on";
            proxy_http_version 1.1;
            proxy_cookie_domain off;
            proxy_cookie_path off;
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
            proxy_next_upstream_tries 0;
            # No endpoints available for the request
            return 503;
        }

        location / {
            log_by_lua_block {
            }
            port_in_redirect off;
            set $proxy_upstream_name "";
            set $namespace      "default";
            set $ingress_name   "kibana-ingress";
            set $service_name   "kibana";
            client_max_body_size "1m";
            proxy_set_header Host $best_http_host;
            # Pass the extracted client certificate to the backend
            # Allow websocket connections
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header X-Real-IP $the_real_ip;
            proxy_set_header X-Forwarded-For $the_real_ip;
            proxy_set_header X-Forwarded-Host $best_http_host;
            proxy_set_header X-Forwarded-Port $pass_port;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
            proxy_set_header X-Original-URI $request_uri;
            proxy_set_header X-Scheme $pass_access_scheme;
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";
            # Custom headers to proxied server
            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
            proxy_buffering "off";
            proxy_buffer_size "4k";
            proxy_buffers 4 "4k";
            proxy_request_buffering "on";
            proxy_http_version 1.1;
            proxy_cookie_domain off;
            proxy_cookie_path off;
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
            proxy_next_upstream_tries 0;
            # No endpoints available for the request
            return 503;
        }
    }
    ## end server myk8s.com

3. Set up hosts entries

First, on the host where the Ingress-controller Pod runs (here k8s-node1), append the domain myk8s.com mentioned above to the /etc/hosts file:

192.168.56.101 myk8s.com

 
 

Besides that, if you want to access Kibana from a browser on your own Windows machine, you also need to add the same entry to C:\Windows\System32\drivers\etc\hosts. Once that is set up, test on both k8s-node1 and the physical machine to make sure the name resolves correctly:
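A quick resolution check from the node (on Windows, ping myk8s.com in a command prompt works the same way):

    ping -c 1 myk8s.com    # should resolve to 192.168.56.101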


Testing

On the Windows machine, open myk8s.com in Chrome, which is effectively the same as visiting 192.168.56.101:80; the Kibana UI should come up.


Visiting an arbitrary wrong address such as myk8s.com/abc returns the expected 404:
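The same checks from a terminal (a sketch; the exact output depends on your Kibana version):

    curl -i http://myk8s.com/        # served by Kibana through the ingress
    curl -i http://myk8s.com/abc     # expect a 404 response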

 
