Reference articles for this setup:
YAML files needed for this experiment:
Gitee: Ingress files
Most blog posts only paste the Ingress YAML files, and these change between versions, so here is a quick note on how to find the files yourself.
1. First check the official Kubernetes website.
2. Go to the official NGINX Ingress Controller page.
It turns out that page is not reachable from here, so a proxy is needed. In that case, GitHub has the YAML files we need.
You may notice the file path matches the bare-metal installation steps from the NGINX Ingress Controller docs mentioned earlier; it is worth a look.
I. Introduction:
Ingress involves two components: the ingress controller and the Ingress resource.
Ingress:
Put simply: you used to edit the nginx configuration by hand to map each domain to its Service. The Ingress resource abstracts that step away. You describe the routing in YAML and create or update the object instead of touching nginx directly. Which raises the question: how does nginx get updated?
Ingress controller:
That is exactly the controller's job. The ingress controller talks to the Kubernetes API, watches the cluster for changes to Ingress rules, reads them, renders an nginx configuration from its own template, and writes it into the nginx Pod.
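To make the abstraction concrete, a minimal Ingress object looks roughly like this (a sketch only: demo.example.com and demo-svc are placeholder names, and the real manifest used in this experiment appears in section III):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com        # requests carrying this Host header...
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-svc   # ...are routed to this Service
          servicePort: 80

Change the YAML, kubectl apply it again, and the controller reconfigures nginx for you.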
II. Files for the experiment:
File source: https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/deploy.yaml
The files needed:
1. mandatory.yaml (it combines configmap.yaml, namespace.yaml, rbac.yaml, and with-rbac.yaml)
2. service-nodeport.yaml (exposes the ingress-nginx service externally)
3. Deployments_svcandnfs.yaml (deploys the backend service)
4. ingress.yaml (deploys the Ingress)
This experiment uses static NFS persistent storage (if you don't want it, comment out the PV/PVC parts of the backend manifest); if you do, see the server-side sketch below.
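The NFS server needs an export matching the PV defined later. A minimal sketch, assuming a CentOS-style server at 192.168.11.128 (package and unit names may differ on your distro):

yum install -y nfs-utils                          # install the NFS server
mkdir -p /pv_test                                 # directory the PV will point at
echo "/pv_test *(rw,no_root_squash)" >> /etc/exports
systemctl enable --now nfs-server                 # start NFS now and on boot
exportfs -arv                                     # re-export and list the shares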
III. Deploying the Ingress experiment:
1) Prepare the backend service:
vim Deployments_svcandnfs.yaml
apiVersion: v1
kind: PersistentVolume             # the PV
metadata:
  namespace: default               # PVs are cluster-scoped, so this field is ignored
  name: nfsvolume                  # name of the PV being created
spec:
  capacity:                        # capacity
    storage: 5000M                 # size of the PV: 5000M
  accessModes:                     # access modes for the PV
  - ReadWriteOnce                  # the volume can be mounted read-write by a single node
  nfs:
    server: 192.168.11.128         # the NFS server
    path: /pv_test                 # the NFS exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim        # the PVC
metadata:
  namespace: default               # default namespace
  name: nfsvolume                  # name of the PVC
spec:
  accessModes:                     # access modes for the PVC
  - ReadWriteOnce                  # must match the PV's access mode for the claim to bind
  resources:                       # resources
    requests:                      # requested resources (effectively a minimum)
      storage: 2500M               # the PVC requests 2500M
---
apiVersion: apps/v1                    # API version (apps/v1; extensions/v1beta1 is deprecated for Deployments)
kind: Deployment                       # the controller
metadata:
  namespace: default                   # namespace
  name: deploycpuandmemorynginx        # name of the Deployment
spec:
  replicas: 1                          # number of replicas to start
  selector:                            # label selector
    matchLabels:
      app: deploycpuandmemorynginx     # groups the Pods; check with: kubectl get pods --show-labels
  minReadySeconds: 5                   # wait this long before a new Pod counts as available (without it, some edge cases can leave the service unhealthy during an update)
  revisionHistoryLimit: 2              # number of old ReplicaSets kept to allow rollback
  strategy:                            # update strategy
    type: RollingUpdate                # rolling update is the default (can be omitted)
    rollingUpdate:                     # rolling-update settings
      maxSurge: 1                      # at most this many Pods above the desired count during an update
      maxUnavailable: 1                # at most this many Pods unavailable during an update (maxSurge and maxUnavailable must not both be 0)
  template:                            # Pod template
    metadata:
      labels:
        app: deploycpuandmemorynginx   # must match the selector above so the Pods fall into the same group
    spec:
      containers:                      # container template
      - name: deploynginx              # name of the container started on the node
        image: nginx                   # image name
        imagePullPolicy: IfNotPresent  # pull policy: use the local image if it exists
        ports:
        - name: nginxpod               # port name (the probe below refers to it)
          containerPort: 80            # port the container listens on
        livenessProbe:                 # liveness probe
          httpGet:                     # probe with an HTTP GET
            port: nginxpod             # refers to the port name above (not the container name)
            path: /index.html          # URL path to probe (nginx serves /usr/share/nginx/html as its web root)
            scheme: HTTP               # protocol (HTTP is the default)
          initialDelaySeconds: 1       # delay before the first probe (probing immediately would fail; the container needs time to start)
          periodSeconds: 3             # probe interval (default is every 10 seconds)
          failureThreshold: 3          # consecutive failures before the probe counts as failed (default 3)
          timeoutSeconds: 1            # how long to wait for a response (default 1 second)
        volumeMounts:                  # mounts inside the container
        - name: nginxpvc               # must match the volume name defined below
          mountPath: /usr/share/nginx/html/  # mount path inside the container
#       tolerations:                   # taints the Pod tolerates
#       - key: "node.kubernetes.io/unreachable"  # taint key
#         operator: "Exists"           # no value specified
#         effect: NoExecute            # evict: once the taint applies, running Pods without a matching toleration are evicted
#       - key: "node.kubernetes.io/unreachable"  # taint key
#         operator: "Exists"           # no value specified
#         effect: NoSchedule           # do not schedule: Pods are not placed on tainted nodes
      restartPolicy: Always            # restart policy for all containers in the Pod
      volumes:                         # volume source: local or network
      - name: nginxpvc                 # must match the volumeMounts name above
        persistentVolumeClaim:         # mount via a PVC
          claimName: nfsvolume         # name of the PVC defined earlier
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: deploycpuandmemorynginx
spec:
  type: ClusterIP                      # not exposed outside the cluster
  ports:
  - port: 80                           # port the Service accepts connections on
    targetPort: 80                     # container port the traffic is forwarded to
  selector:
    app: deploycpuandmemorynginx       # binds the Service to the Pods carrying this template label
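Note: the liveness probe above fetches /index.html, and the NFS share starts out empty, so seed it with a test page on the NFS server first (assuming the /pv_test export from earlier), otherwise the probe fails and the Pod restarts in a loop:

echo "hello from nfs" > /pv_test/index.html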
Create the Pod and Service:
kubectl apply -f Deployments_svcandnfs.yaml
Check that everything was created successfully:
kubectl get pod,svc,ep | grep deploycpuandmemorynginx
Access test:
curl 10.244.2.25    # 10.244.2.25 is the IP of the Pod that was created
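It is also worth curling the Service's ClusterIP, not just the Pod IP, to confirm the Service-to-Pod wiring before layering Ingress on top (substitute the CLUSTER-IP your cluster assigned):

kubectl get svc deploycpuandmemorynginx           # note the CLUSTER-IP column
curl <cluster-ip>                                 # should return the same page as the Pod IP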
2) Create the ingress controller:
vim mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller #the ingress controller Deployment
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
---
Create the ingress controller:
[root@k8s-node1 kubernetes-ingress]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
[root@k8s-node1 kubernetes-ingress]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-79f6884cf6-q2rf5 1/1 Running 0 54s
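If the Pod does not reach Running, the controller's logs usually say why (RBAC errors, image pull failures, and so on); substitute your own Pod name:

kubectl logs -n ingress-nginx nginx-ingress-controller-79f6884cf6-q2rf5
kubectl describe pod -n ingress-nginx nginx-ingress-controller-79f6884cf6-q2rf5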
3) Create the ingress-nginx Service:
vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080              # pin the HTTP access port
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443              # pin the HTTPS access port
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx     # selects the nginx-ingress-controller Pods from mandatory.yaml (labels must match)
    app.kubernetes.io/part-of: ingress-nginx  # selects the nginx-ingress-controller Pods from mandatory.yaml (labels must match)
---
Create ingress-nginx:
[root@k8s-node1 kubernetes-ingress]# kubectl apply -f service-nodeport.yaml
service/ingress-nginx created
[root@k8s-node1 kubernetes-ingress]# kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.111.118.147 <none> 80:30080/TCP,443:30443/TCP 2m6s
As the output above shows, the ingress-nginx Service is now bound to the nginx-ingress-controller Pods.
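No Ingress rules exist yet, so a quick sanity check is to curl any node on the HTTP NodePort and expect an HTTP 404 from the controller's default backend (192.168.11.128 is assumed here to be one of the node IPs):

curl http://192.168.11.128:30080                  # expect a 404 page served by the controller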
4) Deploy the Ingress
1. Generate a certificate for HTTPS access through the Ingress
#change into the directory holding the Kubernetes CA files
cd /etc/kubernetes/pki
2. Generate a TLS key and a certificate signed by the cluster CA (the CN should match the domain used below)
umask 077; openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -out tls.csr -subj "/CN=test.dm.com"
openssl x509 -req -in tls.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out tls.crt -days 365
openssl x509 -in tls.crt -text -noout
3. Create the secret
kubectl create secret tls ingress-secret --cert=tls.crt --key=tls.key
4. Check the generated secret
kubectl get secrets | grep ingress-secret
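You can also confirm the secret has the right type and both data keys:

kubectl describe secret ingress-secret            # should show Type: kubernetes.io/tls with tls.crt and tls.key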
5. Write the Ingress
vim ingress.yaml
apiVersion: extensions/v1beta1     # API version
kind: Ingress                      # resource kind
metadata:
  name: ingress-pod                # name of this Ingress
  namespace: default               # namespace
  annotations:                     # annotations
    kubernetes.io/ingress.class: "nginx"  # handled by the nginx ingress controller
spec:
  tls:                             # TLS settings
  - hosts:                         # virtual host names
    - test.dm.com                  # domain
    secretName: ingress-secret     # name of the TLS secret
  rules:                           # routing rules
  - host: test.dm.com              # domain
    http:
      paths:
      - path: /                    # matches http://test.dm.com/
        backend:                   # backend
          serviceName: deploycpuandmemorynginx  # backend Service name
          servicePort: 80          # port the Service exposes
6. Create the Ingress
[root@k8s-node1 kubernetes-ingress]# kubectl apply -f ingress.yaml
ingress.extensions/ingress-pod created
7. Check the generated Ingress
[root@k8s-node1 kubernetes-ingress]# kubectl get ingresses.
NAME HOSTS ADDRESS PORTS AGE
ingress-pod test.dm.com 80, 443 32s
8. Exec into the ingress-nginx Pod and look around
kubectl exec -n ingress-nginx -it nginx-ingress-controller-79f6884cf6-q2rf5 bash
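Inside the Pod you can see the nginx configuration the controller rendered from the Ingress; the server block for test.dm.com should be present:

grep -A 5 "server_name test.dm.com" /etc/nginx/nginx.conf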
IV. Access test:
Linux:
1. vim /etc/hosts and map test.dm.com to one of the node IPs
2. curl test.dm.com:30080 (30080 is the HTTP NodePort exposed above); for example:
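Assuming 192.168.11.128 is one of your node IPs:

echo "192.168.11.128 test.dm.com" >> /etc/hosts
curl http://test.dm.com:30080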
Windows:
1. Add the same mapping to C:\Windows\System32\drivers\etc\hosts
2. Test in a browser:
HTTP:
http://test.dm.com:30080
HTTPS:
https://test.dm.com:30443
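The browser will warn because the certificate is self-signed; from the command line you can skip verification with -k:

curl -k https://test.dm.com:30443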
That covers the basic Ingress setup.
The overall approach:
1. Create the backend service (the Pods).
2. Create a Service so the Pods join it (requests to the Service are then load-balanced across those Pods).
3. Create the nginx-ingress-controller (think of it as the thing that manages nginx's conf file).
4. Create the ingress-nginx Service that exposes the controller externally (think of it as nginx itself).
5. When a user visits: test.dm.com resolves --> the request reaches the exposed ingress-nginx Service (HTTP on 30080, HTTPS on 30443) --> that Service forwards to the nginx-ingress-controller Pod (which has already watched the Ingress rules and rendered the backend Pod information into its own nginx config) --> the request reaches the backend Service --> and finally a backend Pod.
In short: traffic enters the ingress Service and is routed by the nginx-ingress-controller (whose config is rewritten automatically to point at the Pods behind the backend Service); picture nginx load-balancing to a backend service, which load-balances once more across its Pods.
Client --> SLB --> nginx-ingress-lb service --> nginx-ingress-controller pod --> app service --> app pod
My wording here is rough; run the experiment a few times and the relationships become clear.