Kubernetes - Creating an Ingress

This post walks through setting up Ingress on Kubernetes 1.6.2. An Ingress routes external requests to backend Services; with the NGINX ingress controller it is, in effect, an nginx reverse proxy running inside the cluster.

Installing Ingress
  • Ingress needs a default backend. Create default-backend.yaml and apply it as shown after the manifest:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
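
Apply the manifest and check that the backend pod and Service come up (a quick sketch, assuming the file name used above):

kubectl apply -f default-backend.yaml
kubectl -n kube-system get pods -l k8s-app=default-http-backend
kubectl -n kube-system get svc default-http-backend
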
  • On Kubernetes 1.6.2 (with RBAC enabled) a Role, ClusterRole, RoleBinding and ClusterRoleBinding are required. Create ingress-role.yaml and apply it as shown after the manifest:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: clusterrole-ingress
rules:
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - secrets
  - services
  - endpoints
  - ingresses
  - nodes
  - pods
  verbs:
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - events
  - services
  verbs:
  - create
  - list
  - update
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  - ingresses
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: role-ingress
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusterrole-ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
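
Apply the RBAC objects. The bindings reference a ServiceAccount named ingress that is created in the next step; creating the bindings first is fine:

kubectl apply -f ingress-role.yaml
kubectl get clusterrole clusterrole-ingress
kubectl -n kube-system get role role-ingress
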
  • Create the service account, ingress-ServiceAccount.yaml, and apply it as shown below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress
  namespace: kube-system
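
Apply it and confirm the account exists:

kubectl apply -f ingress-ServiceAccount.yaml
kubectl -n kube-system get serviceaccount ingress
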
  • Create the controller, nginx-ingress-controller.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      serviceAccountName: ingress
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      nodeSelector:
        kubernetes.io/hostname: 192.168.1.211

Here nodeSelector pins the controller pod to a specific node (192.168.1.211), so external requests can be forwarded to that node via the hostPort mappings.
To inspect a node's labels:

[root@k8s-master ingress]# kubectl describe node 192.168.1.211
Name:           192.168.1.211
Role:
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=192.168.1.211
...
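
If you prefer not to tie the Deployment to a hostname, you could add a custom label to the node and select on that instead (the label key/value here are only an example):

kubectl label node 192.168.1.211 role=ingress
# then in nginx-ingress-controller.yaml:
#   nodeSelector:
#     role: ingress
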

serviceAccountName specifies the service account the pod runs as, which is how the Role and ClusterRole created above get bound to the controller.
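
Apply the controller and check that the pod lands on the selected node:

kubectl apply -f nginx-ingress-controller.yaml
kubectl -n kube-system get pods -o wide | grep nginx-ingress-controller
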

At this point the ingress controller setup is complete.


Testing the Ingress

Create a frontend service (the guestbook example from 《Kubernetes权威指南》).

Create frontend-controller.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 1
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: kubeguide/guestbook-php-frontend:latest
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80

Create frontend-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: frontend
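
Apply both manifests, for example:

kubectl apply -f frontend-controller.yaml
kubectl apply -f frontend-service.yaml
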

Check the Service:

[root@k8s-master ~]# kubectl get svc
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
frontend     10.254.179.200   <nodes>       80:8821/TCP   1h



Create the forwarding rule, frontend-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
  - host: guestbook.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80

Note that servicePort must be 80 (the Service port), not the NodePort 8821; otherwise requests fail with a 503 error.
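
Create the Ingress resource, for example:

kubectl create -f frontend-ingress.yaml
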

Check the Ingress:

[root@k8s-master ingress]# kubectl get ing
NAME                HOSTS                  ADDRESS   PORTS     AGE
frontend-ingress    guestbook.test.com             80        1h

Now point guestbook.test.com at the node where the ingress controller runs (via public DNS or a local hosts entry), and the guestbook page will be served.
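
If DNS is not set up yet, you can test from any machine that can reach the node by sending the Host header directly (192.168.1.211 is the controller node selected by nodeSelector above):

curl -H "Host: guestbook.test.com" http://192.168.1.211/
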

Exec into the nginx-ingress-controller pod and inspect nginx.conf:

[root@k8s-master ingress]# kubectl get pods -n=kube-system | grep ingress
nginx-ingress-controller-1894093054-835nx   1/1       Running   0          3h
[root@k8s-master ingress]# kubectl exec -it nginx-ingress-controller-1894093054-835nx bash -n=kube-system
root@nginx-ingress-controller-1894093054-835nx:/# cat /etc/nginx/nginx.conf

    ...
    upstream default-frontend-80 {
        least_conn;
        server 172.30.29.7:80 max_fails=0 fail_timeout=0;
    }
    ...
    server {
        server_name guestbook.test.com;
        listen 80;
        listen [::]:80;

        location / {
        ...
        }
    }
    ...

Updating the Ingress

Change the number of frontend pods: in frontend-controller.yaml set replicas from 1 to 3, apply frontend-controller.yaml, then re-apply frontend-ingress.yaml.
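
Editing the manifest and re-applying is shown below; scaling the ReplicationController directly has the same effect on the running pods:

kubectl scale rc frontend --replicas=3
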

[root@k8s-master frontend]#  kubectl apply -f frontend-controller.yaml
replicationcontroller "frontend" configured
[root@k8s-master frontend]# kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
frontend-40c70             1/1       Running   0          7m
frontend-4m67t             1/1       Running   0          53m
frontend-xv1ck             1/1       Running   0          36s
[root@k8s-master ingress]# kubectl apply -f frontend-ingress.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
ingress "frontend-ingress" configured

Check how nginx.conf in the controller pod changes (the controller watches the Service's endpoints and regenerates its configuration automatically):

    upstream default-frontend-80 {
        least_conn;
        server 172.30.29.7:80 max_fails=0 fail_timeout=0;
        server 172.30.95.5:80 max_fails=0 fail_timeout=0;
        server 172.30.95.6:80 max_fails=0 fail_timeout=0;
    }



References

https://github.com/kubernetes/ingress/tree/master/examples/deployment/nginx
