Apache APISIX Best Practices (Part 1): A First Look at a High-Performance API Gateway

Introduction

In the digital era, the API gateway has become an indispensable part of system architecture, taking on key tasks such as traffic routing, authentication, monitoring, and performance optimization. Apache APISIX was open-sourced in 2019 and donated to the Apache Software Foundation by the Chinese company API7.ai (Zhiliu Technology); it has since grown into one of the most performant and most actively developed open-source API gateway projects. This article introduces the basic concepts and core features of Apache APISIX, and how best to deploy and use it.

What is APISIX

Apache APISIX is a dynamic, real-time, high-performance API gateway that provides a rich feature set including load balancing, dynamic upstreams, canary releases, circuit breaking, authentication, and observability. It is built on OpenResty (ngx_lua) and Nginx, and leverages the flexibility of Lua to offer a comprehensive set of traffic-management capabilities.

In short, it can manage both north-south traffic, acting as an edge traffic gateway, and east-west traffic, acting as a microservice gateway.

Architecture

APISIX consists of two main parts:

  1. The APISIX core, which includes the Lua plugins, the multi-language plugin runtime, the Wasm plugin runtime, and so on.

  2. A rich set of built-in plugins covering observability, security, traffic control, and more.

Looking at it from a data-plane / control-plane perspective:

Data plane: the component that actually handles client requests and carries real user traffic, providing functions such as authentication, certificate offloading (TLS termination), log collection, and observability. The data plane itself does not store any data, so it is stateless and can scale elastically.

Control plane: relies on etcd, a distributed key-value store, for configuration storage, and provides the management UI, the Admin API, and observability views.

Plugin execution flow

When a request arrives, it is first matched against the route rules. These rules are stored in etcd, so be sure to deploy etcd with high availability in production. The request then passes through the plugin filters, and finally reaches the Upstream, where it is forwarded to an upstream service according to the load-balancing policy.
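To make this chain concrete, here is a minimal sketch (not part of the original walkthrough) that creates a route with the limit-count plugin attached through the Admin API. It assumes the Admin API is reachable at 127.0.0.1:9180 and uses the default admin key from the config.yaml shown later in this article; the upstream node is only an example:

curl -s http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "uri": "/index.html",
  "plugins": {
    "limit-count": {
      "count": 2,
      "time_window": 60,
      "rejected_code": 503,
      "key": "remote_addr"
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": { "httpbin.org:80": 1 }
  }
}'

A request for /index.html is first matched by this route, then filtered by limit-count (at most 2 requests per client IP every 60 seconds, otherwise 503), and finally forwarded to the upstream node by the round-robin balancer.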

Dynamic routing

APISIX lets administrators configure routing rules through a simple RESTful Admin API. Rules can match dynamically on HTTP request headers, the URI, the host, the remote address, and many other conditions. This flexibility makes APISIX particularly well suited to modern microservice architectures.
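As a sketch of how flexible the matching can be, the following hypothetical route only matches GET/POST requests to example.com under /api/ coming from 10.0.0.0/8 and carrying the header X-Api-Version: v2; the backend service name is just a placeholder:

curl -s http://127.0.0.1:9180/apisix/admin/routes/2 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "uri": "/api/*",
  "host": "example.com",
  "methods": ["GET", "POST"],
  "remote_addrs": ["10.0.0.0/8"],
  "vars": [["http_x_api_version", "==", "v2"]],
  "upstream": {
    "type": "roundrobin",
    "nodes": { "backend-v2.default.svc.cluster.local:8080": 1 }
  }
}'

Because these rules live in etcd, updates take effect on the data plane without reloading or restarting APISIX.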

Service discovery and load balancing

APISIX supports several service-discovery mechanisms, such as Eureka, Consul, and Nacos, so services can be discovered and managed effectively in a microservice environment. It also offers multiple load-balancing strategies, including round-robin and consistent hashing, to cover complex traffic-distribution needs.
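For example, switching an upstream to consistent hashing on the client address is a single Admin API call; this is a sketch with placeholder node addresses:

curl -s http://127.0.0.1:9180/apisix/admin/upstreams/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "type": "chash",
  "hash_on": "vars",
  "key": "remote_addr",
  "nodes": {
    "10.0.0.11:8080": 1,
    "10.0.0.12:8080": 1
  }
}'

When a service registry such as Nacos is used instead of static addresses, the nodes field is replaced by service_name plus discovery_type on the upstream, and the registry address is configured in config.yaml beforehand.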

Supported protocols

APISIX supports HTTP/HTTPS, WebSocket, TCP, UDP, and other protocols, so it can be used across a wide range of scenarios.
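As an illustration of the non-HTTP side, the sketch below proxies raw TCP through APISIX. It assumes stream_proxy has already been enabled in config.yaml with a TCP listener on port 9100, and the upstream address is a placeholder:

curl -s http://127.0.0.1:9180/apisix/admin/stream_routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "server_port": 9100,
  "upstream": {
    "type": "roundrobin",
    "nodes": { "10.0.0.21:6379": 1 }
  }
}'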

Installation

The following walks through deploying Apache APISIX in a Kubernetes environment, using two approaches: Helm and plain YAML manifests.

Installing with Helm:

First, add the Apache APISIX Helm chart repository and update it.

helm repo add apisix https://charts.apiseven.com
helm repo update

Install APISIX. We use -n to set the namespace to apisix (add --create-namespace if that namespace does not exist yet), enable the dashboard and the APISIX ingress controller, and specify the storage class for etcd; change it to the StorageClass actually in use in your cluster:

helm install apisix apisix/apisix -n apisix --create-namespace --set dashboard.enabled=true --set ingress-controller.enabled=true --set ingress-controller.config.apisix.serviceNamespace=apisix --set etcd.persistence.storageClass="my-storageClass"

Notes:

  • If no namespace is specified, the release is installed into default.

  • The etcd persistent volume defaults to 8Gi. If your environment has many route rules, increase it with the etcd.persistence.size parameter, e.g. --set etcd.persistence.size="20Gi".

If the command succeeds, it prints output like the following:

NAME: apisix
LAST DEPLOYED: Fri May 3 15:20:08 2024
NAMESPACE: apisix
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace apisix -o jsonpath="{.spec.ports[0].nodePort}" services apisix-gateway)
  export NODE_IP=$(kubectl get nodes --namespace apisix -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
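Before moving on, it is worth a quick verification. The commands below are a small sketch that assumes the Helm release above and the NODE_IP/NODE_PORT variables exported by the NOTES:

kubectl -n apisix get pods
kubectl -n apisix get svc apisix-gateway
# with no routes configured yet, APISIX should answer with its "404 Route Not Found" JSON body
curl -i http://$NODE_IP:$NODE_PORT/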

Installing with YAML manifests:

Create the namespace; here we name it apisix:

kubectl create ns apisix

Create the etcd cluster. Save the following as etcd.yaml, and remember to change storageClassName:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: apisix-etcd
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix-etcd
      app.kubernetes.io/name: apisix-etcd
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix-etcd
        app.kubernetes.io/name: apisix-etcd
    spec:
      containers:
        - name: apisix-etcd
          image: docker.io/bitnami/etcd:3.5.12
          ports:
            - name: client
              containerPort: 2379
              protocol: TCP
            - name: peer
              containerPort: 2380
              protocol: TCP
          env:
            - name: BITNAMI_DEBUG
              value: 'false'
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: ETCDCTL_API
              value: '3'
            - name: ETCD_ON_K8S
              value: 'yes'
            - name: ETCD_START_FROM_SNAPSHOT
              value: 'no'
            - name: ETCD_DISASTER_RECOVERY
              value: 'no'
            - name: ETCD_NAME
              value: $(MY_POD_NAME)
            - name: ETCD_DATA_DIR
              value: /bitnami/etcd/data
            - name: ETCD_LOG_LEVEL
              value: info
            - name: ALLOW_NONE_AUTHENTICATION
              value: 'yes'
            - name: ETCD_ADVERTISE_CLIENT_URLS
              value: >-
                http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2379,http://apisix-etcd.apisix.svc.cluster.local:2379
            - name: ETCD_LISTEN_CLIENT_URLS
              value: http://0.0.0.0:2379
            - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
              value: >-
                http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2380
            - name: ETCD_LISTEN_PEER_URLS
              value: http://0.0.0.0:2380
          resources:
            requests:
              cpu: "500m"
              memory: "2Gi"
            limits:
              cpu: "1"
              memory: "4Gi"
          volumeMounts:
            - name: data
              mountPath: /bitnami/etcd
          livenessProbe:
            exec:
              command:
                - /opt/bitnami/scripts/etcd/healthcheck.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            exec:
              command:
                - /opt/bitnami/scripts/etcd/healthcheck.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 5
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 1001
            runAsNonRoot: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        fsGroup: 1001
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: apisix-etcd
                    app.kubernetes.io/name: apisix-etcd
                namespaces:
                  - apisix
                topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: data
        creationTimestamp: null
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "my-storageClass"   # change to the StorageClass actually in use
        resources:
          requests:
            storage: 8Gi
        volumeMode: Filesystem
  serviceName: apisix-etcd-headless
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  revisionHistoryLimit: 10
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-etcd-headless
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
  annotations:
    meta.helm.sh/release-name: apisix-etcd
    meta.helm.sh/release-namespace: apisix
    service.alpha.kubernetes.io/tolerate-unready-endpoints: 'true'
spec:
  ports:
    - name: client
      protocol: TCP
      port: 2379
      targetPort: client
    - name: peer
      protocol: TCP
      port: 2380
      targetPort: peer
  selector:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
  clusterIP: None
  clusterIPs:
    - None
  type: ClusterIP
  sessionAffinity: None
  publishNotReadyAddresses: true
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-etcd
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
  annotations:
    meta.helm.sh/release-name: apisix-etcd
    meta.helm.sh/release-namespace: apisix
spec:
  ports:
    - name: client
      protocol: TCP
      port: 2379
      targetPort: client
    - name: peer
      protocol: TCP
      port: 2380
      targetPort: peer
  selector:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
  type: ClusterIP

Create the resources with kubectl apply -f etcd.yaml, then wait for the Pods to start up.
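Before installing APISIX itself, it is worth confirming the etcd cluster is healthy. A minimal check, assuming the manifest above (the Bitnami image ships etcdctl and authentication is disabled):

kubectl -n apisix rollout status statefulset/apisix-etcd
kubectl -n apisix exec apisix-etcd-0 -- etcdctl endpoint health
kubectl -n apisix exec apisix-etcd-0 -- etcdctl member list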

Once the etcd cluster is up, save the following as apisix.yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: apisix
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 3.6.0
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix
      app.kubernetes.io/name: apisix
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix
        app.kubernetes.io/name: apisix
    spec:
      volumes:
        - name: apisix-config
          configMap:
            name: apisix
            defaultMode: 420
      initContainers:
        - name: wait-etcd
          image: busybox:1.28
          command:
            - sh
            - '-c'
            - >-
              until nc -z apisix-etcd.apisix.svc.cluster.local 2379; do echo
              waiting for etcd `date`; sleep 2; done;
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      containers:
        - name: apisix
          image: apache/apisix:3.6.0-debian
          ports:
            - name: http
              containerPort: 9080
              protocol: TCP
            - name: tls
              containerPort: 9443
              protocol: TCP
            - name: admin
              containerPort: 9180
              protocol: TCP
          resources:
            requests:
              cpu: "500m"
              memory: "2Gi"
            limits:
              cpu: "2"
              memory: "4Gi"
          volumeMounts:
            - name: apisix-config
              mountPath: /usr/local/apisix/conf/config.yaml
              subPath: config.yaml
          readinessProbe:
            tcpSocket:
              port: 9080
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - sleep 30
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: apisix
  namespace: apisix
data:
  config.yaml: >-
    #


    # Licensed to the Apache Software Foundation (ASF) under one or more


    # contributor license agreements.  See the NOTICE file distributed with


    # this work for additional information regarding copyright ownership.


    # The ASF licenses this file to You under the Apache License, Version 2.0


    # (the "License"); you may not use this file except in compliance with


    # the License.  You may obtain a copy of the License at


    #


    #     http://www.apache.org/licenses/LICENSE-2.0


    #


    # Unless required by applicable law or agreed to in writing, software


    # distributed under the License is distributed on an "AS IS" BASIS,


    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.


    # See the License for the specific language governing permissions and


    # limitations under the License.


    #


    apisix:
      node_listen: 9080             # APISIX listening port
      enable_heartbeat: true
      enable_admin: true
      enable_admin_cors: true
      enable_debug: false
      enable_dev_mode: false          # Sets nginx worker_processes to 1 if set to true
      enable_reuseport: true          # Enable nginx SO_REUSEPORT switch if set to true.
      enable_ipv6: true
      config_center: etcd             # etcd: use etcd to store the config value
                                      # yaml: fetch the config value from local yaml file `/your_path/conf/apisix.yaml`




      #proxy_protocol:                 # Proxy Protocol configuration
      #  listen_http_port: 9181        # The port with proxy protocol for http, it differs from node_listen and port_admin.
                                      # This port can only receive http request with proxy protocol, but node_listen & port_admin
                                      # can only receive http request. If you enable proxy protocol, you must use this port to
                                      # receive http request with proxy protocol
      #  listen_https_port: 9182       # The port with proxy protocol for https
      #  enable_tcp_pp: true           # Enable the proxy protocol for tcp proxy, it works for stream_proxy.tcp option
      #  enable_tcp_pp_to_upstream: true # Enables the proxy protocol to the upstream server


      proxy_cache:                     # Proxy Caching configuration
        cache_ttl: 10s                 # The default caching time if the upstream does not specify the cache time
        zones:                         # The parameters of a cache
        - name: disk_cache_one         # The name of the cache, administrator can be specify
                                      # which cache to use by name in the admin api
          memory_size: 50m             # The size of shared memory, it's used to store the cache index
          disk_size: 1G                # The size of disk, it's used to store the cache data
          disk_path: "/tmp/disk_cache_one" # The path to store the cache data
          cache_levels: "1:2"           # The hierarchy levels of a cache
      #  - name: disk_cache_two
      #    memory_size: 50m
      #    disk_size: 1G
      #    disk_path: "/tmp/disk_cache_two"
      #    cache_levels: "1:2"


      allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
        - 127.0.0.1/24
      #   - "::/64"
      port_admin: 9180


      # Default token when use API to call for Admin API.
      # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
      # Disabling this configuration item means that the Admin API does not
      # require any authentication.
      admin_key:
        # admin: can everything for configuration data
        - name: "admin"
          key: edd1c9f034335f136f87ad84b625c8f1
          role: admin
        # viewer: only can view configuration data
        - name: "viewer"
          key: 4054f7cf07e344346cd3f287985e76a2
          role: viewer
      router:
        http: 'radixtree_uri'         # radixtree_uri: match route by uri(base on radixtree)
                                      # radixtree_host_uri: match route by host + uri(base on radixtree)
        ssl: 'radixtree_sni'          # radixtree_sni: match route by SNI(base on radixtree)
      # dns_resolver:
      #
      #   - 127.0.0.1
      #
      #   - 172.20.0.10
      #
      #   - 114.114.114.114
      #
      #   - 223.5.5.5
      #
      #   - 1.1.1.1
      #
      #   - 8.8.8.8
      #
      dns_resolver_valid: 30
      resolver_timeout: 5
      ssl:
        enable: false
        enable_http2: true
        listen_port: 9443
        ssl_protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
        ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"


    nginx_config:                     # config for rendering the template to generate nginx.conf
      error_log: "/dev/stderr"
      error_log_level: "warn"         # warn,error
      worker_rlimit_nofile: 20480     # the number of files a worker process can open, should be larger than worker_connections
      event:
        worker_connections: 10620
      http:
        access_log: "/dev/stdout"
        keepalive_timeout: 60s         # timeout during which a keep-alive client connection will stay open on the server side.
        client_header_timeout: 60s     # timeout for reading client request header, then 408 (Request Time-out) error is returned to the client
        client_body_timeout: 60s       # timeout for reading client request body, then 408 (Request Time-out) error is returned to the client
        send_timeout: 10s              # timeout for transmitting a response to the client.then the connection is closed
        underscores_in_headers: "on"   # default enables the use of underscores in client request header fields
        real_ip_header: "X-Real-IP"    # http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
        real_ip_from:                  # http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
          - 127.0.0.1
          - 'unix:'
   
    deployment:
      etcd:
        host:                                 # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
          - "http://apisix-etcd.apisix.svc.cluster.local:2379"
        prefix: "/apisix"     # apisix configurations prefix
        timeout: 30   # 30 seconds
    plugins:                          # plugin list
      - api-breaker
      - authz-keycloak
      - basic-auth
      - batch-requests
      - consumer-restriction
      - cors
      - echo
      - fault-injection
      - grpc-transcode
      - hmac-auth
      - http-logger
      - ip-restriction
      - ua-restriction
      - jwt-auth
      - kafka-logger
      - key-auth
      - limit-conn
      - limit-count
      - limit-req
      - node-status
      - openid-connect
      - authz-casbin
      - prometheus
      - proxy-cache
      - proxy-mirror
      - proxy-rewrite
      - redirect
      - referer-restriction
      - request-id
      - request-validation
      - response-rewrite
      - serverless-post-function
      - serverless-pre-function
      - sls-logger
      - syslog
      - tcp-logger
      - udp-logger
      - uri-blocker
      - wolf-rbac
      - zipkin
      - server-info
      - traffic-split
      - gzip
      - real-ip
    stream_plugins:
      - mqtt-proxy
      - ip-restriction
      - limit-conn
    plugin_attr:
      server-info:
        report_interval: 60
        report_ttl: 3600
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-admin
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 3.6.0
spec:
  ports:
    - name: apisix-admin
      protocol: TCP
      port: 9180
      targetPort: 9180
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
  type: ClusterIP
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-gateway
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 3.6.0
spec:
  ports:
    - name: apisix-gateway
      protocol: TCP
      port: 80
      targetPort: 9080
      nodePort: 31684
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster

Create the resources with kubectl apply -f apisix.yaml, then wait for the Pods to start up.
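A quick smoke test, assuming the manifests above (replace <node-ip> with the address of any cluster node; 31684 is the nodePort defined in the apisix-gateway Service):

kubectl -n apisix rollout status deployment/apisix
# with no routes configured yet, APISIX should return {"error_msg":"404 Route Not Found"}
curl -i http://<node-ip>:31684/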

Finally, install the dashboard. Save the following as dashboard.yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: apisix-dashboard
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 3.0.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix-dashboard
      app.kubernetes.io/name: apisix-dashboard
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix-dashboard
        app.kubernetes.io/name: apisix-dashboard
    spec:
      volumes:
        - name: apisix-dashboard-config
          configMap:
            name: apisix-dashboard
            defaultMode: 420
      containers:
        - name: apisix-dashboard
          image: apache/apisix-dashboard:3.0.1-alpine
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          resources:
            requests:
              cpu: "100m"
              memory: "200Mi"
            limits:
              cpu: "500m"
              memory: "1Gi"
          volumeMounts:
            - name: apisix-dashboard-config
              mountPath: /usr/local/apisix-dashboard/conf/conf.yaml
              subPath: conf.yaml
          livenessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext: {}
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: apisix-dashboard
      serviceAccount: apisix-dashboard
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-dashboard
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 3.0.1
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
  selector:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
  type: ClusterIP
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: apisix-dashboard
  namespace: apisix
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 3.0.1
data:
  conf.yaml: |-
    conf:
      listen:
        host: 0.0.0.0
        port: 9000
      etcd:
        endpoints:
          - apisix-etcd.apisix.svc.cluster.local:2379
      log:
        error_log:
          level: warn
          file_path: /dev/stderr
        access_log:
          file_path: /dev/stdout
    authentication:
      secret: secret
      expire_time: 3600
      users:
        - username: admin
          password: admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-dashboard
  namespace: apisix

Create it with kubectl apply -f dashboard.yaml. Once it has started, the APISIX deployment is complete.

After the installation above, you should see the corresponding Pods in the apisix namespace. Note that with the Helm install, apisix and the apisix-ingress-controller run as a single replica by default; for production use, configure multiple replicas (see the commands below).
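To review what was created and scale out the single-replica components, something like the following can be used; the Deployment names assume the Helm install (with the YAML install, the apisix Deployment above already runs 2 replicas):

kubectl -n apisix get pods
kubectl -n apisix scale deployment apisix --replicas=3
kubectl -n apisix scale deployment apisix-ingress-controller --replicas=2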

Open the dashboard and take a look. The default username and password are both admin; the password can be changed in the dashboard's ConfigMap.
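One simple way to reach the dashboard without exposing it externally is a local port-forward, and the password change goes through the ConfigMap followed by a restart; a sketch assuming the manifests above:

kubectl -n apisix port-forward svc/apisix-dashboard 9000:80
# then open http://127.0.0.1:9000 and log in with admin / admin

# edit authentication.users in conf.yaml, then restart the dashboard to pick it up
kubectl -n apisix edit configmap apisix-dashboard
kubectl -n apisix rollout restart deployment apisix-dashboard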

Summary

There is a lot more to say about APISIX; this post stops here. The next article will cover configuring and using APISIX in detail. Thanks for reading!

