Deploying an Elasticsearch Cluster, Kibana, and Filebeat on Kubernetes for Log Collection

Deploying ECK on the Kubernetes Cluster

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html

~]# kubectl create -f https://download.elastic.co/downloads/eck/2.9.0/crds.yaml
customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created

~]# kubectl apply -f https://download.elastic.co/downloads/eck/2.9.0/operator.yaml

~]# kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
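
Before going further, it is worth checking that the operator pod has reached the Running state (it is deployed into the elastic-system namespace by the manifest above):

~]# kubectl get pods -n elastic-system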

Deploying the Elasticsearch Cluster

Elasticsearch, Kibana, and Filebeat will all be deployed in a dedicated namespace, so create the namespace first:

~]# kubectl create ns efk-stack
namespace/efk-stack created

Deploy Elasticsearch, using the longhorn StorageClass to persist the data:

~]# cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: efk-stack
spec:
  version: 8.9.0
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: longhorn
EOF
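
Once the manifest is applied, the operator creates one pod per nodeSet member. The overall cluster health and phase can be followed through the Elasticsearch custom resource itself:

~]# kubectl get elasticsearch -n efk-stack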

# the Elasticsearch cluster has been created successfully
~]# kubectl get po -n efk-stack 
NAME                         READY   STATUS    RESTARTS   AGE
elasticsearch-es-default-0   1/1     Running   0          2m20s
elasticsearch-es-default-1   1/1     Running   0          2m20s
elasticsearch-es-default-2   1/1     Running   0          2m20s

~]# kubectl get svc -n efk-stack 
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch-es-default         ClusterIP   None             <none>        9200/TCP   4m9s
elasticsearch-es-http            ClusterIP   172.20.116.154   <none>        9200/TCP   4m11s
elasticsearch-es-internal-http   ClusterIP   172.20.78.23     <none>        9200/TCP   4m11s
elasticsearch-es-transport       ClusterIP   None             <none>        9300/TCP   4m11s

~]# kubectl get secrets  -n efk-stack 
NAME                                          TYPE     DATA   AGE
elasticsearch-es-default-es-config            Opaque   1      19m
elasticsearch-es-default-es-transport-certs   Opaque   7      19m
elasticsearch-es-elastic-user                 Opaque   1      19m
elasticsearch-es-file-settings                Opaque   1      19m
elasticsearch-es-http-ca-internal             Opaque   2      19m
elasticsearch-es-http-certs-internal          Opaque   3      19m
elasticsearch-es-http-certs-public            Opaque   2      19m
elasticsearch-es-internal-users               Opaque   4      19m
elasticsearch-es-remote-ca                    Opaque   1      19m
elasticsearch-es-transport-ca-internal        Opaque   2      19m
elasticsearch-es-transport-certs-public       Opaque   1      19m
elasticsearch-es-xpack-file-realm             Opaque   4      19m

# retrieve the password for the elastic user
~]# PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user -n efk-stack -o go-template='{{.data.elastic | base64decode}}')
~]# echo $PASSWORD
4xi5jtTwK8tV6Y2H54rq771A

~]# curl -u "elastic:4xi5jtTwK8tV6Y2H54rq771A" -k "https://172.20.116.154:9200"
{
  "name" : "elasticsearch-es-default-0",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Fs6JRC4KQROJpJ56XyAu2w",
  "version" : {
    "number" : "8.9.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8aa461beb06aa0417a231c345a1b8c38fb498a0d",
    "build_date" : "2023-07-19T14:43:58.555259655Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
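
With the password in hand, the cluster health API is a quick way to confirm that all three nodes have joined and the cluster is green (reusing the elasticsearch-es-http ClusterIP and the PASSWORD variable from above):

~]# curl -u "elastic:${PASSWORD}" -k "https://172.20.116.154:9200/_cluster/health?pretty"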

Deploying Kibana

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html

~]# cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: efk-stack
spec:
  version: 8.9.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
EOF

~]# kubectl get po -n efk-stack 
NAME                         READY   STATUS    RESTARTS   AGE
elasticsearch-es-default-0   1/1     Running   0          25m
elasticsearch-es-default-1   1/1     Running   0          25m
elasticsearch-es-default-2   1/1     Running   0          25m
kibana-kb-699bf98668-ptgvf   1/1     Running   0          68s

~]# kubectl get svc -n efk-stack 
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch-es-default         ClusterIP   None             <none>        9200/TCP   25m
elasticsearch-es-http            ClusterIP   172.20.116.154   <none>        9200/TCP   25m
elasticsearch-es-internal-http   ClusterIP   172.20.78.23     <none>        9200/TCP   25m
elasticsearch-es-transport       ClusterIP   None             <none>        9300/TCP   25m
kibana-kb-http                   ClusterIP   172.20.35.127    <none>        5601/TCP   58s
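
Kibana can already be reached without an Ingress through a port-forward; it serves HTTPS with a self-signed certificate by default, and the elastic user with the password retrieved earlier can log in (a minimal check):

~]# kubectl port-forward -n efk-stack service/kibana-kb-http 5601
# then open https://localhost:5601 in a browser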


To expose Kibana outside the cluster, create an Ingress through ingress-nginx. Kibana itself serves HTTPS, so the backend protocol is set to HTTPS and SSL passthrough is enabled:

~]# cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: efk-stack
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    app: kibana
spec:
  ingressClassName: nginx
  rules:
    - host: kibana.example.io
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: kibana-kb-http
                port:
                  number: 5601
EOF
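
Note that the ssl-passthrough annotation only takes effect when the ingress-nginx controller is started with the --enable-ssl-passthrough flag; with passthrough active, TLS is terminated by Kibana itself rather than by the Ingress.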

~]# kubectl get ingress -n efk-stack 
NAME     CLASS   HOSTS               ADDRESS                                        PORTS   AGE
kibana   nginx   kibana.example.io   192.168.36.153,192.168.36.154,192.168.36.155   80      31s
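
kibana.example.io must resolve to one of the ingress addresses shown above; for a quick test, a hosts entry on the workstation is enough (using the first address as an example):

~]# echo "192.168.36.153 kibana.example.io" >> /etc/hosts

Then open https://kibana.example.io and log in as the elastic user with the password read from the elasticsearch-es-elastic-user secret.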


Deploying Filebeat to Collect Pod Logs

Filebeat runs as a DaemonSet on every node and uses the Kubernetes autodiscover provider with hints enabled to tail the container log files under /var/log/containers:

~]# cat filebeat_autodiscover.yaml 
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: efk-stack
spec:
  type: filebeat
  version: 8.9.0
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: efk-stack # must be in the same namespace as the Beat so the DaemonSet pods can use this ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: efk-stack
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
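
Because hints are enabled, individual workloads can tune or opt out of log collection through co.elastic.logs/* annotations on their own pods, without touching the Beat configuration. A hypothetical example (the nginx pod below is purely illustrative; the annotation keys are standard Filebeat hints):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  annotations:
    # set to "false" to skip collecting this pod's logs entirely
    co.elastic.logs/enabled: "true"
    # merge continuation lines (e.g. stack traces) into a single event
    co.elastic.logs/multiline.pattern: '^\['
    co.elastic.logs/multiline.negate: "true"
    co.elastic.logs/multiline.match: after
spec:
  containers:
  - name: nginx
    image: nginx:1.25

Apply the Filebeat manifest: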


~]# kubectl apply -f filebeat_autodiscover.yaml 
beat.beat.k8s.elastic.co/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created

~]# kubectl get po -n efk-stack 
NAME                           READY   STATUS    RESTARTS   AGE
elasticsearch-es-default-0     1/1     Running   0          48m
elasticsearch-es-default-1     1/1     Running   0          48m
elasticsearch-es-default-2     1/1     Running   0          48m
filebeat-beat-filebeat-2f9cx   1/1     Running   0          5s
filebeat-beat-filebeat-l5vr8   1/1     Running   0          5s
filebeat-beat-filebeat-ph4rg   1/1     Running   0          5s
filebeat-beat-filebeat-qgk6t   1/1     Running   0          5s
filebeat-beat-filebeat-rr6tn   1/1     Running   0          5s
kibana-kb-699bf98668-ptgvf     1/1     Running   0          23m
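
One Filebeat pod runs per node (five nodes in this cluster). The Beat resource reports its own health as well:

~]# kubectl get beat -n efk-stack

Once the agents are running, logs should start arriving in Elasticsearch within a minute or so; the Filebeat indices can be checked with the same credentials as before:

~]# curl -u "elastic:${PASSWORD}" -k "https://172.20.116.154:9200/_cat/indices?v" | grep filebeat

In Kibana, create a data view matching filebeat-* (Stack Management → Data Views) to browse the collected pod logs.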

