Deploying ELK on Kubernetes (Filebeat + Logstash + Elasticsearch 8.x + Kibana) to collect Linux system logs

First, deploy Filebeat on each node server whose logs need to be collected.

Filebeat configuration file filebeat_temp.yml:
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# The log input collects log messages from files. (Filebeat 8.x deprecates
# it in favor of the filestream input, but it still works.)
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/log/messages
    - /home/log/secure
    - /home/log/cron
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
# setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
  # allow_older_versions: true
  # hosts: ["http://192.168.3.230:30200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  # preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["192.168.3.230:31012"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

I mainly changed the input and output sections. In my environment, rsyslog aggregates the logs from all machines onto a single server under /home/log, which is why the input paths point at files under /home/log.
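
Before starting the shipper, Filebeat's self-test subcommands can confirm that the config parses and that the Logstash endpoint is reachable. A minimal sketch; the rsyslog forwarding rule is an assumed client-side setup (not covered in this article), and <central-log-server> is a placeholder:

# On each client, a typical rsyslog rule forwards everything to the
# central server, e.g. in /etc/rsyslog.d/90-forward.conf:
#   *.* @@<central-log-server>:514

# On the central server, validate the Filebeat configuration:
filebeat test config -c filebeat_temp.yml

# Verify Filebeat can reach the Logstash NodePort set in output.logstash:
filebeat test output -c filebeat_temp.yml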

Elasticsearch deployment

PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-elasticsearch-pv
  namespace: es
  labels:
    pv: nfs-elasticsearch-pv
spec:
  capacity:
    storage: 1024Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: 192.168.3.228
    path: /es
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-elasticsearch-pvc
  namespace: es
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Gi
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: es
  labels:
    k8s-app: elasticsearch
spec:
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
        name: elasticsearch
        env:
          - name: "discovery.type"
            value: "single-node"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx2g"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - name: inter
          containerPort: 9300
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: elasticsearch-data
          persistentVolumeClaim:
            claimName: nfs-elasticsearch-pvc
Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: es
  labels:
    app: elasticsearch
spec:
  selector:
    k8s-app: elasticsearch
  type: NodePort
  ports:
    - name: db
      port: 9200
      targetPort: 9200
      protocol: TCP
      nodePort: 30200
    - name: inter
      port: 9300
      targetPort: 9300
      protocol: TCP
      nodePort: 30300
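
After applying the four manifests, a quick smoke test confirms the pod is running and the NodePort answers. The file names below are placeholders for however you saved the YAML above:

kubectl apply -f es-pv.yml -f es-pvc.yml -f es-deployment.yml -f es-service.yml
kubectl get pods -n es -l k8s-app=elasticsearch
# Check that the NodePort is reachable (add https:// and -k if TLS is on;
# an HTTP 401 just proves security is enabled and the service answers):
curl -i http://192.168.3.230:30200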

Resetting the ES password

kubectl exec -it -n es <elasticsearch-pod-name> -- /bin/bash
./bin/elasticsearch-reset-password -u elastic

Creating ES users and assigning roles

./bin/elasticsearch-users useradd keynes
./bin/elasticsearch-users useradd logstash
./bin/elasticsearch-users roles -a superuser keynes       # superuser privileges
./bin/elasticsearch-users roles -a kibana_system keynes   # role Kibana connects with
./bin/elasticsearch-users roles -a superuser logstash
./bin/elasticsearch-users roles -a logstash_system logstash  # role Logstash connects with
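
To verify that the accounts work, query cluster health through the NodePort. This assumes p@ssw0rd is the password entered at the useradd prompt; it matches the credentials used in kibana.yml and logstash.conf below:

curl -u keynes:p@ssw0rd "http://192.168.3.230:30200/_cluster/health?pretty"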

Kibana deployment

kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://192.168.3.230:30200" ]
elasticsearch.username: keynes
elasticsearch.password: p@ssw0rd
xpack.screenshotting.browser.chromium.disableSandbox: true
monitoring.ui.container.elasticsearch.enabled: true
kubectl create configmap kibana-config -n es --from-file=kibana.yml
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: es
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:8.13.4
        resources:
          limits:
            cpu: 1
            memory: 1G
          requests:
            cpu: 0.5
            memory: 500Mi
        ports:
        - containerPort: 5601
          protocol: TCP
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/kibana/config/kibana.yml
            subPath: kibana.yml
      volumes:
      - name: config-volume
        configMap:
          name: kibana-config
          items:
            - key: kibana.yml
              path: kibana.yml
Service
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: es
spec:
  type: NodePort
  ports:
  - port: 5601
    targetPort: 5601
    protocol: TCP
    nodePort: 30010
  selector:
    k8s-app: kibana
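
With the ConfigMap, Deployment, and Service applied, Kibana should come up on NodePort 30010. The manifest file names below are placeholders:

kubectl apply -f kibana-deployment.yml -f kibana-service.yml
kubectl get pods -n es -l k8s-app=kibana
# Then open http://<node-ip>:30010 in a browser and log in,
# for example with the elastic user whose password was reset earlier.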

Logstash deployment

logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  dissect {
    mapping => {
      "message" => "%{ts} %{+ts} %{+ts} %{src} %{prog}: %{msg}"
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.3.230:30200"]
    user => "logstash"
    password => "p@ssw0rd"
  }
}

The output points at the NodePort exposed by the Elasticsearch Service (30200). The dissect filter splits each syslog line into timestamp, source host, program, and message fields; a worked example follows.
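
For example, a typical line from /home/log/messages (sample invented for illustration) dissects as follows:

# Input line:
#   Jun 15 10:15:01 node1 CRON[12345]: (root) CMD (run-parts /etc/cron.hourly)
# Resulting fields:
#   ts   => "Jun 15 10:15:01"
#   src  => "node1"
#   prog => "CRON[12345]"
#   msg  => "(root) CMD (run-parts /etc/cron.hourly)"

Note that classic syslog pads single-digit days with an extra space ("Jun  5"), which breaks an exact single-space delimiter; the right-padding modifier ("%{ts->} %{+ts} ...") tells dissect to tolerate the repeated delimiter.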

kubectl create configmap logstash-config -n es --from-file=logstash.conf
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: es
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: logstash
  template:
    metadata:
      labels:
        k8s-app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:8.13.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/pipeline/logstash.conf
            subPath: logstash.conf
      volumes:
      - name: config-volume
        configMap:
          name: logstash-config
          items:
            - key: logstash.conf
              path: logstash.conf
Service
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: es
spec:
  type: NodePort
  ports:
  - port: 5044
    targetPort: 5044
    protocol: TCP
    nodePort: 31012
  selector:
    k8s-app: logstash
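
Once Filebeat is started against this pipeline, events should land in Elasticsearch. The exact index or data-stream name depends on the Logstash version's defaults, so the simplest check is to list what was created (manifest file names are placeholders):

kubectl apply -f logstash-deployment.yml -f logstash-service.yml
curl -u keynes:p@ssw0rd "http://192.168.3.230:30200/_cat/indices?v"
# Finally, create a data view on the new index in Kibana
# (http://<node-ip>:30010) to browse the collected system logs.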