Setting Up ELK

This article introduces the ELK architecture: the roles and uses of Elasticsearch, Logstash, Kibana, and Filebeat. In production, Filebeat is widely used as a lightweight log-collection agent. The article walks through downloading and configuring each component, including the network host, port, and log-path settings, and ends with viewing the collected logs in Kibana.

ELK Overview

ELK is short for the Elasticsearch + Logstash + Kibana stack, an architecture for building a log-analysis platform. A common variant adds Filebeat:

Elasticsearch + Logstash + filebeat + Kibana

This variant adds Filebeat, a lightweight log-collection agent deployed on the client machines. Its advantage is that it consumes far fewer resources than Logstash, so production deployments usually take this approach. The drawback is that if Logstash goes down, logs can be lost.

Download the installers

Downloading from the community mirror tends to be faster:

https://elasticsearch.cn/download/

Download version 7.16.3 of each component.

Extract everything into an elk directory.
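After extraction, the layout looks roughly like this (the exact folder names depend on the archives you grabbed; these are illustrative for the 7.16.3 Windows builds):

elk\
  elasticsearch-7.16.3\
  kibana-7.16.3-windows-x86_64\
  logstash-7.16.3\
  filebeat-7.16.3-windows-x86_64\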

Configuring and starting ELK

Configuring and starting Elasticsearch

Edit elasticsearch.yml under the config directory:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# Binding to a non-loopback address such as 0.0.0.0 above triggers the
# production bootstrap checks; for a local single-node setup, skip them
# by declaring single-node discovery:
discovery.type: single-node
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features. 
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html

Double-click bin\elasticsearch.bat to start Elasticsearch.

Once it starts successfully, visit: http://localhost:9200/
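A healthy node answers with a JSON document along these lines (the node name, cluster name, and UUID will differ on your machine; most version fields are elided here):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "...",
  "version" : {
    "number" : "7.16.3",
    ...
  },
  "tagline" : "You Know, for Search"
}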

Configuring and starting Kibana

Edit kibana.yml under the config directory:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
server.host: "localhost"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "zh-CN"

Double-click bin\kibana.bat to start Kibana.

Visit: http://localhost:5601/
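If you prefer a scripted check over the browser, Kibana also exposes a status API, shown here with PowerShell's Invoke-RestMethod:

Invoke-RestMethod http://localhost:5601/api/status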

Configuring and starting Logstash

Create a pipeline file under the config directory (the distribution ships with logstash-sample.conf, which you can copy to, say, logstash.conf):

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "test"
    #user => "elastic"
    #password => "changeme"
  }
}

Start Logstash.
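From the Logstash directory, point -f at the pipeline file created above (assuming it was saved as config\logstash.conf):

.\bin\logstash.bat -f .\config\logstash.conf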

Configuring and starting Filebeat

Edit filebeat.yml in the Filebeat directory:

# ============================== Filebeat inputs ===============================

filebeat.inputs:


- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - E:/opt/excel-service-log/info/*.log
   
# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1


# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.



# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]


# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Start Filebeat from its root directory: .\filebeat -e -c filebeat.yml
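Before starting, you can sanity-check the configuration and the connection to Logstash with Filebeat's built-in test commands:

.\filebeat test config -c filebeat.yml
.\filebeat test output -c filebeat.yml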

Viewing the logs in Kibana

In Kibana, create an index pattern matching the index name from the Logstash output (test) under Stack Management → Index patterns, then browse the incoming log entries in Discover.
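To confirm documents are actually arriving, you can also query the index from Kibana's Dev Tools console. The index name test comes from the Logstash output above, and @timestamp is added by the pipeline:

GET test/_search
{
  "size": 5,
  "sort": [{ "@timestamp": "desc" }]
}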
