Setting up ELK: Elasticsearch + Kibana + Filebeat

1. Start Elasticsearch

docker run -d --name elasticsearch -p 9209:9200 -p 9309:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms256m -Xmx256m"   docker.io/elasticsearch
Here -e ES_JAVA_OPTS="-Xms256m -Xmx256m" caps the Elasticsearch JVM heap at 256 MB; the default is 2 GB.
To persist configuration and data on the host, first start a temporary Elasticsearch container, then use docker cp to copy the relevant directories out to the host, as sketched below.
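A minimal sketch of that copy step, assuming the temporary container from step 1 is still named elasticsearch and the host paths match the -v mounts in the next command:

mkdir -p /zhangm/docker/config/es/config /zhangm/docker/data/es/data
docker cp elasticsearch:/usr/share/elasticsearch/config/. /zhangm/docker/config/es/config
docker cp elasticsearch:/usr/share/elasticsearch/data/. /zhangm/docker/data/es/data
docker rm -f elasticsearch    # remove the temporary container before the final run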
docker run -d --name elasticsearch -p 9209:9200 -p 9309:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -v /zhangm/docker/config/es/config:/usr/share/elasticsearch/config -v /zhangm/docker/data/es/data:/usr/share/elasticsearch/data docker.io/elasticsearch
Problem: after mounting the host paths, Elasticsearch fails to start with a "permission denied" error on those directories, so you need to get the directory ownership right.
PS: the Elasticsearch service cannot be run as root.
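A common fix, assuming the image runs Elasticsearch as uid/gid 1000 (as the official images do, since it refuses to start as root), is to hand the mounted host directories to that user:

chown -R 1000:1000 /zhangm/docker/config/es /zhangm/docker/data/es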


2. Start Kibana

docker run --name kibana -p 5609:5601  --link elasticsearch:elasticsearch  -d kibana:latest 
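A quick way to verify both services are up (assuming the Docker host is 192.168.1.42, the address used in the Filebeat config below):

curl http://192.168.1.42:9209          # Elasticsearch should return cluster info
curl -I http://192.168.1.42:5609       # Kibana should answer over HTTP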


3. Install Filebeat
Download:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-x86_64.rpm

Install the RPM:

rpm -ivh filebeat-5.5.1-x86_64.rpm

Configuration file path:
/etc/filebeat/filebeat.yml
Log file path:
/var/log/filebeat
Note: each start creates a fresh filebeat log file; the one from the previous run is renamed to filebeat.1.
Start command:
systemctl restart filebeat
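A quick health check after starting (the log file name follows the rotation behavior noted above):

systemctl status filebeat
tail -n 20 /var/log/filebeat/filebeat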

Edit Filebeat's configuration file filebeat.yml, adding pipeline: "yum" under the Elasticsearch output and document_type: "yum" on the prospector (full file below):

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/yum.log
    #- c:\programdata\elasticsearch\logs\*
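  # document_type sets the Elasticsearch _type for events from this prospector
  # ("yum" here, matching the log source and the ingest pipeline name below).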
  document_type: "yum"
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.42:9209"]
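  # Route every event through the "yum" ingest pipeline registered in the next step.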
  pipeline: "yum"
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]


Register the pipeline in Elasticsearch
Method 1:
Create a JSON file named yum-pipeline.json:

{
  "description": "grok yum-pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{SYSLOGTIMESTAMP:date} %{WORD:method} ?: %{USERNAME:name}"]
      }
    },
    {
      "remove": {
        "field": "method"
      }
    }
  ]
}

Import it:
curl -H 'Content-Type:application/json' -XPUT 'http://192.168.1.42:9209/_ingest/pipeline/yum' -d@yum-pipeline.json
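To confirm the pipeline was registered, fetch it back:

curl http://192.168.1.42:9209/_ingest/pipeline/yum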

Method 2:
Open the Kibana UI, go to Dev Tools, and run:

PUT _ingest/pipeline/yum
{
    "description": "...",
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{SYSLOGTIMESTAMP:date} %{WORD:method} ?: %{USERNAME:name}"]
        }
      },
      {
        "remove": {
          "field": "message"
        }
      },
      {
        "date": {
          "field": "date",
          "formats": [
            "MMM dd HH:mm:ss",
            "MMM d HH:mm:ss"
          ]
        }
      }
    ]
}
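To sanity-check the grok pattern before indexing real data, the ingest simulate API can be run from Dev Tools as well (the sample message below is a made-up but typical /var/log/yum.log entry):

POST _ingest/pipeline/yum/_simulate
{
  "docs": [
    { "_source": { "message": "Jul 26 12:00:01 Installed: wget-1.14-18.el7.x86_64" } }
  ]
}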

 

Reference: http://blog.51cto.com/dressame/2166174?source=dra
