ELK Environment Deployment in Detail

Introduction to ELK

Core Components

ELK consists of three components: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, support for multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
Kibana is an open-source, free tool that provides a friendly web interface for analyzing the logs handled by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

The Four Components

Logstash: the Logstash server side, used to collect logs;
Elasticsearch: stores all kinds of logs;
Kibana: a web interface for querying and visualizing logs;
Logstash Forwarder: the Logstash client side, which ships logs to the Logstash server over the lumberjack protocol.

ELK Workflow

Deploy Logstash on every server whose logs need to be collected; these instances act as Logstash agents (Logstash shippers) that monitor, filter, and collect the logs and send the filtered content to Redis. A Logstash indexer then gathers the logs from Redis and hands them to the full-text search service Elasticsearch, where you can run custom searches, and Kibana builds on those searches to present the results in web pages. A minimal sketch of the two Logstash roles follows.
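
The following sketch shows the shipper and indexer configurations side by side, reusing the Redis and Elasticsearch address (192.168.201.73) and the Redis key used later in this article; the file input path is only an illustrative assumption.

---------------------------- shipper.conf (sketch) -------------------
input { file { path => "/var/log/messages" } }   # example log file to ship; adjust to your own
output {
  redis {
    host => '192.168.201.73'       # same Redis instance as in the configs below
    data_type => 'list'
    key => 'logstash:redis'
  }
}
----------------------------------------------------------------------
---------------------------- indexer.conf (sketch) -------------------
input {
  redis {
    host => '192.168.201.73'
    data_type => 'list'
    key => 'logstash:redis'
  }
}
output {
  elasticsearch { host => "192.168.201.73" protocol => "http" }
}
----------------------------------------------------------------------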

ELK Documentation and Resources

ELK website: https://www.elastic.co/
Official ELK documentation: https://www.elastic.co/guide/index.html
ELK Chinese handbook: http://kibana.logstash.es/content/elasticsearch/monitor/logging.html
Note: ELK can be installed in two ways:
(1) Integrated environment: Logstash offers an integrated package that bundles the full set of three components, so you install a single package.
(2) Standalone environment: the three components are installed and run separately, each doing its own job. (This is the more common approach.)

Setting Up the ELK Environment

Logstash Deployment and Configuration

Installing Logstash
Note: Logstash depends on a JDK.
First run java -version to check the server's Java environment; if no Java environment is installed, install one before continuing.
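
A minimal check-and-install sketch, assuming a yum-based system; the OpenJDK package name below is an assumption and may differ in your repositories.

java -version                        # check whether a JDK is already present
yum install -y java-1.8.0-openjdk    # install OpenJDK only if the check above fails (package name is an assumption)
java -version                        # verify the installation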

wget https://download.elastic.co/logstash/logstash/logstash-1.5.4.tar.gz
tar zxf logstash-1.5.4.tar.gz -C /usr/local/
Configure the environment variables for Logstash
echo "export PATH=\$PATH:/usr/local/logstash-1.5.4/bin" > /etc/profile.d/logstash.sh
. /etc/profile
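
To confirm that the PATH change took effect, a quick check (the expected path simply mirrors the profile script written above):

which logstash    # should print /usr/local/logstash-1.5.4/bin/logstash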
Starting Logstash
Common Logstash parameters
-e : pass the Logstash configuration on the command line; useful for quick tests;
-f : point Logstash at a configuration file; suitable for production;
Logstash configuration in detail

Below we use the -e parameter to pass the Logstash configuration on the command line for a quick test, printing the output straight to the screen.

# logstash -e "input {stdin{}} output {stdout{}}"            
my name is MikePeng.    //typed by hand, then press Enter; the result comes back after about 10 seconds
Logstash startup completed
2016-12-26T13:55:50.660Z 0.0.0.0 my name is MikePeng.
This output simply echoes the input back unchanged...

Next we again pass the configuration with -e for a quick test, this time printing each event to the screen as a structured hash via the rubydebug codec.

# logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
my name is MikePeng.    //typed by hand, then press Enter; the result comes back after about 10 seconds
Logstash startup completed
{
      "message" => "my name is MikePeng.",
     "@version" => "1",
   "@timestamp" => "2016-12-26T13:57:31.851Z",
         "host" => "0.0.0.0"
}

Starting Logstash with a configuration file

vim logstash-simple.conf 
----------------------------logstash-simple.conf----------------
input { stdin {} }
output {
  stdout { codec=> rubydebug }
}
----------------------------------------------------------------
logstash -f logstash-simple.conf    //start in the normal way
Logstash startup completed
logstash agent -f logstash-simple.conf --verbose //start with verbose output (debug mode)
Pipeline started {:level=>:info}
Logstash startup completed
hello world.    //typed by hand
{
         "message" => "hello world.",
         "@version" => "1",
         "@timestamp" => "2016-12-26T14:01:43.724Z",
         "host" => "0.0.0.0"
}

Storing Logstash output in Redis

vim logstash_to_redis.conf
-------------------------- logstash_to_redis.conf ------------
input { stdin { } }
output {
   stdout { codec => rubydebug }
   redis {
       host => '192.168.201.73:7351'
       data_type => 'list'
       key => 'logstash:redis'
   }
}
---------------------------------------------------------------
Note: if you see "Failed to send event to Redis", the connection to Redis failed or Redis is not installed; please check...
Check which port Logstash is listening on:
logstash agent -f logstash_to_redis.conf --verbose
netstat -tnlp |grep java
tcp        0      0 :::9301                     :::*                        LISTEN      1326/java
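
To confirm that events are actually landing in Redis, you can inspect the configured list key directly with redis-cli (a sketch, assuming redis-cli is available and Redis answers on the address and port configured above):

redis-cli -h 192.168.201.73 -p 7351 llen logstash:redis         # number of queued events
redis-cli -h 192.168.201.73 -p 7351 lrange logstash:redis 0 0   # peek at the oldest queued event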

Having Logstash consume Kafka messages and write them to Elasticsearch

vim kafka_logstash_elasticsearch.conf
-------------------------- kafka_logstash_elasticsearch.conf ----------------
input {
   kafka {
       zk_connect => "192.168.201.73:2181" #ZooKeeper address used by the Kafka cluster
       group_id => "elk_consumer"      #consumer group this consumer belongs to
       topic_id => "boyaa"             #topic to consume
       reset_beginning => false        #do not rewind to the beginning of the topic; resume from the stored offset
       consumer_threads => 5           #number of consumer threads
       decorate_events => true         #add Kafka metadata (topic, consumer group) to each event
   }
}
output {
    elasticsearch {
       host => "192.168.201.73"
       codec => "json"
       protocol => "http"
    }
}
-------------------------------------------------------------------------------
logstash agent -f kafka_logstash_elasticsearch.conf --verbose

Elasticsearch Deployment and Configuration

Install Elasticsearch
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.tar.gz
tar zxf elasticsearch-1.7.2.tar.gz -C /usr/local/

Modify the Elasticsearch configuration file elasticsearch.yml

vim /usr/local/elasticsearch-1.7.2/config/elasticsearch.yml
-------------------------------elasticsearch.yml-----------------------------
discovery.zen.ping.multicast.enabled: false        #disable multicast discovery; if another machine on the LAN has port 9300 open, the service may fail to start
network.host: 192.168.201.73    #bind to a specific host address; technically optional, but best to set it, because otherwise the later
                                #Kibana integration reports HTTP connection errors (the visible symptom is listening on :::9200 rather than 0.0.0.0:9200)
http.cors.allow-origin: "/.*/"
http.cors.enabled: true     #these two settings fix the Kibana integration error that claims your Elasticsearch version is too old, which is not actually the case
-----------------------------------------------------------------------------
Starting Elasticsearch
/usr/local/elasticsearch-1.7.2/bin/elasticsearch          #logs are written to stdout
/usr/local/elasticsearch-1.7.2/bin/elasticsearch -d       #start as a daemon
nohup /usr/local/elasticsearch-1.7.2/bin/elasticsearch > /var/log/elasticsearch.log 2>&1 &
netstat -tnlp |grep java      #check which ports Elasticsearch is listening on
tcp        0      0 :::9200                     :::*                        LISTEN      7407/java           
tcp        0      0 :::9300                     :::*                        LISTEN      7407/java
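
Before wiring Logstash to Elasticsearch, it is worth confirming that the HTTP API responds; a quick check against the address configured above:

curl http://192.168.201.73:9200/                          #basic node and cluster information
curl http://192.168.201.73:9200/_cluster/health?pretty    #cluster health status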
Integrating Logstash with Elasticsearch
Send the Logstash output to Elasticsearch:
vim logstash-elasticsearch.conf 
----------------------------logstash-elasticsearch.conf-----------------------
input { stdin {} }
output {
   elasticsearch { host => "192.168.201.73" }    
   stdout { codec=> rubydebug }
}
------------------------------------------------------------------------------
/usr/local/logstash-1.5.4/bin/logstash agent -f logstash-elasticsearch.conf  #start Logstash
Pipeline started {:level=>:info}
Logstash startup completed
python linux java c++    //typed by hand
{
         "message" => "python linux java c++",
         "@version" => "1",
         "@timestamp" => "2016-12-26T14:51:56.899Z",
         "host" => "0.0.0.0"
}

Use curl to send a request and check whether Elasticsearch received the data:

curl http://192.168.201.73:9200/_search?pretty
{
 "took" : 28,
 "timed_out" : false,
 "_shards" : {
   "total" : 5,
   "successful" : 5,
   "failed" : 0
 },
 "hits" : {
   "total" : 1,
   "max_score" : 1.0,
   "hits" : [ {
     "_index" : "logstash-2016.12.26",
     "_type" : "logs",
     "_id" : "AVBH7-6MOwimSJSPcXjb",
     "_score" : 1.0,
     "_source":{"message":"python linux java c++","@version":"1","@timestamp":"2016-12-26T14:51:56.899Z","host":"0.0.0.0"}
   } ]
 }
}
Integrating Redis + Logstash + Elasticsearch
vim redis-logstash-Elasticsearch.conf
---------------------------------- redis-logstash-Elasticsearch.conf ---------------------
input {
   redis {
       host => '192.168.201.73'  # no password set here to keep testing simple; in practice you should set one
       data_type => 'list'
       port => "6379"
       key => 'logstash:redis' #custom key name
       type => 'redis-input'   #custom type
   }
}
output {
   elasticsearch {
       host => "192.168.201.73"
       codec => "json"
       protocol => "http"  #for Elasticsearch 1.0+ the protocol must be set explicitly to http
   }
}
------------------------------------------------------------------------------
/usr/local/logstash-1.5.4/bin/logstash agent -f redis-logstash-Elasticsearch.conf  #start Logstash
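
A quick end-to-end check of this pipeline (a sketch; the test payload and the use of redis-cli are only for illustration): push one JSON event onto the configured list key, then query Elasticsearch for it.

redis-cli -h 192.168.201.73 -p 6379 rpush logstash:redis '{"message":"redis pipeline test"}'   #inject a test event
curl 'http://192.168.201.73:9200/_search?q=message:pipeline&pretty'                            #the event should show up shortly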
Installing an Elasticsearch plugin
Note: the elasticsearch-kopf plugin lets you browse the data stored in Elasticsearch. To install elasticsearch-kopf, simply run the following commands from the directory where Elasticsearch is installed:
cd /usr/local/elasticsearch-1.7.2/bin/
./plugin install lmenezes/elasticsearch-kopf
> Installing lmenezes/elasticsearch-kopf...
  Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip...
  Downloading .............................................................................................
  Installed lmenezes/elasticsearch-kopf into /usr/local/elasticsearch-1.7.2/plugins/kopf

  The plugin installation may report a failure; this is most likely due to network conditions...
  -> Installing lmenezes/elasticsearch-kopf...
  Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip...
  Failed to install lmenezes/elasticsearch-kopf, reason: failed to download out of all possible locations..., use --verbose to get detailed information
 
  The workaround is to download the plugin manually instead of using the plugin install command...
  cd /usr/local/elasticsearch-1.7.2/plugins
  wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
  unzip master.zip
  mv elasticsearch-kopf-master kopf 
  These steps are exactly equivalent to the plugin install command.
  
  netstat -tnlp |grep java
  tcp        0      0 :::9200                     :::*                        LISTEN      7969/java           
  tcp        0      0 :::9300                     :::*                        LISTEN      7969/java           
  tcp        0      0 :::9301                     :::*                        LISTEN      8015/java

Access the kopf page in a browser to view the data stored in Elasticsearch.
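
For an Elasticsearch 1.x site plugin such as kopf, the page is served by Elasticsearch itself, so with the address configured above the URL should look like:

http://192.168.201.73:9200/_plugin/kopf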

 

Installing Kibana

wget https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
tar zxf kibana-4.1.2-linux-x64.tar.gz -C /usr/local
# vim /usr/local/kibana-4.1.2-linux-x64/config/kibana.yml
elasticsearch_url: "http://192.168.201.73:9200"
/usr/local/kibana-4.1.2-linux-x64/bin/kibana      #start Kibana
If it prints output like the following, Kibana started successfully.
{"name":"Kibana","hostname":"localhost.localdomain","pid":1943,"level":30,"msg":"No existing kibana index found","time":"2016-12-26T00:39:21.617Z","v":0}
{"name":"Kibana","hostname":"localhost.localdomain","pid":1943,"level":30,"msg":"Listening on 0.0.0.0:5601","time":"2016-12-26T00:39:21.637Z","v":0}
By default Kibana listens on port 5601.
Access Kibana from a browser:
http://192.168.201.73:5601/#/settings/indices/?_g=()
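
On this settings page Kibana 4 asks for an index pattern (typically logstash-*). To see which Logstash indices actually exist before defining the pattern, one possible check against the Elasticsearch address above:

curl 'http://192.168.201.73:9200/_cat/indices?v'    #lists indices such as logstash-2016.12.26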

Kafka + ELK Integration
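
This path was already configured in kafka_logstash_elasticsearch.conf above. A minimal way to exercise it is to publish a test message to the topic with Kafka's console producer and then look for it in Elasticsearch; this is only a sketch, and the broker address and port (9092) are assumptions about your Kafka setup.

kafka-console-producer.sh --broker-list 192.168.201.73:9092 --topic boyaa   #type a test line and press Enter (broker address/port assumed)
curl http://192.168.201.73:9200/_search?pretty                              #the message should appear in a logstash-* index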

Reposted from: https://my.oschina.net/u/1757002/blog/868527
