Example 1: the 360 setup

Environment

  • Three machines; the layout is shown in the figure below
    • 192.168.5.172: shipper, collecting the logs produced by kafka
    • 192.168.5.175:
      • indexer, aggregating the logs collected by the shipper
      • redis, the buffer queue between shipper and indexer
      • elasticsearch, storing and indexing the logs aggregated by the indexer
    • 192.168.5.176: kibana, providing web-based log search and display
  • logstash version: 1.4.2
  • elasticsearch version: 1.1.1
(figure: ELK deployment layout across the three machines)

shipper

Create a configuration directory conf, and in it a configuration file shipper.conf:

[root@datanode3 conf]# pwd
/usr/local/logstash/conf
[root@datanode3 conf]# cat shipper.conf 
input {
    file {
        type => "kafka_file"
        path => "/usr/local/kafka/logs-1/server.log"
        start_position => "beginning"
    }
}

output {
    stdout {}
    redis {
        host => "192.168.5.175"
        port => 6379
        data_type => "list"
        key => "kafka"
    }
}

Start it:

bin/logstash agent --verbose -f conf/shipper.conf --log /tmp/logstash.log

redis

Skipped.
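Even without custom configuration, the queue is easy to inspect. A minimal sanity check, assuming redis-cli is installed on a box that can reach 175 (host, port and key are the ones from shipper.conf above):

```shell
# Length of the "kafka" list the shipper pushes to (data_type => "list").
# A non-zero length means events are queuing; 0 can simply mean the
# indexer is keeping up. Falls back to a note if redis is unreachable.
redis-cli -h 192.168.5.175 -p 6379 llen kafka 2>/dev/null \
    || echo "redis not reachable yet"
```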

indexer

Create a configuration directory conf, and in it a configuration file indexer.conf:

[root@namenode0 conf]# pwd
/usr/local/logstash/conf
[root@namenode0 conf]# cat indexer.conf 
input {
    redis {
        host => "127.0.0.1"
        port => 6379
        type => "kafka_redis"
        data_type => "list"
        key => "kafka"
    }
}

output {
    stdout {}
    elasticsearch {
        cluster => "elasticsearch"
        codec => "json"
        protocol => "http"
    }
}

Start it:

bin/logstash agent --verbose -f conf/indexer.conf --log /tmp/logstash.log
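Once the indexer is running, the events it pulls off redis should appear in elasticsearch. A hedged spot check, assuming curl is available and the elasticsearch output is using its default logstash-* daily index naming:

```shell
# Count documents across the logstash-* indices the indexer writes to.
# Falls back to a note when elasticsearch is not reachable yet.
curl -s --max-time 5 'http://192.168.5.175:9200/logstash-*/_count?pretty' \
    || echo "elasticsearch not reachable yet"
```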

elasticsearch

Start it directly with the default configuration:

[root@namenode0 kibana]# cd /usr/local/elasticsearch/bin/
[root@namenode0 bin]# ./elasticsearch -d
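A quick way to confirm the daemon actually came up is the cluster health API (a minimal check, assuming curl on the same box; a fresh single-node cluster typically reports "yellow" because replica shards have nowhere to be assigned):

```shell
# Ask the local node for cluster health; prints a small JSON document,
# or a fallback note when the node is not reachable.
curl -s --max-time 5 'http://127.0.0.1:9200/_cluster/health?pretty' \
    || echo "elasticsearch not reachable yet"
```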

kibana

As mentioned earlier, logstash ships with kibana bundled, located under /usr/local/logstash/vendor:

[root@localhost vendor]# pwd
/usr/local/logstash/vendor
[root@localhost vendor]# ls
bundle  collectd  geoip  jar  kibana  ua-parser

kibana is built from HTML and JavaScript, so it can simply be served by any web server; copy the whole kibana directory into nginx's static file directory:

[root@localhost html]# pwd
/usr/local/nginx/html
[root@localhost html]# ls
50x.html  index.html  kibana

Before using it, one more step: set the elasticsearch parameter in kibana's configuration file config.js, i.e. elasticsearch: "http://192.168.5.175:9200",. This is needed because elasticsearch is deployed on the 175 machine here; if elasticsearch and kibana were on the same machine, the default configuration would suffice (details below). After saving, make sure nginx is running, then visit http://192.168.5.176/kibana/index.html#/dashboard/file/default.json and you will see kibana's home page.
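The config.js change is a one-line substitution, so it can be scripted. A sketch, assuming GNU sed; the KIBANA_CONF variable and the seeded default line are illustrative, not verbatim from a particular kibana release:

```shell
# Rewrite the elasticsearch: line in config.js to point at the ES node
# on 192.168.5.175. On this setup the file lives at
# /usr/local/nginx/html/kibana/config.js.
CONF="${KIBANA_CONF:-config.js}"
# For illustration only: seed a default-style line if the file is absent.
[ -f "$CONF" ] || echo 'elasticsearch: "http://"+window.location.hostname+":9200",' > "$CONF"
sed -i 's|elasticsearch: .*|elasticsearch: "http://192.168.5.175:9200",|' "$CONF"
grep 'elasticsearch:' "$CONF"
# prints: elasticsearch: "http://192.168.5.175:9200",
```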

Problems encountered and solutions

  • How exactly should the elasticsearch parameter in kibana's config.js be set?

    Some articles will tell you to set this parameter to http://localhost:9200. Perhaps earlier versions allowed that (unverified), but in 1.4.2 it does not work and produces an error. Below is a comment from the configuration file (reading it at the time, I agonized: to configure or not to configure, that was the question):

    * ==== elasticsearch
    *
    * The URL to your elasticsearch server. You almost certainly don't
    * want +http://localhost:9200+/ here. Even if Kibana and Elasticsearch are on
    * the same host. By default this will attempt to reach ES at the same host you have
    * kibana installed on. You probably want to set it to the FQDN of your
    * elasticsearch host

    If kibana and elasticsearch run on the same host, the default configuration is in fact fine. With the defaults, the kibana home page shows the following passage, which spells it out:

    Configuration
    If Kibana and Elasticsearch are on the same host, and you're using the default Elasticsearch port, then you're all set. Kibana is configured to use that setup by default!
    If not, you need to edit config.js and set the elasticsearch parameter with the URL (including port, probably 9200) of your Elasticsearch server. The host part should be the entire, fully qualified domain name, or IP, not localhost.

    Then again, if you configure it wrong, the page won't load and you never get to see that message. Awkward.

  • Error messages: “Upgrade Required Your version of Elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or above.” and “Error Could not reach http://localhost:9200/_nodes. If you are using a proxy, ensure it is configured correctly”

    If you set the parameter to elasticsearch: "http://localhost:9200", you will get the errors above. The correct configuration was covered in the previous item.

    One more thing to watch: port 9200 must be open (reachable through the firewall) on the elasticsearch host.
