[Case Study] Building a Log Analysis System with ELK (Elasticsearch + Logstash + Kibana) + Redis


Prerequisites

System: Linux x64

JDK: 1.7.0_67

Elasticsearch: 2.4.1

Logstash: 2.4.0

Kibana: 4.6.1-linux-x86_64

Redis: 3.0.2

 

Note:

ELK downloads: https://www.elastic.co/downloads

 

Setting Up the Platform

1. Set up the client-side Logstash (shipper)

(1) Unpack the archive in the target directory: tar -zxvf logstash-2.4.0.tar.gz

(2) Write the configuration file

cd logstash-2.4.0

mkdir config

vi config/client.conf

 

The configuration file contents are as follows:

 
  input {
    file {
      path => ["/xxx/access*.log"]
      start_position => "beginning"
      tags => ["haha"]
    }
  }
  output {
    redis {
      host => "10.168.242.92"
      port => 7380
      data_type => "list"
      key => "logstash:redis"
    }
  }
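With data_type => "list", the shipper pushes each event onto the Redis list as a JSON document, and the indexer pops it off the other end. A minimal sketch of that round trip, using an in-memory deque to stand in for the Redis list; the field values are illustrative assumptions, but the @timestamp/@version/message/host/path/tags envelope is what Logstash 2.x emits by default:

```python
import json
from collections import deque

# A deque stands in for the Redis list "logstash:redis".
redis_list = deque()

# What the shipper pushes: one JSON document per log line (Logstash 2.x
# event envelope; the concrete values here are made up for illustration).
event = {
    "message": "2016-10-01 12:00:00|10.0.0.1|52110|10.0.0.2|20880|demo|UserService|1.0|getUser|[1]|null|15|",
    "@version": "1",
    "@timestamp": "2016-10-01T04:00:00.000Z",
    "path": "/xxx/access-20161001.log",
    "host": "client-host",
    "tags": ["haha"],
}
redis_list.append(json.dumps(event))   # shipper side: RPUSH

# Indexer side: BLPOP, then decode back into an event.
consumed = json.loads(redis_list.popleft())
print(consumed["tags"])                # ['haha']
```

Because the list is plain FIFO, the shipper and indexer only need to agree on the key name and the JSON encoding; Redis itself stays stateless beyond the queued entries.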

(3) Start it

nohup ./bin/logstash -f config/client.conf &

 

2. Set up the server-side Logstash (indexer)

Same as step 1, but with a slightly different configuration file:

 
  input {
    redis {
      host => "192.168.52.102"
      port => "7380"
      data_type => "list"
      key => "logstash:redis"
      type => "redis-input"
    }
  }
  filter {
    # extract custom data fields
    grok {
      match => {
        "message" => ["%{DATA:dateTime}\|%{IP:clientIP}\|%{DATA:clientPort}\|%{IP:serverIP}\|%{DATA:serverPort}\|%{DATA:group}\|%{DATA:serviceName}\|%{DATA:version}\|%{DATA:methodName}\|%{DATA:methodParam}\|%{DATA:exception}\|%{NUMBER:executionTime}\|"]
      }
    }
    #date {
    #  target => "runtime"
    #  locale => "en"
    #  match => ["dateTime", "yyyy-MM-dd HH:mm:ss"]
    #}
  }
  output {
    # note: this tag must match a tag set by the shipper
    # (the client config above tags events with "haha")
    if "wishdubbo" in [tags] {
      elasticsearch {
        hosts => ["115.231.103.57:9200"]
        index => "info"
        # use a custom index template
        manage_template => false
        # template name
        template_name => "info"
        # elasticsearch uses a custom index template; point to the template file here
        template => "/data/project/chenkw/elasticsearch-2.4.1/config/templates/info.json"
      }
    }
  }

 

The template file referenced here is covered in the Elasticsearch section below.

Redis serves only as a buffer queue here to avoid data loss; it is not strictly required.
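The grok pattern above splits a pipe-delimited access-log line into named fields. As a rough sanity check of that pattern outside Logstash, here is an approximate Python regex equivalent (DATA becomes a non-greedy match, IP a simple dotted quad, NUMBER plain digits; the sample log line is fabricated for illustration):

```python
import re

# Rough Python equivalent of the grok pattern: twelve pipe-delimited fields.
PATTERN = re.compile(
    r"(?P<dateTime>.*?)\|"
    r"(?P<clientIP>\d{1,3}(?:\.\d{1,3}){3})\|"
    r"(?P<clientPort>.*?)\|"
    r"(?P<serverIP>\d{1,3}(?:\.\d{1,3}){3})\|"
    r"(?P<serverPort>.*?)\|"
    r"(?P<group>.*?)\|"
    r"(?P<serviceName>.*?)\|"
    r"(?P<version>.*?)\|"
    r"(?P<methodName>.*?)\|"
    r"(?P<methodParam>.*?)\|"
    r"(?P<exception>.*?)\|"
    r"(?P<executionTime>\d+)\|"
)

line = "2016-10-01 12:00:00|10.0.0.1|52110|10.0.0.2|20880|demo|UserService|1.0|getUser|[1]|null|15|"
fields = PATTERN.match(line).groupdict()
print(fields["serviceName"], fields["executionTime"])  # UserService 15
```

For iterating on the real pattern against real log lines, the grok debugger linked below is the more convenient tool.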

 

 

Related reference documents:

Logstash best practices: http://udn.yyuap.com/doc/logstash-best-practice-cn/output/elasticsearch.html

Logstash grok debugger: https://grokdebug.herokuapp.com/

 

3. Set up Elasticsearch

(1) Unpack the archive in the target directory: tar -zxvf elasticsearch-2.4.1.tar.gz

(2) Write the configuration file

vi elasticsearch-2.4.1/config/elasticsearch.yml

with the following contents:

 
  cluster.name: es_cluster
  node.name: node0
  path.data: /xxx/elasticsearch-2.4.1/data
  path.logs: /xxx/elasticsearch-2.4.1/logs
  network.host: 23.453.103.57
  http.port: 9200

(3) Custom index template

When our logs are ingested directly into Elasticsearch, fields such as numbers and IP addresses are mapped as string by the default template, which prevents Kibana from running statistics on those fields.

A custom index template solves this problem.
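To see why the mapping matters: an analyzed string field is split into tokens at index time, so a terms aggregation counts tokens rather than whole values. The toy tokenizer below (a crude simplification of Elasticsearch's standard analyzer, with fabricated sample values) illustrates the difference between an analyzed field and a not_analyzed raw field:

```python
import re
from collections import Counter

def analyze(value):
    """Crude stand-in for the standard analyzer: lowercase, split on non-alphanumerics."""
    return [t for t in re.split(r"[^0-9a-zA-Z]+", value.lower()) if t]

logs = ["2016-10-01 12:00:00", "2016-10-01 12:00:05"]

# Aggregating on the analyzed field counts individual tokens...
analyzed_terms = Counter(t for v in logs for t in analyze(v))
# ...while a not_analyzed (raw) field counts whole values.
raw_terms = Counter(logs)

print(analyzed_terms["2016"])             # 2  (a date fragment, not the full timestamp)
print(raw_terms["2016-10-01 12:00:00"])   # 1
```

This is why the template below maps fields like serviceName and methodName as not_analyzed, and maps numeric and IP fields to proper types instead of string.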

 

A reference template:

 
  {
    "order": 0,
    "template": "info*",
    "settings": {
      "index": {
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "_default_": {
        "dynamic_templates": [
          {
            "message_field": {
              "mapping": {
                "index": "analyzed",
                "omit_norms": true,
                "fielddata": {
                  "format": "disabled"
                },
                "type": "string"
              },
              "match_mapping_type": "string",
              "match": "message"
            }
          },
          {
            "string_fields": {
              "mapping": {
                "index": "analyzed",
                "omit_norms": true,
                "fielddata": {
                  "format": "disabled"
                },
                "type": "string",
                "fields": {
                  "raw": {
                    "index": "not_analyzed",
                    "ignore_above": 256,
                    "type": "string"
                  }
                }
              },
              "match_mapping_type": "string",
              "match": "*"
            }
          }
        ],
        "properties": {
          "@timestamp": {
            "type": "date"
          },
          "geoip": {
            "dynamic": true,
            "properties": {
              "location": {
                "type": "geo_point"
              },
              "longitude": {
                "type": "float"
              },
              "latitude": {
                "type": "float"
              },
              "ip": {
                "type": "ip"
              }
            }
          },
          "@version": {
            "index": "not_analyzed",
            "type": "string"
          },
          "dateTime": {
            "type": "string"
          },
          "clientIP": {
            "type": "ip"
          },
          "clientPort": {
            "type": "string",
            "index": "not_analyzed"
          },
          "serverIP": {
            "type": "ip"
          },
          "serverPort": {
            "type": "string",
            "index": "not_analyzed"
          },
          "message": {
            "type": "string"
          },
          "serviceName": {
            "type": "string",
            "index": "not_analyzed"
          },
          "version": {
            "type": "string",
            "index": "not_analyzed"
          },
          "methodName": {
            "type": "string",
            "index": "not_analyzed"
          },
          "executionTime": {
            "type": "long"
          }
        },
        "_all": {
          "enabled": true,
          "omit_norms": true
        }
      }
    },
    "aliases": {}
  }

Push the template to Elasticsearch with the following command:

curl -XPUT http://123.21.103.517:9200/_template/template_info -d '@/xxx/elasticsearch-2.4.1/config/templates/info.json'
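Before pushing, it can be worth sanity-checking that the template body is valid JSON and that its index pattern actually covers the index your Logstash output writes to. A minimal sketch (the check_template helper and the tiny inline template body are illustrations, not part of the original setup; Elasticsearch's pattern matching is glob-like with *, which fnmatch approximates):

```python
import json
from fnmatch import fnmatch

def check_template(text, index_name):
    """Parse a template body and verify its pattern covers the given index name."""
    tpl = json.loads(text)        # raises ValueError on malformed JSON
    pattern = tpl["template"]     # e.g. "info*"
    return fnmatch(index_name, pattern)

# Tiny inline stand-in for info.json:
body = '{"template": "info*", "mappings": {"_default_": {}}}'
print(check_template(body, "info"))        # True
print(check_template(body, "access-log"))  # False
```

A template whose pattern does not match the index is silently ignored at index-creation time, which is an easy mistake to miss.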

 

(4) Start it

nohup ./bin/elasticsearch &

 

(5) Notes

A. When editing configuration items in elasticsearch.yml, do not separate a key and its value with an equals sign (=). Use a colon followed by a space, e.g. cluster.name: es_cluster. Otherwise startup will fail with an error.

 

(6) Additional reading

Elasticsearch guide (Chinese edition): http://es.xiaoleilu.com/

Elasticsearch best practices: http://udn.yyuap.com/doc/logstash-best-practice-cn/output/elasticsearch.html

 

4. Set up Kibana

(1) Unpack the archive in the target directory: tar -zxvf kibana-4.6.1-linux-x86_64.tar.gz

(2) Write the configuration file

vi kibana-4.6.1-linux-x86_64/config/kibana.yml

 

 
  server.port: 5601
  server.host: 115.231.103.57
  elasticsearch.url: http://123.21.103.517:9200
  kibana.index: ".kibana"

(3) Start it

nohup ./bin/kibana &

 

(4) Notes:

A. After starting Kibana on a Linux server with ./kibana, ps -ef | grep kibana may fail to find the process. Use netstat -nap | grep 5601 (the port number) to locate it instead.

 

5. Results

Elasticsearch UI (via the head plugin): http://123.21.103.517:9200/_plugin/head/

Kibana UI: http://123.21.103.517:5601/

 

 

 

 

 

 

References

http://www.tuicool.com/articles/YR7RRr

http://ju.outofmemory.cn/entry/270938

http://tshare365.com/archives/2344.html

Reprinted from: https://my.oschina.net/williambrvheart/blog/803307
