ELK = Elasticsearch + Logstash + Kibana
I. Elasticsearch: single node
1. Install Elasticsearch
Elasticsearch needs Java to run; elasticsearch-head provides a graphical front end.
[root@server1 ~]
elasticsearch-2.3.3.rpm  elasticsearch-head-master.zip  jdk-8u121-linux-x64.rpm
[root@server1 ~]
warning: elasticsearch-2.3.3.rpm: Header V4 RSA/SHA1 Signature, key ID d88e42b4: NOKEY
Preparing...
1:jdk1.8.0_121
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
2:elasticsearch
You can start elasticsearch service by executing
sudo service elasticsearch start
2. Configure Elasticsearch
[root@server1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-application ## cluster name
node.name: server1 ## node name; make sure it resolves (DNS or /etc/hosts)
path.data: /var/lib/elasticsearch/ ## data directory
path.logs: /var/log/elasticsearch/ ## log directory
bootstrap.mlockall: true ## lock the heap in memory
network.host: 172.25.120.1 ## bind address
http.port: 9200 ## HTTP port
[root@server1 ~]# /etc/init.d/elasticsearch start
Starting elasticsearch: [ OK ]
{
  "name" : "server1",
  "cluster_name" : "my-application",
  "version" : {
    "number" : "2.3.3",
    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
    "build_timestamp" : "2016-05-17T15:40:04Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
4. Install elasticsearch-head
[root@server1 ~]
-> Installing from file:/root/elasticsearch-head-master.zip...
Trying file:/root/elasticsearch-head-master.zip ...
Downloading .........DONE
Verifying file:/root/elasticsearch-head-master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /usr/share/elasticsearch/plugins/head
Submit some test data and view it in head. With a single node the cluster shows yellow. Red: faulty; green: OK; yellow: primary shards are fine but replica shards are unassigned.
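The same color can be read from the cluster health API (e.g. `curl http://172.25.120.1:9200/_cluster/health`). A minimal sketch that extracts the `status` field from a response; the JSON below is illustrative, not captured from this cluster:

```shell
# Illustrative health response; a real one would come from
# curl -s http://172.25.120.1:9200/_cluster/health
health='{"cluster_name":"my-application","status":"yellow","number_of_nodes":1}'

# Pull out the status field without extra tooling
status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```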
II. Elasticsearch: multi-node cluster
1. Configure server1 (172.25.120.1)
[root@server1 ~]
discovery.zen.ping.unicast.hosts: ["server1", "server2", "server3"]
[root@server1 ~]
Stopping elasticsearch: [ OK ]
Starting elasticsearch: [ OK ]
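Since unicast discovery is configured with host names, every node must be able to resolve server1 through server3. Assuming the addresses used throughout this setup, /etc/hosts on each node would contain something like:

```
172.25.120.1 server1
172.25.120.2 server2
172.25.120.3 server3
```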
2. Configure server2 and server3
[root@server1 ~]
[root@server1 ~]
[root@server1 ~]
[root@server1 ~]
After installing on server2 and server3, their configuration matches server1's; in the config file only node.name and network.host need to change.
[root@server2 ~]
[root@server2 ~]
Starting elasticsearch: [ OK ]
Data is now stored across the cluster (distributed storage).
4. Configure node roles (separate the master from data storage)
server1 is set as a dedicated master and stores no data; server2 and server3 act as data nodes and store the data.
[root@server1 ~]
node.master: true
node.data: false
[root@server1 ~]
Stopping elasticsearch: [ OK ]
Starting elasticsearch: [ OK ]
[root@server2 ~]
node.master: false
node.data: true
[root@server2 ~]
Stopping elasticsearch: [ OK ]
Starting elasticsearch: [ OK ]
Check how the data is distributed across the nodes.
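The two flags combine into distinct roles. For example, a coordinating-only node (useful as a load-balancing entry point) disables both. A sketch using the same elasticsearch.yml flags:

```
## dedicated master (server1): eligible as master, stores no data
node.master: true
node.data: false

## data-only node (server2, server3): stores data, never becomes master
node.master: false
node.data: true

## coordinating-only node: routes requests, holds no data, never becomes master
node.master: false
node.data: false
```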
III. Logstash
1. Install Logstash
[root@server1 ~]
Preparing...
1:logstash
2. Test input and output
[root@server1 logstash]
Settings: Default pipeline workers: 1
Pipeline main started
hello nova
2018-07-30T03:07:36.291Z server1 hello nova
hello world
2018-07-30T03:08:22.033Z server1 hello world
^CSIGINT received. Shutting down the agent. {:level=>:warn}
stopping pipeline {:id=>"main"}
[root@server1 logstash]
Settings: Default pipeline workers: 1
Pipeline main started
hello demo
{
"message" => "hello demo" ,
"@version" => "1" ,
"@timestamp" => "2018-07-30T03:10:05.971Z" ,
"host" => "server1"
}
[root@server1 logstash]
Settings : Default pipeline workers: 1
Pipeline main started
hello xiaoer
^CSIGINT received. Shutting down the agent. {:level=>:warn}
3. Write a config file: print to the terminal and ship to Elasticsearch
[root@server1 logstash]
[root@server1 conf.d]
input {
stdin { }
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => ["172.25.120.1" ]
index => "logstash-%{+YYYY.MM.dd}"
}
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
bigyellow
{
"message" => "bigyellow" ,
"@version" => "1" ,
"@timestamp" => "2018-07-30T03:16:46.651Z" ,
"host" => "server1"
}
^CSIGINT received. Shutting down the agent. {:level=>:warn}
The earlier test index can be deleted first.
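The `%{+YYYY.MM.dd}` part of the index name expands from the event's @timestamp (in UTC), so one index is created per day. A sketch of the equivalent expansion with `date`:

```shell
# Logstash expands logstash-%{+YYYY.MM.dd} from the event's @timestamp (UTC);
# for an event generated right now, the resulting index name would be:
index="logstash-$(date -u +%Y.%m.%d)"
echo "$index"
```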
5. Output to a file
[root@server1 conf.d]
file {
path => "/tmp/test"
codec => line { format => "custom format: %{message}" }
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
westos
{
"message" => "westos" ,
"@version" => "1" ,
"@timestamp" => "2018-07-30T03:39:15.881Z" ,
"host" => "server1"
}
cgewfgdsycgewbh
{
"message" => "cgewfgdsycgewbh" ,
"@version" => "1" ,
"@timestamp" => "2018-07-30T03:39:21.915Z" ,
"host" => "server1"
}
[root@server1 conf.d]
custom format: westos
custom format: cgewfgdsycgewbh
6. Read from a file
[root@server1 conf.d]
file {
path => "/tmp/test"
start_position => "beginning"
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
{
"message" => "custom format: westos" ,
"@version" => "1" ,
"@timestamp" => "2018-07-30T03:42:32.577Z" ,
"path" => "/tmp/test" ,
"host" => "server1"
}
{
"message" => "custom format: cgewfgdsycgewbh" ,
"@version" => "1" ,
"@timestamp" => "2018-07-30T03:42:34.198Z" ,
"path" => "/tmp/test" ,
"host" => "server1"
}
7. Logstash as a syslog collection server
[root@server1 conf.d]
input {
syslog {
port => 514
}
}
output {
elasticsearch {
hosts => ["172.25.120.1" ]
index => "syslog-%{+YYYY.MM.dd}"
}
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
[root@server2 ~]
*.* @@172.25.120.1:514
[root@server2 ~]
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]
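In rsyslog's forwarding syntax the prefix selects the transport. Assuming the same target host, the two variants would be:

```
# /etc/rsyslog.conf forwarding rules (sketch):
*.* @172.25.120.1:514     # single @  -> forward all facilities over UDP
*.* @@172.25.120.1:514    # double @@ -> forward over TCP
```

The Logstash syslog input above listens on port 514 for both TCP and UDP, so either form should reach it.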
8. Multiline filtering
[root@server1 conf.d]
filter {
multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
Check the result in the browser.
9. Multiline input
[root@server1 conf.d]
input {
stdin {
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
output {
stdout {
codec => rubydebug
}
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
cwevd
cwds
fvewdsfv
[
{
"@timestamp" => "2018-07-30T06:25:38.497Z" ,
"message" => "cwevd\ncwds\nfvewdsfv" ,
"@version" => "1" ,
"tags" => [
[0 ] "multiline"
],
"host" => "server1"
}
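The codec's settings (`pattern => "^\["`, `negate => true`, `what => "previous"`) mean: any line that does NOT start with `[` is appended to the previous event. A rough awk sketch of that grouping, fed a made-up Java-style stack trace:

```shell
# Join continuation lines (those NOT starting with "[") onto the previous event,
# mirroring multiline { pattern => "^\[" negate => true what => "previous" }.
events=$(printf '%s\n' \
  '[2018-07-30 06:25] event one' \
  'java.lang.Exception' \
  '    at foo.bar(Foo.java:1)' \
  '[2018-07-30 06:26] event two' |
  awk '/^\[/ { if (buf != "") print buf; buf = $0; next }
             { buf = buf "\\n" $0 }
       END   { if (buf != "") print buf }')
echo "$events"
```

Four input lines collapse into two events, with the stack trace folded into the first one, just as the rubydebug output above shows for the stdin test.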
Note: to avoid shipping the same log lines twice, Logstash records how far it has read (the position).
Use `ls -i` to check a file's inode. After logs are shipped, a sincedb file is created in the home directory; its fields are: inode, device numbers (major, minor), and the byte offset (position).
[root@server1 ~]
917774 /var/log/elasticsearch/my-application.log
[root@server1 ~]
1045079 0 64768 53
[root@server1 ~]
917774 0 64768 22385
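The sincedb entry above ties inode 917774 to offset 22385. The same identifying fields can be read for any file with `stat`; a self-contained sketch on a throwaway file:

```shell
# Create a scratch file and read the fields sincedb keys on:
tmp=$(mktemp)
printf 'logstash position test\n' > "$tmp"

inode=$(stat -c '%i' "$tmp")   # inode number
dev=$(stat -c '%d' "$tmp")     # device number
size=$(stat -c '%s' "$tmp")    # bytes in the file = the position after a full read

echo "inode=$inode dev=$dev size=$size"
rm -f "$tmp"
```

If the inode changes (e.g. the file is rotated and recreated), Logstash treats it as a new file and starts over.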
10. Processing Apache logs
[root@server1 conf.d]
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.120.1 for ServerName
[ OK ]
[root@server1 conf.d]
server1
[root@server1 conf.d]
[root@server1 patterns]
[root@server1 conf.d]
input {
file {
path => ["/var/log/httpd/access_log" , "/var/log/httpd/error_log" ]
start_position => "beginning"
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
}
output {
elasticsearch {
hosts => ["172.25.120.1" ]
index => "apache-%{+YYYY.MM.dd}"
}
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
Check the result in the browser.
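`%{COMBINEDAPACHELOG}` splits a combined-format access line into named fields (clientip, timestamp, verb, request, response, bytes, referrer, agent). A rough sketch pulling two of those fields from a sample line (the line itself is made up):

```shell
# A made-up combined-format access log line:
line='172.25.120.250 - - [30/Jul/2018:22:10:01 +0800] "GET /index.html HTTP/1.1" 200 3770 "-" "ApacheBench/2.3"'

# With whitespace splitting, the response code is field 9 and the byte count field 10:
status=$(echo "$line" | awk '{print $9}')
bytes=$(echo "$line" | awk '{print $10}')
echo "response=$status bytes=$bytes"
```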
11. Processing nginx logs
[root@server1 conf.d]
[root@server1 conf.d]
input {
file {
path => ["/var/log/nginx/access.log" , "/var/log/nginx/error.log" ]
start_position => "beginning"
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG} %{QS:x_forwarded_for}" }
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => ["172.25.120.1" ]
index => "nginx-%{+YYYY.MM.dd}"
}
}
[root@server1 conf.d]
Settings: Default pipeline workers: 1
Pipeline main started
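The extra `%{QS:x_forwarded_for}` captures the quoted X-Forwarded-For value appended after the combined fields. A sketch of grabbing that trailing quoted string from a sample line (again invented):

```shell
# Made-up nginx access line: combined format plus a quoted X-Forwarded-For at the end
line='10.0.0.5 - - [30/Jul/2018:22:11:02 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "172.25.120.250"'

# %{QS:x_forwarded_for} corresponds to the last quoted string on the line:
xff=$(echo "$line" | grep -oE '"[^"]*"$' | tr -d '"')
echo "x_forwarded_for=$xff"
```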
IV. Kibana
1. Install and configure Kibana
[root@server1 ~]
Preparing...
1:kibana
[root@server1 ~]
elasticsearch.url: "http://172.25.120.1:9200"
kibana.index: ".kibana"
[root@server1 ~]
kibana started
[root@server1 ~]
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 495 9286 1172/node
tcp 0 0 172.25.120.1:5601 172.25.120.250:59964 ESTABLISHED 495 10390 1172/node
tcp 0 0 ::ffff:172.25.120.1:55601 ::ffff:172.25.120.3:9300 ESTABLISHED 498 9144 1125/java
V. Logstash + Redis
Mind the log file permissions. How the Redis stage works: logstash { input: nginx logs, output: redis } -> logstash { input: redis, output: elasticsearch }.
1. Configure Redis
[root@server2 ~]
[root@server2 ~]
[root@server2 redis-3.0.2]
[root@server2 redis-3.0.2]
[root@server2 redis-3.0.2]
[root@server2 redis-3.0.2]
Welcome to the redis service installer
Starting Redis server...
Installation successful!
[root@server2 ~]
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 26705 4067/redis-server *
tcp 0 0 :::6379 :::* LISTEN 0 26703 4067/redis-server *
2. Configure Logstash on the Redis host
[root@server2 ~]
Preparing...
1:logstash
[root@server2 ~]
[root@server2 conf.d]
input {
redis {
host => "172.25.120.2"
port => 6379
data_type => "list"
key => "logstash:redis"
}
}
output {
elasticsearch {
hosts => ["172.25.120.1" ]
index => "nginx-%{+YYYY.MM.dd}"
}
}
[root@server2 conf.d]
logstash started.
3. Configure the nginx host
[root@server1 ~]
redis {
host => ["172.25.120.2" ]
port => 6379
data_type => "list"
key => "logstash:redis"
}
[root@server1 conf.d]
[root@server1 nginx]
access.log error.log
[root@server1 nginx]
[root@server1 nginx]
total 24
-rw-r--r-- 1 nginx adm 16528 Jul 30 22:10 access.log
-rw-r--r-- 1 nginx adm   446 Jul 30 15:32 error.log
[root@server1 ~]
Killing logstash (pid 887) with SIGTERM
Waiting logstash (pid 887) to die...
Waiting logstash (pid 887) to die...
Waiting logstash (pid 887) to die...
logstash stopped.
logstash started.
4. Load testing
[root@server3 ~]
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.25.120.1 (be patient).....done
[root@foundation120 ~]
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.25.120.1 (be patient).....done
5. Refresh the Kibana display page