I have recently been reviewing fundamentals and reached ELK. Since there are quite a few steps involved, I'm writing them up as a blog post for reference; feedback is welcome.
A quick ELK intro: ELK is an open-source and free log-analysis stack made up of Elasticsearch, Logstash, and Kibana. Kibana provides a friendly browser-based web interface for the logs that Logstash ships into Elasticsearch, helping you aggregate, analyze, and search important log data.
Elasticsearch: receives and indexes the logs submitted by Logstash.
Logstash: deployed on the application servers that produce the logs; collects the logs.
Kibana: a browser-based front-end display tool for Elasticsearch, used to present the logs; open source.
How ELK works:
The overall plan:
logstash: 192.168.237.10
elasticsearch: 192.168.237.11 (Elasticsearch can be deployed on several machines with the same steps; I deploy only one here)
kibana: 192.168.237.12
Turn off the firewall and SELinux on all three hosts.
Set up name resolution for the Elasticsearch machine; mine is [ 192.168.237.11 elk-node1 ].
Memory: >= 2 GB.
Once everything is ready, let's get to it!!!! Elasticsearch is deployed first, so that Logstash has something to point at.
1. Elasticsearch base environment setup (if you have several Elasticsearch hosts, run the same steps on each of them) [host: 192.168.237.11]
(1) Download and install the GPG key used to verify the rpm packages (this step can be skipped)
# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
(2) Add the yum repository
# vim /etc/yum.repos.d/elasticsearch.repo
Because I skipped the GPG key step above, I set gpgcheck to 0 in this file. (Why? Verification takes quite a while and I couldn't wait.)
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=0
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
(3) Install elasticsearch and java
# yum install -y elasticsearch java
(4) Check the java environment
# java -version
With that, the base environment for Elasticsearch is ready.
Configure elk-node1:
(1) Create a custom data directory and grant ownership
# mkdir -p /data/es-data
# chown -R elasticsearch:elasticsearch /data/
(2) Edit and extend the Elasticsearch configuration file
# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: wiger    # cluster name
node.name: elk-node1    # node name; best kept the same as the hostname
path.data: /data/es-data    # where the data is stored
path.logs: /var/log/elasticsearch/    # where the logs are stored
bootstrap.mlockall: true    # lock the memory so it is never swapped out (when RAM runs low, pages of idle programs are otherwise moved to the swap partition)
network.host: 0.0.0.0    # listen on all interfaces
http.port: 9200    # HTTP port
discovery.zen.ping.multicast.enabled: false    # disable multicast discovery
discovery.zen.ping.unicast.hosts: ["192.168.237.11"]    # unicast server list: write as many as you have; I only have one
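One note on `bootstrap.mlockall: true`: the elasticsearch user must actually be allowed to lock that much memory, otherwise the startup log warns that mlockall failed and the setting does nothing. On Elasticsearch 2.x this is usually granted in /etc/security/limits.conf; a fragment (assuming the service runs as the `elasticsearch` user created by the rpm):

```
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
```

The rpm's service scripts can also read MAX_LOCKED_MEMORY from /etc/sysconfig/elasticsearch for the same purpose.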
(3) Start elasticsearch and enable it at boot
# systemctl start elasticsearch
# systemctl enable elasticsearch
(4) Verify it. There are many ways to check; I'll just test it over the web by browsing to http://192.168.237.11:9200.
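The web test boils down to querying port 9200. The curl call below is left as a comment because it needs the running node from this post; the sample JSON only illustrates the shape of a healthy answer (the field values are made up for illustration):

```shell
# On any host that can reach elk-node1, probe the cluster health with:
#   curl http://192.168.237.11:9200/_cluster/health?pretty
# A healthy single-node answer resembles the sample below; the "status"
# field should be "green" (or "yellow" while replicas are unassigned):
sample='{"cluster_name":"wiger","status":"green","number_of_nodes":1}'
echo "$sample" | grep -o '"status":"[a-z]*"'
```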
(5) Install plugin 1 (elasticsearch-head)
# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
# systemctl restart elasticsearch
(6) Install plugin 2 (elasticsearch-kopf)
# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
# systemctl restart elasticsearch
(7) Test whether the two plugins work: open http://192.168.237.11:9200/_plugin/head/ and http://192.168.237.11:9200/_plugin/kopf/ in a browser (as long as the UI comes up, ignore the data).
2. Configure Logstash, on a different machine! [host: 192.168.237.10]
(1) Check the environment (this check can be skipped)
(2) Add the yum repository
# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=0
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
(3) Install logstash and java
# yum install -y logstash java
(4) Start logstash
# systemctl start logstash
(5) Test logstash
Approach 1: single-line data-collection test on the command line:
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Explanation of the command:
-e    execute the configuration given on the command line (later we will use -f to run a config file)
input {}    the input block
output {}    the output block
stdin {}    the standard-input plugin
stdout {}    the standard-output plugin
Example:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
wiger is a com
2020-05-23T01:37:49.354Z tubage wiger is a com
www.wiger.club
2020-05-23T01:37:57.545Z tubage www.wiger.club
woaini
2020-05-23T01:38:02.251Z tubage woaini
Variant 2: use the rubydebug codec for detailed output
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
the first to learn elk
{
"message" => "the first to learn elk",
"@version" => "1",
"@timestamp" => "2020-05-23T01:40:10.977Z",
"host" => "tubage"
}
Variant 3: write the content into Elasticsearch
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.237.11:9200"]} }'
Variant 4: write into Elasticsearch and also keep a copy on stdout
# /opt/logstash/bin/logstash -e 'input { stdin{} }
output {
  elasticsearch { hosts => ["192.168.237.11:9200"] }
  stdout { codec => rubydebug }
}'
Approach 2: data-collection test by writing a Logstash config file
Write the Logstash config file.
Example 1:
# vim /etc/logstash/conf.d/01-logstash.conf
input { stdin { } }
output {
elasticsearch { hosts => ["192.168.237.11:9200"]}
stdout { codec => rubydebug }
}
# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
xian
{
"message" => "xian",
"@version" => "1",
"@timestamp" => "2020-05-23T01:54:54.395Z",
"host" => "tubage"
}
Example 2:
# vim /etc/logstash/conf.d/02-logstash.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["192.168.237.11:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
The execution and display steps are the same as in Example 1, so I won't repeat them here.
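The `index => "system-%{+YYYY.MM.dd}"` line above makes Logstash roll over to a new index every day, named after the event's @timestamp. The shell `date` command can mimic the naming scheme (illustration only; Logstash does the real expansion from the event time, not the wall clock):

```shell
# Print what today's index name would look like under system-%{+YYYY.MM.dd}:
date +"system-%Y.%m.%d"
```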
3. Installing and using Kibana, on yet another machine! [host: 192.168.237.12]
(1) Install Kibana
# cd /usr/local/src    # the usual directory for source installs
# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz    # download the tarball
# tar zxf kibana-4.3.1-linux-x64.tar.gz    # unpack it
# mv kibana-4.3.1-linux-x64 /usr/local/    # move it into place
# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana    # create a symlink for easy access
# cd /usr/local/kibana/config
# cp kibana.yml kibana.yml.bak
# vim kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.237.11:9200"
kibana.index: ".kibana"
(2) Run it
# /usr/local/kibana/bin/kibana
Startup messages:
log [18:23:19.867] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [18:23:19.911] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [18:23:19.941] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [18:23:19.953] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [18:23:19.963] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [18:23:19.995] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [18:23:20.004] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [18:23:20.010] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
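Running the binary in the foreground like this ties Kibana to your terminal; it stops when you log out. The Kibana 4.x tarball ships no service file, so if you want it managed like the other services in this post you have to write a unit yourself. A minimal sketch (the file name kibana.service and its contents are my own suggestion, not part of the Kibana package):

```ini
# /etc/systemd/system/kibana.service
[Unit]
Description=Kibana 4
After=network.target

[Service]
ExecStart=/usr/local/kibana/bin/kibana
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, it can then be started and enabled with `systemctl start kibana` and `systemctl enable kibana`.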
(3) Access Kibana in the browser (http://192.168.237.12:5601)
How to use Kibana:
1. Add the index name (index pattern).
2. Then click Discover at the top and inspect the data there.
3. To view the logs, click "Discover" -> "message", then click the "add" button behind it.
4. The log content shown on the right then carries the message and path attributes.
5. Clicking the hidden << behind a log attribute on the right collapses that content forward.
6. To add a new log-collection item, click Settings -> +Add New, for example the system syslog index. Don't forget the trailing *.
7. To delete a log-collection item in Kibana, just click its delete icon.
Note: if you open Kibana to view logs and no content appears, only "No results found"
as in the screenshot below, it means the logs you want produced no entries in the currently selected time range; click the clock in the top-right corner to adjust the time window.
To sum up: Logstash only collects log data and hands it over to Elasticsearch, but Elasticsearch can only process and retrieve the logs line by line. Kibana sits on top to break the logs into finer-grained fields, making it easy to search for exactly the content you want.
I'm new to this; comments and corrections are very welcome.