Overview and Deployment of the ELK Log Analysis System
I. The ELK log analysis system
ELK is the combination of three open-source tools: Elasticsearch, Logstash, and Kibana.
1. Log processing steps
- Centralize log collection (Beats)
- Format the logs (Logstash) and send the formatted data to Elasticsearch (see the pipeline sketch after this list)
- Index and store the formatted data (Elasticsearch)
- Present the data in a front-end UI (Kibana)
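These four steps map onto a single Logstash pipeline configuration. The following is only an illustrative sketch, not part of this deployment: it assumes a Beats shipper sending to port 5044, Apache-style log lines for the grok filter, and the Elasticsearch address used later in this lab; the index name is hypothetical.
input {
    beats {
        port => 5044                                        #centralized collection: Beats ships logs to this port
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }    #formatting: parse raw Apache-style lines into fields
    }
}
output {
    elasticsearch {
        hosts => ["192.168.117.10:9200"]                    #indexing and storage in Elasticsearch
        index => "filebeat-%{+YYYY.MM.dd}"                  #hypothetical index name; Kibana then visualizes this index
    }
}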
2. Elasticsearch
Elasticsearch is a search server based on Lucene. It provides a distributed, multi-tenant full-text search engine over a RESTful web interface.
Features
- Near-real-time search
- Clusters
- Nodes
- Indices
Shards and replicas (both counts can be set when an index is created; see the example after this list)
Shard characteristics:
- Horizontal splitting and scaling, increasing storage capacity
- Distributed, parallel operations across shards, improving performance and throughput
Replica characteristics:
- High availability in case a shard or node fails; for this reason, a replica must be placed on a different node than its primary shard
- Better performance and higher throughput, since searches can run in parallel on all replicas
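A minimal sketch of setting these values through the RESTful interface, assuming a local node on port 9200 and a hypothetical index named demo-index (Elasticsearch 5.x defaults to 5 shards and 1 replica):
#create a hypothetical index "demo-index" with 3 primary shards and 1 replica per shard
curl -XPUT 'localhost:9200/demo-index?pretty' -H 'Content-Type: application/json' -d '
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'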
3. Logstash
- An open-source data collection engine that gathers, parses/transforms, and forwards log data (for example, to Elasticsearch)
- A pipeline consists of three stages: input, filter, and output
4. Kibana
- An open-source analytics and visualization platform for Elasticsearch
- Search and view data stored in Elasticsearch indices
- Perform advanced data analysis and present it with a variety of charts
II. Deploying the ELK log analysis system
1. Lab environment
Host | OS | IP address | Software |
---|---|---|---|
node1 | CentOS 7 | 192.168.117.10 | Elasticsearch / Kibana |
node2 | CentOS 7 | 192.168.117.20 | Elasticsearch |
apache | CentOS 7 | 192.168.117.30 | httpd / Logstash |
Windows host | Windows | 192.168.117.1 | - |
2. Disable the firewall and change the hostnames
node1:192.168.117.10
node2:192.168.117.20
apache:192.168.117.30
systemctl stop firewalld.service
setenforce 0
hostnamectl set-hostname node1    #set to node1, node2, and apache on the respective hosts
su -
3. Install and enable the Elasticsearch service
node1:192.168.117.10
node2:192.168.117.20
echo '192.168.117.10 node1' >> /etc/hosts
echo '192.168.117.20 node2' >> /etc/hosts
cd /opt
rz elasticsearch-5.5.0.rpm
rpm -ivh elasticsearch-5.5.0.rpm
systemctl daemon-reload
systemctl enable elasticsearch.service
4. Configure the Elasticsearch environment
node1:192.168.117.10
node2:192.168.117.20
cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
vim /etc/elasticsearch/elasticsearch.yml
#line 17
cluster.name: my-elk-cluster
#line 23
node.name: node1    #set this to node2 on the node2 host
#line 33
path.data: /data/elk_data
#line 37
path.logs: /var/log/elasticsearch/
#line 43
bootstrap.memory_lock: false
#line 55
network.host: 0.0.0.0
#line 59
http.port: 9200
#line 68
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
grep -v "^#" /etc/elasticsearch/elasticsearch.yml
mkdir -p /data/elk_data
chown elasticsearch:elasticsearch /data/elk_data/
systemctl start elasticsearch
netstat -antp |grep 9200
- From the Windows host, open the nodes in a browser to view node information
http://192.168.117.10:9200
http://192.168.117.20:9200
- Check the cluster health status (a curl alternative is shown after these URLs)
http://192.168.117.10:9200/_cluster/health?pretty
http://192.168.117.20:9200/_cluster/health?pretty
http://192.168.117.10:9200/_cluster/state?pretty
http://192.168.117.20:9200/_cluster/state?pretty
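The same checks can also be run with curl from either node (standard Elasticsearch APIs):
curl http://192.168.117.10:9200/_cluster/health?pretty    #status should be green once both nodes have joined
curl http://192.168.117.10:9200/_cat/nodes?v              #list the nodes that are part of the cluster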
5. Install the Elasticsearch-head plugin
node1:192.168.117.10
node2:192.168.117.20
5.1 Compile and install the Node.js dependency packages
yum -y install gcc gcc-c++ make
#upload the package node-v8.2.1.tar.gz to /opt
cd /opt
tar xzvf node-v8.2.1.tar.gz
cd node-v8.2.1/
./configure && make && make install
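If the build completed successfully, node and npm should now be on the PATH; a quick check:
node -v     #should print v8.2.1
npm -v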
5.2 Install phantomjs
#upload the package phantomjs-2.1.1-linux-x86_64.tar.bz2 to /opt
cd /opt
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src/
cd /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin
cp phantomjs /usr/local/bin
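Optionally confirm the binary was copied correctly:
phantomjs --version     #should print 2.1.1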
5.3 Install elasticsearch-head
#upload the package elasticsearch-head.tar.gz to /opt
cd /opt
tar zxvf elasticsearch-head.tar.gz -C /usr/local/src/
cd /usr/local/src/elasticsearch-head/
npm install
5.4 Modify the main configuration file
vim /etc/elasticsearch/elasticsearch.yml
#-------add the following at the end of the file--------
http.cors.enabled: true
http.cors.allow-origin: "*"
systemctl restart elasticsearch.service
5.5 Start elasticsearch-head
#The service must be started from the unpacked elasticsearch-head directory; the process reads the gruntfile.js in that directory and may fail to start otherwise.
cd /usr/local/src/elasticsearch-head/
npm run start &
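elasticsearch-head serves its web UI on port 9100; if it started correctly, the port should be listening:
netstat -antp | grep 9100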
5.6 Check the cluster status with the elasticsearch-head plugin
windows:192.168.117.1
http://192.168.117.10:9100
In the field next to "Elasticsearch", enter
http://192.168.117.10:9200
http://192.168.117.20:9100
In the field next to "Elasticsearch", enter
http://192.168.117.20:9200
5.7 Create an index
node1:192.168.117.10
curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
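To confirm the document was written, it can be queried back on node1 with the standard document GET API:
curl -XGET 'localhost:9200/index-demo/test/1?pretty'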
windows:192.168.117.1
- View the index information
http://192.168.117.10:9100
6. Install Logstash
apache:192.168.117.30
6.1 Install and start the httpd service
yum -y install httpd
systemctl start httpd
6.2 Install the Java environment
yum install -y java
6.3 Install Logstash
#upload logstash-5.5.1.rpm to /opt
cd /opt
rpm -ivh logstash-5.5.1.rpm
systemctl start logstash.service
systemctl enable logstash.service
#create a symlink so the logstash command is on the PATH
ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
6.4 Test Logstash on the command line
Option | Purpose |
---|---|
-f | Specify the Logstash configuration file to use |
-e | Followed by a string that is treated as the Logstash configuration (if the string is "", stdin is used as input and stdout as output by default) |
-t | Test whether the configuration file is valid, then exit |
logstash -e 'input { stdin{} } output { stdout{} }'
- Use the rubydebug codec for detailed output; a codec is an encoder/decoder
logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'
- Use Logstash to write data into Elasticsearch
logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.117.10:9200"] } }'
6.5 View the index information
windows:192.168.117.1
http://192.168.117.10:9100    #the data written in the previous step appears under an index named logstash-YYYY.MM.dd by default
6.6 Configure Logstash on the Apache host to feed Elasticsearch
apache:192.168.117.30
chmod o+r /var/log/messages    #grant other users read access to the system log file
vim /etc/logstash/conf.d/system.conf
input {
    file{
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.117.10:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
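Optionally, before restarting the service, the -t option from the table above can be used to validate the new file:
logstash -f /etc/logstash/conf.d/system.conf -t     #test the configuration and exit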
systemctl restart logstash.service
7. Install Kibana
node1:192.168.117.10
#upload kibana-5.5.1-x86_64.rpm to /opt
cd /opt
rpm -ivh kibana-5.5.1-x86_64.rpm
cd /etc/kibana/
cp kibana.yml kibana.yml.bak
vim kibana.yml
#line 2; uncomment; the port Kibana listens on (default 5601)
server.port: 5601
#line 7; uncomment and modify; the address Kibana listens on
server.host: "0.0.0.0"
#line 21; uncomment and modify; the Elasticsearch instance to connect to
elasticsearch.url: "http://192.168.117.10:9200"
#line 30; uncomment; add the .kibana index in Elasticsearch
kibana.index: ".kibana"
systemctl start kibana.service
systemctl enable kibana.service
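Kibana can take a few seconds to start; check that port 5601 is listening before opening the browser:
netstat -antp | grep 5601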
windows:192.168.117.1
192.168.117.10:5601
- Ingest the Apache log files from the apache host
apache:192.168.117.30
cd /etc/logstash/conf.d/
vim apache_log.conf
input {
    file{
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file{
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.117.10:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.117.10:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
/usr/share/logstash/bin/logstash -f apache_log.conf
windows:192.168.117.1
http://192.168.117.30    #visit the Apache page first so that entries appear in access_log
Open a browser and go to http://192.168.117.10:9100/ to view the index information
You should see the apache_error-2021.05.12 and apache_access-2021.05.12 indices
Open a browser and go to http://192.168.117.10:5601
Click the Management entry in the lower left -> Index Patterns -> Create Index Pattern
Create index patterns for apache_error-* and apache_access-* respectively