Filebeat installation
Environment:
filebeat-6.3.1-linux-x86_64
Download: https://www.elastic.co/downloads/beats/filebeat
Installation steps:
1. Upload the package:
filebeat-6.3.1-linux-x86_64.tar.gz
Extract it:
tar zxvf filebeat-6.3.1-linux-x86_64.tar.gz
cd filebeat-6.3.1-linux-x86_64/
2. Configure
vim filebeat.yml
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/logs/*.log
  tags: ["test-wd-log"]
Comment out output.elasticsearch and the settings that follow it, and point the output at Kafka instead:
output.kafka:
  enabled: true
  hosts: ["192.168.0.11:9092","192.168.0.12:9092","192.168.0.13:9092"]
  topic: "test-log"
Then start the service in the background:
nohup ./filebeat -c filebeat.yml &
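To confirm that events are actually reaching the topic, a console consumer can be run against one of the brokers. This is only a sketch: it assumes a Kafka installation is available at a path here called $KAFKA_HOME, and it reuses the broker address and topic from the output.kafka settings above.

```shell
# Read one message from the topic Filebeat writes to (broker and topic as
# configured above). $KAFKA_HOME is a placeholder for the Kafka install dir.
"$KAFKA_HOME/bin/kafka-console-consumer.sh" \
  --bootstrap-server 192.168.0.11:9092 \
  --topic test-log \
  --from-beginning \
  --max-messages 1
```

Each message should be a JSON event whose message field carries the original log line, with the tags set in the input configuration.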
Elasticsearch installation
Environment:
jdk1.8
elasticsearch-6.3.1
Download: https://www.elastic.co/downloads/elasticsearch
Firewall disabled
Installation steps:
1. Create a user named es (Elasticsearch refuses to start as root).
2. As root, adjust the system parameters:
vi /etc/sysctl.conf
vm.max_map_count=655360
vi /etc/security/limits.conf
es soft nproc 4096
es hard nproc 4096
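After editing the two files above, the kernel setting takes effect after sysctl -p (or a reboot), and the process limit applies to new login sessions of the es user. A read-only sanity check might look like this:

```shell
# Read the mmap-count limit straight from /proc; it should report 655360
# once sysctl -p has been run.
cat /proc/sys/vm/max_map_count
# Max user processes for the current shell; it should be at least 4096
# for the es user after re-login.
ulimit -u
```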
3. Download the package
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.1.tar.gz
Extract it:
tar -xzf elasticsearch-6.3.1.tar.gz
As root, grant ownership of the Elasticsearch root directory to es:
chown -R es /home/wd/elasticsearch
Switch to the es user and edit the Elasticsearch configuration:
vi elasticsearch/config/elasticsearch.yml
Append the following settings:
cluster.name: ES-wd-test
node.name: node-1
path.data: /home/wd/elasticsearch/data
path.logs: /home/wd/elasticsearch/logs
network.host: 10.99.2.73
http.port: 9200
Set bootstrap.system_call_filter to false; note that it belongs in the Memory section:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
Then start Elasticsearch in the background:
nohup bin/elasticsearch &
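Once the node is up, the usual way to verify it is to query the cluster health API, using the network.host and http.port values configured above:

```shell
# Query cluster health; "status" should be green or yellow for a healthy
# single-node cluster (yellow only means replica shards are unassigned).
curl -s 'http://10.99.2.73:9200/_cluster/health?pretty'
```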
Logstash installation
Environment:
logstash-6.3.1
Download: https://www.elastic.co/downloads/logstash
jdk1.8
Installation steps:
1. Upload the package logstash-6.3.1.tar.gz
Extract it:
tar zxvf logstash-6.3.1.tar.gz
2. Configure
cd logstash-6.3.1/config
Create a new configuration file:
vim first.conf
Contents:
input {
  kafka {
    bootstrap_servers => "10.97.14.111:9092,10.97.14.112:9092,10.97.14.113:9092"
    client_id => "wd"
    group_id => "logstash-test-wd"
    auto_offset_reset => "latest"
    consumer_threads => 1
    decorate_events => true
    topics => ["test-log-01"]
    type => "kafka-source"
  }
}
output {
  elasticsearch {
    hosts => ["10.99.2.73:9200"]
    index => "xx_log-%{+YYYY.MM.dd}"
  }
}
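The %{+YYYY.MM.dd} in the index name is a Logstash date-format sprintf reference: events are routed into one index per day, derived from each event's @timestamp (in UTC). Today's index name can be reproduced in the shell:

```shell
# Emulate the daily index name Logstash builds from xx_log-%{+YYYY.MM.dd}
# (Logstash uses the event's @timestamp in UTC; here, the current UTC date).
printf 'xx_log-%s\n' "$(date -u +%Y.%m.%d)"
```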
Save and exit. Note: the configuration file must not contain Chinese comments, or Logstash will report an error.
Then start the service from the Logstash root directory (in the background):
nohup bin/logstash -f config/first.conf --config.reload.automatic &
Check the log:
tail -f logs/logstash-plain.log
Startup has succeeded once the log shows:
Successfully started Logstash API endpoint {:port=>9600}
and no further errors appear.
Kibana installation
Environment:
kibana-6.3.1
jdk1.8
Download: https://www.elastic.co/downloads/kibana
Installation steps:
1. Upload the package: kibana-6.3.1-linux-x86_64.tar.gz
Extract it:
tar zxvf kibana-6.3.1-linux-x86_64.tar.gz
cd kibana-6.3.1-linux-x86_64
vim config/kibana.yml
Add the settings:
# IP address the Kibana server binds to
server.host: "10.99.2.56"
# Elasticsearch URL
elasticsearch.url: "http://10.99.2.73:9200"
With the configuration in place, start Kibana in the background:
nohup bin/kibana &
Then browse to http://10.99.2.56:5601/.
Note: ps -ef|grep kibana does not find the Kibana process. To stop the service, use
lsof -i:5601 to find the process listening on that port, kill it, and then restart.
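The stop procedure can be wrapped in a small script. This is a sketch; it assumes Kibana listens on its default port 5601 and that lsof is installed on the host.

```shell
# Kibana runs as a node process, so "ps -ef | grep kibana" finds nothing.
# Find the PID listening on the Kibana port and stop it instead.
PID=$(lsof -t -i:5601 2>/dev/null || true)
if [ -n "$PID" ]; then
    kill "$PID" && echo "stopped process $PID on port 5601"
else
    echo "no process listening on port 5601"
fi
```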