ELK Cluster Deployment
I. Basic Environment
Hostname | IP Address | Role | OS | Software Version |
---|---|---|---|---|
els-node1 | 192.168.119.189 | elasticsearch (search engine) | CentOS 7.8 | elasticsearch 7.6.2 |
els-node2 | 192.168.119.190 | elasticsearch (search engine) | CentOS 7.8 | elasticsearch 7.6.2 |
els-node3 | 192.168.119.191 | elasticsearch (search engine) | CentOS 7.8 | elasticsearch 7.6.2 |
logstash | 192.168.119.192 | logstash (log processing) | CentOS 7.8 | logstash 7.6.2 |
kibana | 192.168.119.193 | kibana (web UI) | CentOS 7.8 | kibana 7.6.2 |
web | 192.168.119.194 | filebeat (log collection) | CentOS 7.8 | filebeat 7.6.2 |
1. Basic environment setup
- Configure hostnames
- Configure IP addresses
- Disable SELinux
- hosts file (identical on every node)
192.168.119.189 els-node1
192.168.119.190 els-node2
192.168.119.191 els-node3
192.168.119.192 logstash
192.168.119.193 kibana
192.168.119.194 web
- Synchronize time
# Time server: 192.168.119.189 (els-node1)
$ vim /etc/chrony.conf
allow 192.168.119.0/24
# All other servers
$ vim /etc/chrony.conf
# Comment out the existing server lines and add:
server els-node1 iburst
# Restart the chrony service on every node
$ systemctl enable chronyd
$ systemctl restart chronyd
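Once the hosts file is in place, a quick loop can confirm that every node name actually resolves before the cluster software is installed. A minimal sketch (the host list mirrors the table above; `getent hosts` prints the matching entry, or nothing for an unknown name):

```shell
#!/bin/sh
# Check that every node name from the table resolves locally.
resolved=""
unresolved=""
for h in els-node1 els-node2 els-node3 logstash kibana web; do
  if getent hosts "$h" > /dev/null 2>&1; then
    resolved="$resolved $h"
    echo "$h: ok"
  else
    unresolved="$unresolved $h"
    echo "$h: NOT resolvable - check /etc/hosts"
  fi
done
```

Run this on each machine; any name reported as not resolvable means that node's /etc/hosts was not updated.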
II. Elasticsearch Cluster
1. Java environment
$ yum install java java-1.8.0-openjdk-devel -y
$ java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
2. Install Elasticsearch
# Package
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm
# Install
$ yum localinstall elasticsearch-7.6.2-x86_64.rpm -y
3. Adjust the JVM heap
$ vim /etc/elasticsearch/jvm.options
-Xms1g
-Xmx1g
# Default is 1 GB; size it to the machine. Keep -Xms equal to -Xmx; production clusters typically use up to about 32 GB (staying below the JVM's compressed-oops threshold).
4. Edit the Elasticsearch configuration file
$ vim /etc/elasticsearch/elasticsearch.yml
cluster.name: test-elk-cluster
node.name: els-node1 # unique on each node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.119.189 # each node's own IP
http.port: 9200
discovery.seed_hosts: ["192.168.119.189","192.168.119.190","192.168.119.191"]
cluster.initial_master_nodes: ["els-node1","els-node2","els-node3"]
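Since the three node configs differ only in `node.name` and `network.host`, they can be generated from one template instead of hand-editing each file. A sketch that writes the candidate files to /tmp for review (not directly to /etc/elasticsearch):

```shell
#!/bin/sh
# Generate one elasticsearch.yml per node; only node.name and network.host vary.
i=1
for ip in 192.168.119.189 192.168.119.190 192.168.119.191; do
  cat > "/tmp/elasticsearch-els-node$i.yml" <<EOF
cluster.name: test-elk-cluster
node.name: els-node$i
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: $ip
http.port: 9200
discovery.seed_hosts: ["192.168.119.189","192.168.119.190","192.168.119.191"]
cluster.initial_master_nodes: ["els-node1","els-node2","els-node3"]
EOF
  i=$((i+1))
done
# Show the per-node differences at a glance
grep -H "node.name" /tmp/elasticsearch-els-node?.yml
```

Copy each generated file to the matching node after reviewing it.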
5. Start the cluster
# Open the firewall for the service
$ firewall-cmd --add-service=elasticsearch --permanent
$ firewall-cmd --reload
# Start Elasticsearch on every node
$ systemctl enable elasticsearch
$ systemctl start elasticsearch
$ systemctl status elasticsearch
6. Fixing the "memory is not locked" startup error
$ vim /etc/systemd/system.conf
DefaultLimitMEMLOCK=infinity
# Required because bootstrap.memory_lock is true; a reboot is needed for the change to take effect
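After the reboot you can confirm the limit took effect. A small sketch: `ulimit -l` shows the max-locked-memory limit for the current session (expected: `unlimited`), and the commented curl shows the check against the running process (assumes the cluster above is reachable; not runnable elsewhere):

```shell
#!/bin/sh
# "unlimited" is what elasticsearch needs for bootstrap.memory_lock: true.
memlock=$(ulimit -l)
echo "memlock limit: $memlock"
# On a cluster node, also verify the running elasticsearch process:
# curl -s 'http://192.168.119.189:9200/_nodes?filter_path=**.mlockall&pretty'
```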
7. Check cluster health
$ curl http://192.168.119.189:9200/_cluster/health?pretty
{
"cluster_name" : "test-elk-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 10,
"active_shards" : 20,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
# You can also open http://192.168.119.189:9200/_cluster/state?pretty in a browser to view detailed cluster state
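For scripted checks (cron jobs, monitoring) it is handy to reduce the health response to just the status field plus an exit code. A minimal sketch; the inline JSON stands in for the real `curl .../_cluster/health?pretty` response shown above:

```shell
#!/bin/sh
# Extract the "status" value from a cluster health response.
es_status() {
  grep -o '"status" *: *"[a-z]*"' | cut -d'"' -f4
}
# Stand-in for: curl -s 'http://192.168.119.189:9200/_cluster/health?pretty'
json='{ "cluster_name" : "test-elk-cluster", "status" : "green" }'
status=$(echo "$json" | es_status)
echo "cluster status: $status"
if [ "$status" = "green" ]; then echo "cluster OK"; else echo "cluster DEGRADED"; fi
```

A yellow status (replicas unassigned) would print "cluster DEGRADED" here, which is exactly what you want a monitoring hook to flag.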
III. The elasticsearch-head Plugin
1. Install on els-node1 only
# Install npm (plus git, which the next step needs)
$ yum install npm git -y
2. 安装elasticsearch-head
$ git clone git://github.com/mobz/elasticsearch-head.git
$ cd elasticsearch-head
$ npm install
# 需要联网,安装一段时间
# 中间会出错,提示解压一个包失败,手动解压就好
$ cd /tmp/phantomjs/
$ bzip2 -d phantomjs-2.1.1-linux-x86_64.tar.bz2
$ tar -xf phantomjs-2.1.1-linux-x86_64.tar
# 然后再次执行安装命令
$ npm install
3. Start elasticsearch-head
$ npm run start &
[1] 3281
> elasticsearch-head@0.0.0 start /root/elasticsearch-head
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
$ netstat -antp | grep 9100
tcp 0 0 0.0.0.0:9100 0.0.0.0:* LISTEN 3732/grunt
# Open port 9100
$ firewall-cmd --add-port=9100/tcp --permanent
$ firewall-cmd --reload
4. Edit the Elasticsearch main configuration file
# Add the following two lines (allow cross-origin requests from the head UI)
$ vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
# Restart the service
$ systemctl restart elasticsearch
IV. Verifying Access
1. Open the page served by elasticsearch-head
# URL: http://192.168.119.189:9100/
2. Create an index
$ curl -XPUT '192.168.119.189:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{ "user": "zhangsan","mesg":"hello world" }'
{
"_index" : "index-demo",
"_type" : "test",
"_id" : "1",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 0,
"_primary_term" : 1
}
V. Logstash
1. Install the Java environment
$ yum install java java-1.8.0-openjdk-devel -y
2. Install Logstash
# Package
https://artifacts.elastic.co/downloads/logstash/logstash-7.6.2.rpm
# Install
$ yum localinstall logstash-7.6.2.rpm -y
# Create a convenience symlink
$ ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
# Enable and start the service
$ systemctl enable logstash
$ systemctl start logstash
3. Basic Logstash usage
Example 1:
# Start logstash with -e (config given on the command line): input/stdin is a plugin that reads standard input; output/stdout writes events to standard output
$ logstash -e 'input { stdin{} } output { stdout{} }'
[INFO ] 2021-04-06 04:10:22.079 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
"message" => "www.baidu.com",
"@version" => "1",
"host" => "logstash",
"@timestamp" => 2021-04-06T08:10:21.621Z
}
www.sina.com.cn
{
"message" => "www.sina.com.cn",
"@version" => "1",
"host" => "logstash",
"@timestamp" => 2021-04-06T08:11:06.833Z
}
{
"message" => "",
"@version" => "1",
"host" => "logstash",
"@timestamp" => 2021-04-06T08:11:20.306Z
}
Example 2:
# Use logstash to write events into Elasticsearch
$ logstash -e 'input { stdin{} } output { elasticsearch { hosts=> ["192.168.119.189:9200"]} }'
[INFO ] 2021-04-06 04:12:27.036 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-04-06 04:12:28.564 [Ruby-0-Thread-5: :1] elasticsearch - Installing ILM policy {"policy"=>{"phases"=>{"hot"=>{"actions"=>{"rollover"=>{"max_size"=>"50gb", "max_age"=>"30d"}}}}}} to _ilm/policy/logstash-policy
www.baidu.com
www.sina.com.cn
www.163.com
www.taobao.com
4. Collecting system logs
$ setfacl -m u:logstash:r /var/log/messages
$ cd /etc/logstash/conf.d/
$ vim system.conf
input {
  file {                                # read from a file
    path => "/var/log/messages"         # path of the file to read
    type => "system"                    # tag events with a type
    start_position => "beginning"       # read the file from the beginning
  }
}
output {
  elasticsearch {                       # send events to Elasticsearch
    hosts => ["192.168.119.189:9200"]   # Elasticsearch host
    index => "messages-%{+YYYY.MM.dd}"  # daily index name
  }
}
# Restart the logstash service
$ systemctl restart logstash
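The `%{+YYYY.MM.dd}` sprintf in the `index` option creates one index per day, named from each event's `@timestamp`, which Logstash stores in UTC. A sketch that prints the index today's events will land in (hence `date -u`, not local time):

```shell
#!/bin/sh
# Mirror logstash's daily index naming for the pipeline above.
index="messages-$(date -u +%Y.%m.%d)"
echo "today's index: $index"
```

Knowing the exact name is useful when checking in elasticsearch-head, or when setting up index cleanup later.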
VI. Beats: Lightweight Data Shippers (web node)
1. Download and install Filebeat
# Package
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm
# Install
$ yum localinstall filebeat-7.6.2-x86_64.rpm -y
2. Configuration example
$ vim /etc/filebeat/filebeat.yml
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.119.189:9200"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
# Enable the sample module config (equivalent to: filebeat modules enable apache)
$ cd /etc/filebeat/modules.d/
$ cp apache.yml.disabled apache.yml
$ vim apache.yml
# Module: apache
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.6/filebeat-module-apache.html
- module: apache
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:    # set this if Apache logs are not in the default install path
  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
2.1. Start the service
# Optional sanity check first: filebeat test config && filebeat test output
$ systemctl enable filebeat
$ systemctl start filebeat
2.2. Web service
# Install Apache
$ yum install httpd -y
# Start the service
$ systemctl enable httpd
$ systemctl start httpd
# Create a test page
$ echo '<h1>test pages</h1>' > /var/www/html/index.html
$ curl 192.168.119.194
<h1>test pages</h1>
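To give both the access and the error log something to report, request a few paths including one that does not exist (the 404 is recorded in the access log with its status code). A sketch; the curl line is commented out because it only works from a host on the lab network:

```shell
#!/bin/sh
# Build a few test URLs against the web node, including a missing page.
WEB=192.168.119.194
urls=""
for path in / /index.html /missing-page; do
  url="http://$WEB$path"
  urls="$urls $url"
  echo "GET $url"
  # curl -s -o /dev/null -w '%{http_code}\n' "$url"   # run from a lab host
done
```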
2.3. View the results
# Make a few requests to 192.168.119.194, then look for the new filebeat index
VII. Kibana
1. Install Kibana
# Package
https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-x86_64.rpm
# Install
$ yum localinstall kibana-7.6.2-x86_64.rpm -y
2. Edit the Kibana main configuration file
$ vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.119.189:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"
3. Start the service
# Open the firewall for the service
$ firewall-cmd --add-service=kibana --permanent
$ firewall-cmd --reload
# Start the service
$ systemctl start kibana
$ systemctl enable kibana
4. Access Kibana
4.1 Open http://192.168.119.193:5601 in a browser
4.2 Add the log data (define an index pattern for the filebeat index)
4.3 Generate some access logs
# On any node, create a script that hits the web server repeatedly
$ vim web_cache.sh
#!/bin/bash
for i in $(seq 1 100)
do
  curl http://192.168.119.194 &> /dev/null
done
$ bash web_cache.sh
4.4 Data visualization
4.5 Monitoring cluster state from Kibana
# Edit elasticsearch.yml on els-node1
$ vim /etc/elasticsearch/elasticsearch.yml
xpack.monitoring.collection.enabled: true
# Restart elasticsearch for the setting to take effect