ELK Log Server: filebeat + elasticsearch + kibana Installation and Configuration

Goal: build a basic log collection server

How it works:

1. rsyslog collects all logs on the local machine (see the sketch after this list)

2. filebeat picks up the logs collected by rsyslog and ships them to elasticsearch

3. elasticsearch indexes and analyzes the logs

4. kibana displays and visualizes the results from elasticsearch
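For reference, a minimal rsyslog sketch for step 1: it assumes the rsyslog config from the previous post funnels every log into /var/log/lxm.log, which is the path filebeat reads below (the drop-in file name is hypothetical):

# /etc/rsyslog.d/lxm.conf (hypothetical file name): send every facility/priority to one file
*.*     /var/log/lxm.log

# Apply the change
systemctl restart rsyslog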

 

Yesterday I completed the first step, installing rsyslog; today I'll install filebeat:

1. Install and configure the JDK
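A minimal sketch for this step, assuming a CentOS/RHEL host and the OpenJDK 1.8 package from the base repos (Elasticsearch 7.x also ships a bundled JDK, so this mainly serves as a fallback):

# Install OpenJDK 1.8 (package name is distro-specific)
yum install java-1.8.0-openjdk -y

# Verify the installation
java -version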

2. Search for ELK and go to the Elastic website: Products -> View all downloads -> Beats -> Filebeat -> Install with yum

#Step 1: import the GPG public key
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

#Step 2: configure the yum repo under /etc/yum.repos.d/; the file name is up to you, e.g. elk.repo
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

#Step 3: install filebeat
yum install filebeat

#Step 4: enable filebeat at boot
systemctl enable filebeat

#Edit the filebeat configuration file (for the RPM install: /etc/filebeat/filebeat.yml)
#inputs section
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/lxm.log

#output section
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.161.70:9200"]

3. Install elasticsearch

#The yum repo was already set up when installing filebeat, so just install directly
yum install elasticsearch -y
#Edit the elasticsearch configuration (/etc/elasticsearch/elasticsearch.yml)
#Network section
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#Discovery section
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
cluster.initial_master_nodes: ["node-1"]
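One caveat, added here as a note: each entry in cluster.initial_master_nodes must match a node.name, and node.name defaults to the hostname (the response below reports "name" : "localhost"). A minimal sketch of the two lines that need to agree in elasticsearch.yml:

# /etc/elasticsearch/elasticsearch.yml -- relevant lines only
node.name: node-1                          # set explicitly so it matches the bootstrap list
cluster.initial_master_nodes: ["node-1"]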


#Reload systemd units
systemctl daemon-reload


#Start the service
systemctl start elasticsearch 

#Check whether ports 9200/9300 are listening to confirm the service is up
netstat -tunpl |egrep "9200|9300"


#Access from a browser
http://192.168.161.70:9200/

The response:
{
  "name" : "localhost",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.8.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89",
    "build_date" : "2020-07-21T16:40:44.668009Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

This shows the service is up and responding. (Note: "cluster_uuid" : "_na_" in the response means the cluster has not finished forming yet; see the node.name note above.)
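The same check can be run from the command line; the cluster health API additionally shows whether a master has been elected (a sketch):

# Root endpoint: confirms the HTTP layer is listening
curl http://192.168.161.70:9200/

# Cluster health: "status" should be green or yellow once the cluster has formed
curl "http://192.168.161.70:9200/_cluster/health?pretty"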

4. Install and configure kibana

#Install kibana
yum install kibana -y 

#Configure kibana (/etc/kibana/kibana.yml)
server.port: 5601
server.host: "192.168.161.70"
elasticsearch.hosts: ["http://192.168.161.70:9200"]

#Reload systemd units
systemctl daemon-reload

#Start the kibana service
systemctl start kibana

#Check the process and port to confirm kibana started
ps -ef |grep kibana  
netstat -tunpl |egrep "5601"


#Access kibana in a browser
http://ip:5601
#Then complete the initial setup in the web UI (e.g. create an index pattern such as filebeat-* to browse the shipped logs)
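Two optional finishing touches, sketched here: enable the remaining services at boot (mirroring the filebeat step) and query Kibana's status API from the command line:

# Enable elasticsearch and kibana at boot, like filebeat earlier
systemctl enable elasticsearch
systemctl enable kibana

# Kibana status endpoint
curl http://192.168.161.70:5601/api/status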


 
