【ELK】Heima tutorial: ELK study notes

Reference: https://www.bilibili.com/video/BV1iJ411c7Az?p=63

ELK is short for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth tool, Filebeat, has since joined the stack: a lightweight log collector (agent) that uses few resources, which makes it well suited to gathering logs on individual servers and shipping them to Logstash.

1. Elasticsearch: data storage and search

Installing ES on Linux

# Create an elsearch user; Elasticsearch cannot run as root
useradd elsearch
# Unpack the archive
tar -xvf elasticsearch-6.5.4.tar.gz -C /itcast/es/
  • Modify the configuration
# Edit the config file
vim conf/elasticsearch.yml
network.host: 0.0.0.0 # bind address; 0.0.0.0 allows access from any network
# Note: when network.host is not localhost or 127.0.0.1, Elasticsearch treats the node as a
# production deployment and enforces stricter bootstrap checks. A test machine often cannot
# satisfy them, so two settings usually need to be changed:
# 1: JVM startup parameters
vim conf/jvm.options
-Xms128m # adjust to your machine
-Xmx128m

# 2: maximum number of memory map areas (VMAs) a process may create
vim /etc/sysctl.conf
vm.max_map_count=655360
sysctl -p # apply the change
  • Starting and stopping the ES service
su - elsearch
cd bin
./elasticsearch    # or ./elasticsearch -d to start in the background
# Test by opening http://<server ip>:9200; output like the following means ES started successfully
{
  "name": "ZO1vdaQ",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "ibiBX0_uQgmRcYV4h55J1A",
  "version": {
    "number": "6.5.4",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "d2ef93d",
    "build_date": "2018-12-17T21:17:40.758843Z",
    "build_snapshot": false,
    "lucene_version": "7.5.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}

# Stop the service
root@itcast:~# jps
68709 Jps
68072 Elasticsearch
kill 68072 # end the process with kill

Installation errors

# Startup error:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at
least [65536]
# Fix: as root, edit limits.conf and add the lines below
vi /etc/security/limits.conf
# The change only takes effect on the next login: exit, then su - elsearch again
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

[2]: max number of threads [1024] for user [elsearch] is too low, increase to at least
[4096]
# Fix: as root, edit the config file under the limits.d directory.
vi /etc/security/limits.d/[xx]-nproc.conf
# Change:
* soft nproc 1024
# to:
* soft nproc 4096

[3]: system call filters failed to install; check the logs and fix your configuration
or disable system call filters at your own risk
# Fix: CentOS 6 does not support SecComp, while ES 5.2.0+ defaults bootstrap.system_call_filter to true
vim config/elasticsearch.yml
# Add:
bootstrap.system_call_filter: false

elasticsearch-head

ES officially ships only the backend service and provides no management UI. elasticsearch-head is a web client developed for ES; its source is hosted on GitHub at https://github.com/mobz/elasticsearch-head

Four installation methods (per the project README): from source with the built-in server (npm run start), via Docker, as a Chrome extension, or as an ES plugin (deprecated).

Note:

Because the front end is developed and served separately from ES, requests from head to ES are cross-origin, so CORS must be configured on the ES side:

1. vim elasticsearch.yml

2. Add:

http.cors.enabled: true
http.cors.allow-origin: "*"

Installing via the Chrome extension does not have this problem.

IK analyzer (Chinese tokenizer)

Elasticsearch plugin address: https://github.com/medcl/elasticsearch-analysis-ik

  • Install:
# Installation: unzip elasticsearch-analysis-ik-6.5.4.zip into elasticsearch/plugins/ik under the ES install directory.
mkdir plugins/ik
# unzip
unzip elasticsearch-analysis-ik-6.5.4.zip
# restart ES
./bin/elasticsearch
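
To confirm the plugin loaded, you can call the _analyze API with one of the analyzers IK registers (ik_max_word for the finest-grained segmentation, ik_smart for the coarser one). A minimal sketch using the low-level REST client from the "Java client" section below; the host and the sample text are placeholders:

// Sketch: verify the IK analyzer through the _analyze endpoint.
// Uses org.elasticsearch.client.RestClient/Request/Response and org.apache.http.util.EntityUtils.
RestClient client = RestClient.builder(new HttpHost("192.168.43.128", 9200, "http")).build();
Request request = new Request("POST", "/_analyze");
// ik_max_word splits the text into the finest-grained terms
request.setJsonEntity("{\"analyzer\": \"ik_max_word\", \"text\": \"南京西路拎包入住\"}");
Response response = client.performRequest(request);
System.out.println(EntityUtils.toString(response.getEntity())); // token list as JSON
client.close();

If the plugin did not load, ES responds with an error saying the analyzer cannot be found.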

Java client

  • Dependencies
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.4</version>
</dependency>
<!-- itcast ES high-level client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>6.5.4</version>
</dependency>
REST low-level client
public class RestEsBase {
    private static final Logger LOGGER = LoggerFactory.getLogger(RestEsBase.class);
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private RestClient restClient;

    // Initialization
    @Before
    public void init(){
        RestClientBuilder restClientBuilder = RestClient.builder(
//                new HttpHost("192.168.43.128", 9200,  "http"),
//                ... more hosts can be added for a cluster
                new HttpHost("192.168.43.128", 9200,  "http")
        );
        restClientBuilder.setFailureListener(new RestClient.FailureListener(){
            @Override
            public void onFailure(Node node) {
                LOGGER.error("is error..." + node);
            }
        });
        this.restClient = restClientBuilder.build();
    }

    // Close
    @After
    public void after() throws IOException{
        restClient.close();
    }

    // Query the cluster state
    @Test
    public void testGetInfo() throws IOException {
        Request request = new Request("GET", "/_cluster/state");
        request.addParameter("pretty", "true");
        Response response = this.restClient.performRequest(request);
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));
    }

    // Create a document
    @Test
    public void testCreateDate() throws IOException {
        Request request = new Request("POST", "/haoke/house");
        Map<String, Object> data = new HashMap<>();
        data.put("id","2001");
        data.put("title","张江高科");
        data.put("price","3500");
        request.setJsonEntity(MAPPER.writeValueAsString(data));
        Response response = this.restClient.performRequest(request);
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));
    }

    // Delete
    @Test
    public void deleteDate() throws IOException {
        Request request = new Request("DELETE", "/haoke/house/s6Go-XwB6CaVutaqNdyL");
        Response response = this.restClient.performRequest(request);
        System.out.println(EntityUtils.toString(response.getEntity()));
    }

    // Get by id
    @Test
    public void testQueryData() throws IOException{
        Request request = new Request("GET", "/haoke/house/uqGm-nwB6CaVutaqcNw7");
        Response response = this.restClient.performRequest(request);
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));
    }
}
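
A search can go through the same low-level client by posting query DSL to the _search endpoint. A minimal sketch, written as a test method that could sit in the class above (it reuses the restClient field; index, type, and field names follow the earlier examples; the method name is made up here):

    // Search: POST query DSL to /index/type/_search (reuses the restClient field above)
    @Test
    public void testSearchLowLevel() throws IOException {
        Request request = new Request("POST", "/haoke/house/_search");
        request.setJsonEntity("{\"query\": {\"match\": {\"title\": \"拎包入住\"}}}");
        Response response = this.restClient.performRequest(request);
        System.out.println(EntityUtils.toString(response.getEntity())); // raw JSON hits
    }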
REST high-level client
public class RestEsBaseHighLevel {
    private static final Logger LOGGER = LoggerFactory.getLogger(RestEsBaseHighLevel.class);
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private RestHighLevelClient restHighLevelClient;

    @Before
    public void init(){
        RestClientBuilder restClientBuilder = RestClient.builder(
//                new HttpHost("192.168.43.128", 9200,  "http"),
//                ... more hosts can be added for a cluster
                new HttpHost("192.168.43.128", 9200,  "http")
        );
        this.restHighLevelClient = new RestHighLevelClient(restClientBuilder);
    }

    @After
    public void after() throws IOException{
        restHighLevelClient.close();
    }

    // Create, synchronous
    @Test
    public void testCreate() throws IOException {
        Map<String, Object> data = new HashMap<>();
        data.put("id", "2002");
        data.put("title", "南京西路 拎包入住 一室一厅");
        data.put("price", "4500");

        IndexRequest indexRequest = new IndexRequest("haoke", "haose").source(data);
        IndexResponse indexResponse = this.restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
        System.out.println("id:" + indexResponse.getId());
        System.out.println("index:" + indexResponse.getIndex());
        System.out.println("type:" + indexResponse.getType());
        System.out.println("version:" + indexResponse.getVersion());
        System.out.println("result:" + indexResponse.getResult());
        System.out.println("shardInfo:" + indexResponse.getShardInfo());
    }

    // Create, asynchronous
    @Test
    public void testCreateAsync() throws Exception {
        Map<String, Object> data = new HashMap<>();
        data.put("id", "2003");
        data.put("title", "南京东路 最新房源 二室一厅");
        data.put("price", "5500");
        IndexRequest indexRequest = new IndexRequest("haoke", "house").source(data);
        this.restHighLevelClient.indexAsync(
            indexRequest,
            RequestOptions.DEFAULT,
            new ActionListener<IndexResponse>() {
                @Override
                public void onResponse(IndexResponse indexResponse) {
                    System.out.println("id:" + indexResponse.getId());
                    System.out.println("index:" + indexResponse.getIndex());
                    System.out.println("type:" + indexResponse.getType());
                    System.out.println("version:" + indexResponse.getVersion());
                    System.out.println("result:" + indexResponse.getResult());
                    System.out.println("shardInfo:" + indexResponse.getShardInfo());
                }

                @Override
                public void onFailure(Exception e) {
                    System.out.println(e);
                }
            }
        );
        System.out.println("ok");
        Thread.sleep(20000);
    }

    // Get by id
    @Test
    public void testQuery() throws IOException {
        GetRequest getRequest = new GetRequest("haoke", "house", "vaHB-nwB6CaVutaq-tzA");
        // restrict the returned fields
        String[] includes = new String[]{"title", "id"};
        String[] excludes = Strings.EMPTY_ARRAY;
        FetchSourceContext fetchSourceContext = new FetchSourceContext(true, includes, excludes);
        getRequest.fetchSourceContext(fetchSourceContext);
        GetResponse response = this.restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
        System.out.println("data: " + response.getSource());
    }

    @Test
    public void testQuery2() throws IOException {
        // RestEsUtils is the author's own wrapper utility (not shown in this post)
        RestEsUtils.init(new HttpHost("192.168.43.128", 9200,  "http"));
        String[] includes = new String[]{"title", "id"};
        Map<String, Object> query = RestEsUtils.query("haoke", "house", "vaHB-nwB6CaVutaq-tzA", includes);
        System.out.println(query);
    }

    // Check whether a document exists
    @Test
    public void testExists() throws IOException {
        GetRequest getRequest = new GetRequest("haoke", "house", "vaHB-nwB6CaVutaq-tzA");
        boolean exists = this.restHighLevelClient.exists(getRequest, RequestOptions.DEFAULT);
        System.out.println("exist:" + exists);
    }

    // Delete
    @Test
    public void testDelete() throws IOException {
        DeleteRequest deleteRequest = new DeleteRequest("haoke", "house", "vaHB-nwB6CaVutaq-tzA");
        DeleteResponse response = this.restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
        System.out.println(response.status());
    }

    // Update a document
    @Test
    public void testUpdate() throws Exception {
        UpdateRequest updateRequest = new UpdateRequest("haoke", "house", "uqGm-nwB6CaVutaqcNw7");
        Map<String, Object> data = new HashMap<>();
        data.put("title", "张江高科2");
        data.put("price", "5000");
        updateRequest.doc(data);
        UpdateResponse response = this.restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
        System.out.println("version:" + response.getVersion());
    }

    // Search
    @Test
    public void testSearch() throws Exception{
        SearchRequest searchRequest = new SearchRequest("haoke");
        searchRequest.types("house");

        // build the query
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        sourceBuilder.query(QueryBuilders.matchQuery("title", "拎包入住"));
        sourceBuilder.from(0);
        sourceBuilder.size(5);
        sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));

        searchRequest.source(sourceBuilder);

        SearchResponse search = this.restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);

        System.out.println("search data count:" + search.getHits().totalHits);
        SearchHits hits = search.getHits();
        for (SearchHit hit : hits) {
            System.out.println(hit.getSourceAsString());
        }
    }
}

2. Kibana: data visualization

Deployment

# Unpack the archive
tar -xvf kibana-6.5.4-linux-x86_64.tar.gz

# Edit the config file
vim config/kibana.yml
server.host: "ip" # address the service is exposed on
elasticsearch.url: "http://<es ip>:9200" # Elasticsearch address

# Start
./bin/kibana

# Open in a browser
http://192.168.40.133:5601/app/kibana

3. Filebeat: lightweight log shipper

Deployment

mkdir ./beats
tar -xvf filebeat-6.5.4-linux-x86_64.tar.gz
cd filebeat-6.5.4-linux-x86_64  # enter the install directory
  • Add a config file: 【...install dir/test.yml】
# Inputs
filebeat.inputs:
#- type: stdin # read from the console
- type: log  # read from log files
  enabled: true
  paths:
    - /home/elsearch/beats/*.log # log path
  tags: ["haoke-im"] # custom tags, useful for downstream processing
  fields: # custom fields
    from: haoke-im
  fields_under_root: true # true: add the fields at the document root; false: nest them under a child node

# Index template settings
setup.template.settings:
  index.number_of_shards: 3 # number of index shards

# Output to the console
#output.console:
#  pretty: true
#  enabled: true

# Output to ES
output.elasticsearch: # ES connection
  hosts: ["192.168.43.128:9200"]
  • Start and feed input
# Start Filebeat from 【...install dir/】
./filebeat -e -c test.yml
# ./filebeat -e -c test.yml -d "publish"
# Flags:
-e: log to stderr instead of the default syslog/log files
-c: specify the config file
-d: print debug messages for the given selectors
# Under the configured input path (/home/elsearch/beats/*.log above), create a.log, type some data, save and exit
  • Check the data in ES
{
    "_index": "filebeat-6.5.4-2021.11.08",
    "_type": "doc",
    "_id": "WXT4_3wBfzb1yMzuFiLV",
    "_version": 1,
    "_score": 1,
    "_source": {
        "@timestamp": "2021-11-08T14:33:39.569Z",
        "message": "123",
        "host": {
            "name": "localhost.localdomain"
        },
        "source": "/home/elsearch/beats/a.log",
        "offset": 12,
        "input": {
            "type": "log"
        },
        "from": "haoke-im",
        "beat": {
            "version": "6.5.4",
            "name": "localhost.localdomain",
            "hostname": "localhost.localdomain"
        },
        "tags": [
            "haoke-im"
        ],
        "prospector": {
            "type": "log"
        }
    }
}
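
Filebeat created the filebeat-6.5.4-<date> index on its own, since no index name was configured. To read such documents back programmatically, the low-level REST client from section 1 works unchanged; a minimal sketch (the host is a placeholder, the index pattern matches the _index shown above):

// Sketch: search the Filebeat indices with the low-level REST client
RestClient client = RestClient.builder(new HttpHost("192.168.43.128", 9200, "http")).build();
Request request = new Request("GET", "/filebeat-6.5.4-*/_search");
request.addParameter("pretty", "true");
Response response = client.performRequest(request);
System.out.println(EntityUtils.toString(response.getEntity())); // documents like the one above
client.close();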

Reading nginx logs

  • Download nginx: http://nginx.org/en/download.html
  • Upload and unpack
mkdir nginx
tar -xvf nginx-1.11.6.tar.gz
yum -y install pcre-devel zlib-devel
./configure
make install

# Start
cd 【...install dir/sbin/】
./nginx
# Visit the page in a browser and watch the log
# URL: http://<server ip>/
tail -f 【...install dir/logs/access.log】
  • Add a config file
# Inputs
filebeat.inputs:
#- type: stdin
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/*.log
  tags: ["nginx"] # custom tags, useful for downstream processing
  fields: # custom fields
    from: nginx-log
  fields_under_root: true # true: add the fields at the document root; false: nest them under a child node

# Index template settings
setup.template.settings:
  index.number_of_shards: 3 # number of index shards

# Output to the console
#output.console:
#  pretty: true
#  enabled: true

# Output to ES
output.elasticsearch: # ES connection
  hosts: ["192.168.43.128:9200"]
  • After starting, hit nginx in the browser; the index and its data show up in Elasticsearch

Module

So far, reading and processing the log data has all been configured by hand. Filebeat actually ships with a large number of modules that provide ready-made configuration and can be used directly:

# Directory: 【...install dir/】
# Command
./filebeat modules list

# Output
Enabled:

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
suricata
system
traefik
Using the nginx module
  • Enable or disable a module, e.g.:
./filebeat modules enable nginx # enable
./filebeat modules disable nginx # disable
  • Review and edit the nginx module config
# enter the modules directory
cd modules.d/

# Edit nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    var.paths: ["/usr/local/nginx/logs/access.log*"]
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Error logs
  error:
    enabled: true
    var.paths: ["/usr/local/nginx/logs/error.log*"]
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  • Add or adjust the Filebeat startup config
# nginx-conf-module.yml
# Index template settings
setup.template.settings:
  index.number_of_shards: 3 # number of index shards

# Output to ES
output.elasticsearch: # ES connection
  hosts: ["192.168.43.128:9200"]

# Enable modules
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  • Start Filebeat
./filebeat -e -c nginx-conf-module.yml

# Error
2021-11-13T16:23:09.788+0800	ERROR	fileset/factory.go:142	Error loading pipeline: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
    sudo bin/elasticsearch-plugin install ingest-user-agent
    sudo bin/elasticsearch-plugin install ingest-geoip

# Fix: install the ingest-user-agent and ingest-geoip plugins in Elasticsearch, either with the commands above or offline
# Offline install needs three files: ingest-user-agent.tar, ingest-geoip.tar, ingest-geoip-conf.tar
# unpack ingest-user-agent.tar and ingest-geoip.tar into plugins/
# unpack ingest-geoip-conf.tar into config/
# restart ES
  • Refresh nginx and check the data in ES

4. Logstash: data processing

Deployment

# Check the JDK: Logstash requires JDK 1.8+
java -version
# Unpack the archive
tar -xvf logstash-6.5.4.tar.gz

Reading a custom log

  • Add a config file: 【test-pipeline.conf】
# Input
input {
  file { 
    path => "/home/elsearch/logstash/logs/app.log"
    start_position => "beginning"
  }
}

# Filter
filter {
  mutate {
    split => {"message"=>"|"}
  }
}

# Output
output {
  stdout {
    codec => rubydebug
  }
}
  • Start and test
# Start
./bin/logstash -f ./test-pipeline.conf

# Append a log line to the file
cd /home/elsearch/logstash/logs
echo "2019-03-15 21:21:21|ERROR|读取数据出错|参数:id=1002" >> app.log
# Resulting output
{
       "message" => [
        [0] "2019-03-15 21:21:21",
        [1] "ERROR",
        [2] "读取数据出错",
        [3] "参数:id=1002"
    ],
          "host" => "hadoop01",
    "@timestamp" => 2021-11-14T06:26:14.291Z,
          "path" => "/home/elsearch/logstash/logs/app.log",
      "@version" => "1"
}

Parsing the data and writing it to ES

  • Add a config file
# Input
input {
  file { 
    path => "/home/elsearch/logstash/logs/app.log"
    start_position => "beginning"
  }
}

# Filter
filter {
  mutate {
    split => {"message"=>"|"}
  }
}

# Output
output {
#  stdout {
#    codec => rubydebug	
#  }
  elasticsearch {
    hosts => ["192.168.43.128:9200"]
  }
}

  • Append a log line to the file to test the output
cd /home/elsearch/logstash/logs
echo "2019-03-15 21:21:21|ERROR|读取数据出错|参数:id=1002" >> app.log

5. Putting ELK together to collect logs

Integrate Elasticsearch + Logstash + Beats + Kibana.

5-1. Prepare the project: test-elk

  • Dependencies
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
        <exclusions>
            <exclusion>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-classic</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.3.2</version>
    </dependency>
    <dependency>
        <groupId>joda-time</groupId>
        <artifactId>joda-time</artifactId>
        <version>2.9.9</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.26</version>
    </dependency>
</dependencies>
  • Log config file (log4j.properties)
log4j.rootLogger=DEBUG,A1,A2

log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=[%p] %-d{yyyy-MM-dd HH:mm:ss} [%c] - %m%n

log4j.appender.A2 = org.apache.log4j.DailyRollingFileAppender
log4j.appender.A2.File = /home/elsearch/logstash/logs/app.log
log4j.appender.A2.Append = true
log4j.appender.A2.Threshold = INFO
log4j.appender.A2.layout = org.apache.log4j.PatternLayout
log4j.appender.A2.layout.ConversionPattern =[%p] %-d{yyyy-MM-dd HH:mm:ss} [%c] - %m%n
  • A Spring Boot app that simulates user activity
package cn.itcast.dashboard;

import org.apache.commons.lang3.RandomUtils;
import org.joda.time.DateTime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Main {

    private static final Logger LOGGER = LoggerFactory.getLogger(Main.class);

    public static final String[] VISIT = new String[]{"浏览页面", "评论商品", "加入收藏", "加入购物车", "提交订单", "使用优惠券", "领取优惠券", "搜索", "查看订单"};

    public static void main(String[] args) throws Exception {
        while(true){
            Long sleep = RandomUtils.nextLong(200, 1000 * 5);
            Thread.sleep(sleep);
            Long maxUserId = 9999L;
            Long userId = RandomUtils.nextLong(1, maxUserId);
            String visit = VISIT[RandomUtils.nextInt(0, VISIT.length)];
            DateTime now = new DateTime();
            int maxHour = now.getHourOfDay();
            int maxMillis = now.getMinuteOfHour();
            int maxSeconds = now.getSecondOfMinute();
            String date = now.plusHours(-(RandomUtils.nextInt(0, maxHour)))
                    .plusMinutes(-(RandomUtils.nextInt(0, maxMillis)))
                    .plusSeconds(-(RandomUtils.nextInt(0, maxSeconds)))
                    .toString("yyyy-MM-dd HH:mm:ss");

            String result = "DAU|" + userId + "|" + visit + "|" + date;
            LOGGER.info(result);
            Thread.sleep(1*60*1000);
        }
    }
}
  • Build the jar, upload it to Linux and run it (Filebeat does the log collection, so the app must be deployed on the same machine as Filebeat)
# Once running, log lines are written to the app.log file
java -jar test-elk-1.0-SNAPSHOT.jar

5-2. Start ES

# Start ES from its install directory
./bin/elasticsearch

5-3. Configure and start Logstash

  • Logstash parses the data and sends it to ES.
# ===== add a config file under the Logstash install directory: 【test-elk.conf】
# Input
input {
  beats {
    port => "5044"
    codec => json
    client_inactivity_timeout => 36000
  }
}

filter {
  mutate {
    split => {"message"=>"|"}
  }

  mutate {
    add_field => {
      "userId" => "%{[message][1]}"
      "visit" => "%{[message][2]}"
      "date" => "%{[message][3]}"
    }
  }

  mutate {
    convert => {
      "userId" => "integer"
      "visit" => "string"
      "date" => "string"
    }
  }
}

# Output
output {
  elasticsearch {
    hosts => ["192.168.43.128:9200"]
    codec => "json"
  }
}


## ===== start
./bin/logstash -f test-elk.conf
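
To see what the filter does, take a line the demo app writes through log4j (pattern "[%p] %-d{yyyy-MM-dd HH:mm:ss} [%c] - %m%n", payload "DAU|userId|visit|date"), for example:

[INFO] 2021-11-14 12:30:45 [cn.itcast.dashboard.Main] - DAU|6666|加入购物车|2021-11-14 01:05:02

The first mutate splits message on "|" into an array: element 0 is the log4j prefix ending in "DAU", and elements 1-3 are the user id, the action, and the timestamp. The second mutate copies them into the fields userId, visit, and date, and the third converts userId to an integer before the event is sent to ES. (The sample line above is illustrative; real values come from the running app.)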

5-4. Configure and start Filebeat

  • Filebeat collects the logs and forwards them to Logstash.
## ===== add a config file under the Filebeat install directory: 【test-elk.yml】
# Inputs
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/elsearch/logstash/logs/*.log

# Index template settings
setup.template.settings:
  index.number_of_shards: 3 # number of index shards

# Output to Logstash
output.logstash:
  hosts: ["192.168.43.129:5044"]

## ===== start
./filebeat -e -c test-elk.yml

5-5. View the data in ES

# Start Kibana from its install directory. If no data is being written to ES, delete the ES index and retry.
./bin/kibana
  • Create an index pattern (screenshot omitted)

  • View the data (screenshot omitted)

  • Alternatively, view the data through the elasticsearch-head plugin (screenshot omitted)
