Setting Up Single-Node ELK Log Collection

1. Installing Elasticsearch

(1) Operating system tuning (required; Elasticsearch will not start without it)

[1] Kernel parameters
Add the following to /etc/sysctl.conf:

fs.file-max=655360
vm.max_map_count=655360

Apply the changes:

sysctl -p

Explanation:
(1) fs.file-max=655360
The system-wide maximum number of open file descriptors.

(2) vm.max_map_count=655360
The maximum number of memory map areas (virtual memory regions) a single process may own.
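A quick way to confirm the kernel picked up both values after `sysctl -p` is to read them back from /proc (assuming a Linux host):

```shell
# Read the effective kernel parameters back from /proc;
# after sysctl -p has been applied, both should report 655360.
cat /proc/sys/fs/file-max
cat /proc/sys/vm/max_map_count
```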

[2] Edit /etc/security/limits.conf

vim /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited

Explanation:
(nofile) maximum number of open file descriptors
(nproc) maximum number of user processes
(memlock) maximum locked-in-memory address space
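These limits only take effect for new login sessions. After logging back in as the es user, you can check the effective values with ulimit (the exact numbers depend on your PAM configuration):

```shell
# Show the effective per-session limits for the current shell.
ulimit -n   # nofile: max open file descriptors
ulimit -u   # nproc: max user processes
ulimit -l   # memlock: max locked memory
```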

(2) Create a directory and upload the Elasticsearch archive
mkdir /opt/elasticsearch
tar -zxvf elasticsearch-7.4.2-linux-x86_64.tar.gz
(3) Edit the ES configuration file
vim ./config/elasticsearch.yml
cluster.name: my-application    # cluster name

node.name: node-1        # node name; purely descriptive, used to tell nodes apart in the logs
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /opt/elasticsearch/data                 # default data directory
#
# Path to log files:
#
path.logs: /opt/elasticsearch/logs                  # default log directory

#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["172.31.0.24"]  # IP addresses of the cluster's nodes
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"] # initial master-eligible nodes
#
# For more information, consult the discovery and cluster formation module documentation.

(4) Create a user to run Elasticsearch
adduser es                             # create a user named es
chown -R es:es /opt/elasticsearch/     # give the es user ownership of the directory
su es                                  # switch to the es user (su root switches back to root)
(5) Install the IK analyzer plugin

Create an ik directory under /opt/elasticsearch/elasticsearch/plugins:

mkdir /opt/elasticsearch/elasticsearch/plugins/ik

Upload elasticsearch-analysis-ik-7.4.2.zip into the ik directory and extract it:

yum install -y unzip       # install the unzip utility

unzip elasticsearch-analysis-ik-7.4.2.zip    # extract the archive

List the loaded plugins from the bin directory:

./elasticsearch-plugin list
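After restarting the node, you can confirm the IK plugin actually works by sending a test request to the `_analyze` API. This is a sketch that assumes the node is already running on 127.0.0.1:9200; `ik_smart` is one of the two analyzers the plugin registers (the other is `ik_max_word`):

```shell
# Tokenize a sample string with the IK analyzer
# (requires a running node; restart ES after installing the plugin).
curl -s -H 'Content-Type: application/json' \
  -X POST 'http://127.0.0.1:9200/_analyze' \
  -d '{"analyzer": "ik_smart", "text": "中华人民共和国"}'
```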
(6) Start ES

From the elasticsearch bin directory
(must be run as the es user):

./elasticsearch        # start in the foreground
./elasticsearch -d     # start in the background

http://127.0.0.1:9200/_cat/nodes?v&s=index     # view node information

http://127.0.0.1:9200/_cat/indices?v           # view index information
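Besides the _cat endpoints above, the cluster health API gives a one-line verdict on whether the node came up correctly (assuming the default port 9200):

```shell
# "status" should be green, or yellow if any index still has replicas > 0
# (on a single node there is nowhere to place replica shards).
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
```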

2. Installing Kibana

(1) Upload the archive, extract it, and rename the directory
tar -zxvf kibana-7.4.2-linux-x86_64.tar.gz
mv ./kibana-7.4.2 /opt/kibana
chown -R es:es /opt/kibana
(2) Configure config/kibana.yml
server.port: 5601   # port

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"     # allow connections from remote users

server.name: "my_kibana"     # server name

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://172.31.0.24:9200"]  # Elasticsearch address

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
i18n.locale: "zh-CN"    # Chinese UI

(3) Start Kibana
su es                # run as the es user

bin/kibana           # start in the foreground

nohup bin/kibana &   # start in the background

3. Installing Logstash

(1) Upload the archive and extract it
tar -zxvf /opt/elasticsearch/logstash-7.4.2.tar.gz
cd /opt/elasticsearch/logstash-7.4.2

(2) Create a config that collects the nginx access log
vim nginxtest.conf
input{
  file{
      path => "/var/log/nginx/access.log"  # log file to collect - note: use the full path
      type => "logstash_log"               # assign a type to the events
      start_position => "beginning"        # read the file from the beginning
  }
}

# Without this filter, @timestamp is 8 hours behind local time
filter {
    ruby { 
        code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)" 
    }

    ruby {
        code => "event.set('@timestamp',event.get('timestamp'))"
    }

    mutate {
        remove_field => ["timestamp"]
    }
}
## With the filter applied, @timestamp is 8 hours ahead of UTC
# Alternatively, set Kibana's time zone to GMT+0 (by default it uses the browser's time zone)
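The 8-hour shift is simply the offset between UTC and China Standard Time (Asia/Shanghai, UTC+8); the two ruby filters above add that offset to @timestamp. The same offset is easy to see with GNU date:

```shell
# The same instant (epoch 0) rendered in UTC and in CST differs by 8 hours.
date -u -d @0                  # 00:00:00 UTC
TZ=Asia/Shanghai date -d @0    # 08:00:00 CST
```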


output{
    elasticsearch{                        # send events to Elasticsearch
    hosts => ["http://127.0.0.1:9200"]    # Elasticsearch address and port
    index => "log-%{+YYYY.MM.dd}"         # index name; with a "log-" prefix, use "log-*" as the index pattern in Kibana
    # use a custom index template
    template => "/opt/elasticsearch/logstash-7.4.2/nginxtest.json"
    template_name => "log-*"
    template_overwrite => true

    }
}
(3) Create the template file nginxtest.json

The stock template for Elasticsearch 7.x:

https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/main/lib/logstash/outputs/elasticsearch/templates/ecs-disabled/elasticsearch-7x.json

On single-node ES the default replica count for new indices is 1, so newly created indices would be left with unassigned shards; the template below therefore sets the replica count to 0.

vim nginxtest.json
{
  "index_patterns" : "log-*",
  "version" : 60001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "index.number_of_replicas" : 0,     //副本数改为0
    "number_of_shards": 1
  },
  "mappings" : {
    "dynamic_templates" : [ {
      "message_field" : {
        "path_match" : "message",
        "match_mapping_type" : "string",
        "mapping" : {
          "type" : "text",
          "norms" : false
        }
      }
    }, {
      "string_fields" : {
        "match" : "*",
        "match_mapping_type" : "string",
        "mapping" : {
          "type" : "text", "norms" : false,
          "fields" : {
            "keyword" : { "type": "keyword", "ignore_above": 256 }
          }
        }
      }
    } ],
    "properties" : {
      "@timestamp": { "type": "date"},
      "@version": { "type": "keyword"},
      "geoip"  : {
        "dynamic": true,
        "properties" : {
          "ip": { "type": "ip" },
          "location" : { "type" : "geo_point" },
          "latitude" : { "type" : "half_float" },
          "longitude" : { "type" : "half_float" }
        }
      }
    }
  }
}
(4) Start Logstash
./bin/logstash -f nginxtest.conf -t   # validate the config file

./bin/logstash -f nginxtest.conf      # start logstash
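Once Logstash has shipped at least one event, you can verify that the custom template was installed and that the daily index was created with zero replicas (assuming ES on 127.0.0.1:9200):

```shell
# The template uploaded by the output block above:
curl -s 'http://127.0.0.1:9200/_template/log-*?pretty'
# The daily indices it creates; the "rep" column should be 0 and health green:
curl -s 'http://127.0.0.1:9200/_cat/indices/log-*?v'
```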

4. Viewing Index Information

