ELK Logging Service

This article walks through deploying and managing the ELK (Elasticsearch, Logstash, Kibana) stack: installing and configuring Elasticsearch, including plugins such as elasticsearch-head and cerebro; installing and configuring Filebeat to collect Nginx and Tomcat logs; viewing and managing indices in Kibana; and installing and configuring Logstash, covering reading from different sources, transforming log formats, and shipping the results to Elasticsearch. It closes with a Logstash configuration example that uses Kafka as a buffering layer.


1. ELK Overview

Logstash handles format conversion and processing, then ships the data to Elasticsearch (ES).

ES concepts compared with MySQL:

ES          MySQL
index       database
type        table
field       column
document    row / record


2. ELK Deployment and Management

2.1 Elasticsearch Installation

Give each node at least 4 GB of RAM; with less memory Elasticsearch will not start.

es-node1.luo.org 192.168.1.136
es-node2.luo.org 192.168.1.137
es-node3.luo.org 192.168.1.138

Configure name resolution for these hosts (e.g. in /etc/hosts).

#Adjust kernel parameters
echo "vm.max_map_count = 262144" >> /etc/sysctl.conf && sysctl -p
echo "fs.file-max = 1000000" >> /etc/sysctl.conf   #not needed on Ubuntu; the default is already large
sysctl -p
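The `echo >> /etc/sysctl.conf` approach appends a duplicate line every time the setup is re-run. A minimal idempotent sketch, using a temp file as a stand-in for /etc/sysctl.conf so it is self-contained (on a real host, point `conf` at /etc/sysctl.conf and follow with `sysctl -p`):

```shell
# Append a sysctl key only if it is not already present,
# so re-running setup does not duplicate lines.
conf=$(mktemp)
set_sysctl() {
  local key="$1" value="$2"
  grep -q "^${key}[ =]" "$conf" || echo "${key} = ${value}" >> "$conf"
}
set_sysctl vm.max_map_count 262144
set_sysctl vm.max_map_count 262144   # second call is a no-op
set_sysctl fs.file-max 1000000
cat "$conf"
```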

Adjust resource limits (open files, locked memory) as needed.

Download the packages:

https://www.elastic.co/cn/downloads/enterprise-search
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.9.0-linux-x86_64.tar.gz

https://mirrors.tuna.tsinghua.edu.cn/elasticstack/8.x/apt/pool/main/e/elasticsearch/elasticsearch-8.9.0-amd64.deb

Configuration file reference:

[root@es-node1 ~]# grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml
#ELK cluster name; every node in the same cluster must use the same value. A new node whose cluster.name matches can join the cluster without further verification
cluster.name: ELK-Cluster
#This node's name within the cluster; must be unique per node
node.name: es-node1
#ES data directory
path.data: /data/es-data
#ES log directory
path.logs: /data/es-logs
#Lock enough memory at startup to keep ES data out of swap and speed up startup
bootstrap.memory_lock: true
#Listen address; if the wrong IP gets bound, set this to a specific IP
network.host: 0.0.0.0
#Listen port
http.port: 9200
#Seed node list for cluster discovery; may contain some or all node IPs

#When adding a node to an existing cluster, list at least one node that is already in the cluster
discovery.seed_hosts: ["10.0.0.101","10.0.0.102","10.0.0.103"]
#Nodes eligible for election as master when the cluster is first bootstrapped; used only at initialization, and may be omitted when adding nodes to an existing cluster
cluster.initial_master_nodes: ["10.0.0.101","10.0.0.102","10.0.0.103"]
#Number of nodes that must be up before data recovery starts; default 1. Usually set to more than half of all nodes to help avoid split-brain
#If the cluster cannot start, lower this to 1 or comment the line out for a quick recovery
gateway.recover_after_nodes: 2
#Whether indices may be deleted or closed using wildcards or _all. true means the exact index name must be given; recommended in production to prevent accidental deletion
action.destructive_requires_name: true
#Do not participate in master election (removed in ES 8 in favor of node.roles)
node.master: false
#Whether this node stores data; false makes it a routing-only (coordinating) node (removed in ES 8 in favor of node.roles)
#When changing true to false, first run /usr/share/elasticsearch/bin/elasticsearch-node repurpose to clean up the data
node.data: true
#Deprecated since 7.x; used in 2.x/5.x/6.x to configure the node discovery list
discovery.zen.ping.unicast.hosts: ["10.0.0.101", "10.0.0.102","10.0.0.103"]
ES 7 configuration:
grep -v "#" /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster
node.name: es-node2          #the only line that differs between nodes; must be unique per node
path.data: /data/es-data
path.logs: /data/es-logs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.1.136", "192.168.1.137","192.168.1.138"]
cluster.initial_master_nodes: ["192.168.1.136", "192.168.1.137","192.168.1.138"]
gateway.recover_after_nodes: 2  #this parameter was removed in ES 8
Startup errors seen in the journal:

Aug 03 22:15:18 es-node1.luo.org systemd-entrypoint[34943]: 2023-08-03 14:15:18,096 main ERROR Unable to locate appender "rolling" for logger config "root"
Aug 03 22:15:18 es-node1.luo.org systemd-entrypoint[34943]: 2023-08-03 14:15:18,096 main ERROR Unable to locate appender "rolling_old" for logger config "root"
Aug 03 22:15:18 es-node1.luo.org systemd-entrypoint[34943]: 2023-08-03 14:15:18,097 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logger config "index.indexing.slowlog.index"
Aug 03 22:15:18 es-node1.luo.org systemd-entrypoint[34943]: 2023-08-03 14:15:18,097 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger config "index.search.slowlog"
Aug 03 22:15:18 es-node1.luo.org systemd-entrypoint[34943]: 2023-08-03 14:15:18,097 main ERROR Unable to locate appender "deprecation_rolling" for logger config "org.elasticsearch.deprecation"
Aug 03 22:15:18 es-node1.luo.org systemd-entrypoint[34943]: 2023-08-03 14:15:18,097 main ERROR Unable to locate appender "audit_rolling" for logger config "org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail"
Aug 03 22:15:27 es-node1.luo.org systemd-entrypoint[34865]: ERROR: Elasticsearch did not exit normally - check the logs at /data/es-logs/es-cluster.lo
Grant permissions on the log directory:
sudo chmod 755 /data/es-logs/
2023-08-03T14:16:30,171][ERROR][o.e.b.Elasticsearch      ] [es-node2] fatal exception while booting Elasticsearch
java.lang.IllegalArgumentException: unknown setting [gateway.recover_after_nodes] did you mean any of [gateway.recover_after_time, gateway.recover_after_data_nodes]?
	at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:575) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:521) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:491) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:461) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:150) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:55) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.node.Node.<init>(Node.java:484) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.node.Node.<init>(Node.java:334) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:234) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:234) ~[elasticsearch-8.9.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:72) ~[elasticsearch-8.9.0.jar:?]
[2023-08-03T14:16:59,651][INFO ][o.a.l.u.VectorUtilPanamaProvider] [es-node2] Java vector incubator API enabled; uses preferredBitSize=256
This error message indicates that the configuration file contains an unknown setting, "gateway.recover_after_nodes". The message also suggests valid alternatives, including "gateway.recover_after_time" and "gateway.recover_after_data_nodes".

The likely cause:

Elasticsearch version mismatch: the running Elasticsearch version does not support this setting. "gateway.recover_after_nodes" existed in earlier versions but was removed in ES 8; use "gateway.recover_after_data_nodes" instead.

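Problems like this can be caught before restarting a node. A hypothetical pre-flight check that scans the config for settings removed in ES 8 (the temp config below is a stand-in for /etc/elasticsearch/elasticsearch.yml, and the list of keys covers only the settings discussed in this article):

```shell
# Scan an elasticsearch.yml for settings that were removed in ES 8.
conf=$(mktemp)
cat > "$conf" <<'EOF'
cluster.name: es-cluster
gateway.recover_after_nodes: 2
node.data: true
EOF
findings=""
for key in gateway.recover_after_nodes node.master node.data discovery.zen; do
  if grep -q "^${key}" "$conf"; then
    findings="${findings}${key} "
    echo "removed in ES 8, fix before restart: ${key}"
  fi
done
```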
A related startup error appears when transport SSL settings are present but xpack.security.transport.ssl.enabled is not set:

org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml : [xpack.security.transport.ssl.keystore.secure_password,xpack.security.transport.ssl.truststore.secure_password]
at org.elasticsearch.xpack.core.ssl.SSLService.validateServerConfiguration(SSLService.java:650) ~[?:?]

2.2 Installing ES 8 with dpkg

Create the data and log directories and set ownership:
mkdir -p /data/es-data /data/es-logs && chown -R elasticsearch:elasticsearch /data/es*

Edit the configuration:

grep -vE "^$|^#" elasticsearch.yml
cluster.name: es-cluster
node.name: es-node1   #change this line on each node
path.data: /data/es-data
path.logs: /data/es-logs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.1.136", "192.168.1.137","192.168.1.138"]
cluster.initial_master_nodes: ["192.168.1.136", "192.168.1.137","192.168.1.138"]
xpack.security.enabled: false
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

Elasticsearch 8 generates a password by default; setting xpack.security.enabled: false disables security authentication.

Copy the configuration to the other nodes and start the service:

sudo systemctl start elasticsearch.service
scp /etc/elasticsearch/elasticsearch.yml  192.168.1.138:/etc/elasticsearch
http://192.168.1.138:9200/   
root@es-node1:/etc/elasticsearch# curl http://192.168.1.138:9200/
{
  "name" : "es-node3",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "Q0bEUVYVS3u-1ONJdy49VA",
  "version" : {
    "number" : "8.9.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "8aa461beb06aa0417a231c345a1b8c38fb498a0d",
    "build_date" : "2023-07-19T14:43:58.555259655Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Check the cluster status:
root@es-node1:/etc/elasticsearch# curl http://192.168.1.136:9200/_cat/health
1691074699 14:58:19 es-cluster green 3 3 0 0 0 0 0 0 - 100.0%


root@es-node1:/etc/elasticsearch# curl -s  http://192.168.1.136:9200/_cat/health |awk   '{print $4}'
green
The -s flag suppresses curl's progress output.
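The awk extraction of the status column can be wrapped into a simple health check. A sketch that parses a captured _cat/health line so it runs without a live cluster; in practice, replace the `sample` assignment with `curl -s http://192.168.1.136:9200/_cat/health`:

```shell
# The status is the 4th whitespace-separated field of a _cat/health line.
sample='1691074699 14:58:19 es-cluster green 3 3 0 0 0 0 0 0 - 100.0%'
status=$(echo "$sample" | awk '{print $4}')
echo "cluster status: $status"
if [ "$status" = "green" ]; then
  echo "cluster healthy"
fi
```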
Create an index (the rough equivalent of a MySQL database):

root@es-node1:/etc/elasticsearch# curl -XPUT '192.168.1.136:9200/index1'
{"acknowledged":true,"shards_acknowledged":true,"index":"index1"}
List indices:
root@es-node1:/etc/elasticsearch# curl 'http://127.0.0.1:9200/_cat/indices?v'
health status index  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   index1 Av_hoO13RxGqHphUe0Sh2Q   1   1          0            0       450b           225b
 curl '192.168.1.137:9200/index1?pretty'



#Create an index with 3 primary shards and 2 replicas
root@es-node1:/etc/elasticsearch# curl -XPUT '192.168.1.136:9200/index2' -H 'Content-Type: application/json' -d '
{
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 2
   }
 }
}'

View it:
 curl '192.168.1.137:9200/index2?pretty'

#Change the replica count to 1; the shard count cannot be changed after creation
curl -XPUT '192.168.1.137:9200/index2/_settings' -H 'Content-Type: application/json' -d '
{
  "settings": {
      "number_of_replicas": 1
   }
}'


root@es-node1:/etc/elasticsearch# curl '192.168.1.137:9200/index2?pretty'
{
  "index2" : {
    "aliases" : { },
    "mappings" : { },
    "settings" : {
      "index" : {
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        },
        "number_of_shards" : "3",
        "provided_name" : "index2",
        "creation_date" : "1691075649998",
        "number_of_replicas" : "1",
        "uuid" : "KgPw5nCCQt-Amld8MMdCxQ",
        "version" : {
          "created" : "8090099"
        }
      }
    }
  }
}

Insert data

Inserting data in version 7:
[root@node1 ~]#curl -XPOST http://192.168.1.137:9200/index1/book/ -H 'Content-Type: application/json' -d '{
   "name":"linux", "author": "wangxiaochun", "version": "1.0"}'

curl -XPOST 'http://192.168.1.137:9200/index1/book?pretty' -H 'Content-Type: application/json' -d '{
   "name":"python", "author": "xuwei", "version": "1.0"}'

curl -XPOST 'http://192.168.1.137:9200/index1/book/3?pretty' -H 'Content-Type: application/json' -d '{
   "name":"golang", "author": "zhang", "version": "1.0"}'


**Note**
When inserting data in ES 8, the {type} path segment must be replaced with _doc; in the commands above, book becomes _doc.
ES 8 removed the concept of mapping types and uses "_doc" as the single default type. If your cluster is older than 8.0, use the type defined in your index mapping instead of "_doc".

curl -XPOST http://192.168.1.137:9200/index1/_doc/ -H 'Content-Type: application/json' -d '{
   "name":"linux", "author": "wangxiaochun", "version": "1.0"}'


#_id is set to 3
#index1 is the index (database); _doc replaces the old type; 3 is the document (record) id

curl -XPOST 'http://10.0.0.101:9200/index1/_doc/3?pretty' -H 'Content-Type: application/json' -d '{
   "name":"golang", "author": "zhang", "version": "1.0"}'


Insert into index2:
curl -X POST 'http://192.168.1.137:9200/index2/_doc' -H 'Content-Type: application/json' -d '{
  "name": "python",
  "author": "xuwei",
  "version": "1.0"
}'
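Instead of one curl per document, several documents can go in a single request via the standard _bulk endpoint (one action line plus one source line per document, newline-delimited). A sketch that builds such a body for the index1 examples above; send it with `curl -s -XPOST 'http://192.168.1.136:9200/_bulk' -H 'Content-Type: application/x-ndjson' --data-binary "$bulk"`:

```shell
# Build an ES _bulk request body: for each document, an action line
# ({"index":{...}}) followed by the document source on the next line.
bulk=""
for name in linux python golang; do
  bulk="${bulk}{\"index\":{\"_index\":\"index1\"}}
{\"name\":\"${name}\",\"version\":\"1.0\"}
"
done
printf '%s' "$bulk" | wc -l   # 3 docs -> 6 lines
```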

Query all documents:
 curl 'http://192.168.1.136:9200/index1/_search?pretty'
{
  "took" : 65,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 2,  # two documents
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "index1",
        "_id" : "C2IMvIkBKofjOCsuWGu-",
        "_score" : 1.0,
        "_source" : {
          "name" : "linux",
          "author" : "wangxiaochun",
          "version" : "1.0"
        }
      },
      {
        "_index" : "index1", # index name
        "_id" : "3",
        "_score" : 1.0,
        "_source" : {
          "name" : "golang",
          "author" : "zhang",
          "version" : "1.0"
        }
      }
    ]
  }
}
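Scripts often only need the hit count out of a response like the one above. A sketch that extracts hits.total.value with grep from a captured response fragment, for hosts without jq (with jq installed, `curl -s ... | jq .hits.total.value` is the cleaner equivalent):

```shell
# Pull the first "value" number out of a compact _search response.
response='{"took":65,"hits":{"total":{"value":2,"relation":"eq"}}}'
total=$(echo "$response" | grep -o '"value":[0-9]*' | head -1 | cut -d: -f2)
echo "documents matched: $total"
```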

Query by id:
#curl -XGET 'http://localhost:9200/{index}/{type}/{id}'
curl 'http://localhost:9200/index1/_doc/C2IMvIkBKofjOCsuWGu-?pretty'
#C2IMvIkBKofjOCsuWGu- is the document id; ?pretty formats the output
Query the document with id 3:
 curl 'http://localhost:9200/index1/_doc/3'



Delete an index
Format: curl -XDELETE http://<es-server>:9200/<index-name>
Delete a single document:
curl -XDELETE  'http://localhost:9200/index1/_doc/C2IMvIkBKofjOCsuWGu-'

List all indices
curl -XGET 'http://localhost:9200/_cat/indices?v'

2.3 Plugins

2.3.1 elasticsearch-head


In the head UI you can see that index1 has 1 primary shard with 1 replica and index2 has 3 primary shards with 1 replica; the bold-bordered boxes are the primary shards.

Run a search:

Submit data via the head UI:
http://192.168.1.136:9200/index1/_doc/

{"name":"linux", "author": "wangxiaochun", "version": "1.0"}

2.3.2 cerebro plugin

wget https://github.com/lmenezes/cerebro/releases/download/v0.9.4/cerebro_0.9.4_all.deb

apt -y install openjdk-11-jdk


dpkg -i cerebro_0.9.4_all.deb 

The service failed to start; edit the config file:
vim /etc/cerebro/application.conf 
data.path: "/var/lib/cerebro/cerebro.db"
#data.path = "./cerebro.db"

#verify the port is listening
# ss -ntl |grep 9000

http://192.168.1.136:9000/


One limitation of this plugin: index contents cannot be browsed from the cerebro UI.

ES read operations can be served by any copy of a shard, replicas included.

Shard: primary and replica shards. Because ES is a distributed search engine, an index is normally split into parts, and those parts spread across the nodes are the shards.

Replica: the replica policy creates redundant copies of every shard in an index.
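The shard arithmetic for index2 above follows directly: 3 primaries with 2 replicas means each primary has 2 extra copies, for 9 shard copies cluster-wide. A quick check:

```shell
# Total shard copies = primaries * (1 + replicas), as configured for index2.
primaries=3
replicas=2
total_copies=$(( primaries * (1 + replicas) ))
echo "total shard copies: $total_copies"
```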

https://mirrors.tuna.tsinghua.edu.cn/elasticstack/8.x/apt/pool/main/m/metricbeat/metricbeat-8.9.0-amd64.deb

2.4 Installing Kibana

192.168.1.140

 wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/8.x/apt/pool/main/k/kibana/kibana-8.9.0-amd64.deb
Edit the configuration:
vi /etc/kibana/kibana.yml 
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.1.136:9200","http://192.168.1.137:9200","http://192.168.1.138:9200"]

Start the service, then open Kibana in a browser at http://192.168.1.140:5601 (Kibana's default port).

Installing Filebeat

wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/8.x/apt/pool/main/f/filebeat/filebeat-8.9.0-amd64.deb

Read from standard input and write events to a JSON-format file:

cat stdin-file.yml 
filebeat.inputs:
- type: stdin
  enabled: true
output.file:
  path: "/tmp/filebeat"
  filename: filebeat

Run it and watch the log output:
filebeat -e -c stdin-file.yml

cat /tmp/filebeat/filebeat-20230806.ndjson 
{
   "@timestamp":"2023-08-06T01:36:55.556Z",
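Each line of the .ndjson output file is one standalone JSON event. A sketch that extracts the message field from a captured event line; the exact field layout is an assumption based on Filebeat's default output, and the sample text is hypothetical:

```shell
# Extract the "message" field from one Filebeat NDJSON event line.
event='{"@timestamp":"2023-08-06T01:36:55.556Z","message":"hello filebeat","input":{"type":"stdin"}}'
msg=$(echo "$event" | grep -o '"message":"[^"]*"' | cut -d'"' -f4)
echo "$msg"
```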