Elasticsearch, Logstash, Kibana, FileBeat

Install Elasticsearch and edit elasticsearch.yml:

# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
http.enabled: true

Create a dedicated es user (Elasticsearch refuses to run as root), switch to that user, and start Elasticsearch from its bin directory with nohup ./elasticsearch &
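One possible command sequence for the step above (a sketch; /opt/elasticsearch is an assumed install path, adjust to yours):

```shell
# Create a dedicated user and hand the install directory to it.
useradd es
chown -R es:es /opt/elasticsearch

# Switch to the es user and start Elasticsearch in the background.
su - es
cd /opt/elasticsearch/bin
nohup ./elasticsearch &

# Verify the node is up and answering on the HTTP port.
curl http://localhost:9200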

Install Kibana and edit kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 9400 

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

Install Logstash and add a pipeline config, first-pipeline.conf:

input {
	beats {
		port => "9401" # input port that Beats ships to
	}
}

filter {
	grok{
		match => { 		
			"message" => "(?<temMsg>(?<=interface=chat  )(.*)/?)"  # trim the line, keeping what follows "interface=chat  "
		}
	}
	
	json {
   		source => "temMsg" # parse temMsg as JSON
  	}
	
}

output {
    elasticsearch {
        hosts => [ "localhost:9200" ] # output to Elasticsearch
    }
}

Start Logstash with this config: ./logstash -f first-pipeline.conf

On each application server, install Filebeat and edit filebeat.yml:

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/logs/chat_log/*.log
    #- c:\programdata\elasticsearch\logs\*

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.logstash:
  # Array of hosts to connect to.
  hosts: ["47.112.128.98:9401"] # ship to Logstash

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

Start it with: nohup ./filebeat -e -c filebeat.yml -d "publish" &
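Once Filebeat is running, one way to confirm data is flowing end to end is to check the indices Logstash creates on Elasticsearch (the logstash-* name is the Logstash output's default index pattern; adjust the host to yours):

```shell
# New logstash-YYYY.MM.DD entries appearing here means the pipeline works.
curl 'http://localhost:9200/_cat/indices?v'

# Peek at a few recent documents to confirm the grok/json fields look right.
curl 'http://localhost:9200/logstash-*/_search?size=3&pretty'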

Elasticsearch queries

var conds = 
    {
        "size": 5000,
        "body": {
            "query": {
                "bool": {
                    "must":[
                        { "match": { "roleid": roleid } },
                        { "match": { "logtype": tags } }
                    ],
                    
                    "filter": {
                    "range":{
                        "time":{
                            "gte" : stime,
                            "lte" : etime
                        }
                    }                 
                 }
                }
            },
            "sort": { "@timestamp": { "order": "desc" } }
        }
    }
A second query, combining should and must clauses:

var conds = 
    {
        "size": 5000,
        "body": {
            "query": {
                "bool": {
                  "must": {
                    "bool" : { 
                      "should": [
                        { "match": { "logtype": tags }},
                        { "match": { "logtype": tags1 }} 
                      ],
                      "minimum_should_match": 1,  // require at least one should clause to match, otherwise the should block is ignored
                      "must":[
                            { "match": { "gid": gid } },
                            { "match": { "dept": dept } },
                            { "match": { "sid": sid } }
                        ]
                    }
                  },
                  
                  "filter": {
                        "range":{
                            "time":{
                                "gte" : stime,
                                "lte" : etime
                            }
                        }
                    }
                }
            },
            "sort": { "@timestamp": { "order": "desc" } }
        }
    }
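The two query bodies above differ only in their must clauses; a small helper (hypothetical, not from the original code) can build the shared skeleton and keep the size, range filter, and sort in one place:

```javascript
// Build a search request with the given must clauses and a time-range filter.
// The "time" field, size 5000, and @timestamp sort mirror the queries above.
function buildQuery(mustClauses, stime, etime) {
  return {
    size: 5000,
    body: {
      query: {
        bool: {
          must: mustClauses,
          filter: { range: { time: { gte: stime, lte: etime } } }
        }
      },
      sort: { "@timestamp": { order: "desc" } }
    }
  };
}

// First query above, rebuilt with the helper (example values):
const conds = buildQuery(
  [{ match: { roleid: 42 } }, { match: { logtype: "chat" } }],
  "2019-01-01", "2019-01-02"
);
console.log(conds.body.query.bool.must.length); // prints: 2
```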

A pitfall we hit: when disk usage reached 90%, Elasticsearch stopped recording logs. Even after expanding the disk, writes still failed. Checking the logs showed the error was on the Logstash side: the Elasticsearch indices had been put into a read-only state, blocking all writes:

[logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})

Official documentation link
Run the following to clear the read-only flag:

curl -XPUT -H 'Content-Type: application/json' http://106.14.46.249:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
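Elasticsearch applies that read-only-allow-delete block when disk usage crosses its flood-stage watermark, so after freeing or adding space you can also tune the disk watermarks to delay the block; the setting names below are real cluster settings, the percentages are only illustrative:

```shell
# Illustrative watermark values; tune to your disks before using.
curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_cluster/settings -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'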