Logstash usage and installing ES and Kibana

Logstash usage

  • 1. Download the plugins
  • 2. Enable character escaping
  • 3. Create index.txt to record the read position (see the sketch after this list)
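
A minimal sketch of items 2 and 3, assuming the /opt/myLogstash/logstash install path used later in this post:

-- item 2: enable character escaping by setting config.support_escapes: true in config/logstash.yml
# echo "config.support_escapes: true" >> /opt/myLogstash/logstash/config/logstash.yml

-- item 3: create the file that will record the read position (referenced later by sincedb_path)
# touch /opt/myLogstash/logstash/index.txt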

Download link for 5.6.16 (extract to install):

https://artifacts.elastic.co/downloads/logstash/logstash-5.6.16.tar.gz

1. To test your Logstash installation, run the most basic Logstash pipeline. For example:

# cd logstash-5.6.16
# bin/logstash -e 'input { stdin { } } output { stdout {} }'


Press Ctrl+D to exit.


Note: if you are not using Filebeat, you can skip steps 2 and 3.

2. To run a test that imports a log file into ES, prepare the following (the sketch after this list fetches and unpacks both):

  • Filebeat (a lightweight log shipper)

https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.16-linux-x86_64.tar.gz

  • A sample log file

https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
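
A quick sketch of fetching and unpacking both (standard wget/tar/gunzip commands, run from any working directory):

# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.16-linux-x86_64.tar.gz
# tar -zxvf filebeat-5.6.16-linux-x86_64.tar.gz

# wget https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
# gunzip logstash-tutorial.log.gz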

3. Edit the Filebeat configuration file

  • filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/file/logstash-tutorial.log   # note: its input_type is log
output.logstash:
  hosts: ["localhost:5044"]
  • Run
# ./filebeat -e -c filebeat.yml -d "publish"

4. Write the Logstash configuration file

  • first-pipeline.conf
input {
    beats {
        host => "192.168.100.73"
        port => 5044
    }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
    stdout { codec => rubydebug }
}

Just place it in the Logstash root directory.

1). Verify that the configuration file is correct
    bin/logstash -f finalEdition.conf --config.test_and_exit
2). Start / reload

(With --config.reload.automatic the configuration file is reloaded whenever it changes, so no restart is needed)

bin/logstash -f finalEdition.conf --config.reload.automatic

or

bin/logstash -f first-pipeline.conf --path.data=/opt/myLogstash/data --config.reload.automatic

 

The console will print the pipeline output.

3). Logstash plugin commands
a. List installed plugins
# ./bin/logstash-plugin list

b. Install a plugin
# ./bin/logstash-plugin install logstash-filter-multiline

c. Update the plugins
# ./bin/logstash-plugin install --no-verify

------------------ If the above does not work, try adding the plugin manually ------------------
-- Plugin download address
https://github.com/logstash-plugins/

# cd /opt/myLogstash/logstash
# mkdir plugin
# vim Gemfile

-- Add one line
gem "logstash-filter-multiline",:path => "/opt/myLogstash/logstash/plugin/logstash-filter-multiline"

-- Configure Logstash by referring to the configuration listings at the bottom

Note:

When installing the plugin it may keep showing Install… for quite a while before finishing on its own (probably related to my network)

-- Check whether the installation succeeded

# ./bin/logstash-plugin list

-- If no errors are reported, it is OK
4). Checklist for deploying Logstash (items 4-7 are sketched right after this list):
  • 1. Download the jdbc plugin (adding the jar packages it needs) and the multiline plugin
  • 2. Modify logstash.yml
  • 3. Make sure every path is correct
  • 4. Create the plugin directory yourself
  • 5. Create the index.txt file yourself
  • 6. Create the custom regex patterns file
  • 7. Create the .conf file used at startup
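
A minimal sketch of items 4-7, assuming the /opt/logstash/logstash install path used in finalEdition.conf below:

# cd /opt/logstash/logstash
-- item 4: directory for manually installed filter plugins
# mkdir plugin
-- item 5: sincedb file that records the read position
# touch index.txt
-- item 6: custom grok patterns (contents shown further below)
# vim patterns
-- item 7: the pipeline configuration used at startup
# vim finalEdition.conf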

5. Parse web logs with the Grok filter plugin

The grok filter plugin lets you parse unstructured log data into structured, queryable data.
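
As a minimal sketch (the Apache-style logstash-tutorial.log from step 2 is assumed here; COMBINEDAPACHELOG is one of the patterns that ship with grok), a web-log line can be parsed like this:

filter {
    grok {
        # parse Apache combined-format access-log lines into named fields
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}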

Multiline merging:
# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.
input {
   beats {
        host => "192.168.100.73"
        port => 5044
    }
}
# The filter part of this file is commented out to indicate that it is
# optional.
filter {
         multiline {
            pattern => "^\d{4}-\d{1,2}-\d{1,2}"
            negate => true
            what => "previous"
        }
 }
output {
        # print to the console
        stdout {}
}
Splitting the message field:
  • Option 1:
input {
   file {
        path => "/opt/myLog/topicres_error_20200410.0.log"
        start_position => "beginning"
        sincedb_path => "/dev/null"
        }
}
filter {
         multiline {
            pattern => "^\d{4}-\d{1,2}-\d{1,2}"
            negate => true
            what => "previous"
        }
        
        mutate {
            split => ["message","|"]
            add_field => {
                "demo1" => "%{[message][0]}"
                "demo2" => "%{[message][1]}"
                "demo3" => "%{[message][2]}"
                "demo4" => "%{[message][3]}"
                "demo5" => "%{[message][4]}"
                "demo6" => "%{[message][5]}"
                "demo7" => "%{[message][6]}"
            }
            
            remove_field => [ "message" ]
        }
 }
output {
    
        stdout {}
}

Final version:

finalEdition.conf:
# start of the pipeline
input {
# read files
   file {
   		# location of the files to read (this reads every log file under these directories)
        path => ["/opt/ambariconf/log/info/*","/opt/ambariconf/log/error/*"]
        
        # read each file from the beginning
        start_position => "beginning"
        
        # with no recorded position, the files would be re-read from scratch every time
        #sincedb_path => "/dev/null"
        
        # record the read position
        sincedb_path => "/opt/logstash/logstash/index.txt"
        }
}

filter {
# Lines matching this regex start a new event; lines that do not match are appended to the previous one until the next match starts a new event
         multiline {
            pattern => "^\d{4}-\d{1,2}-\d{1,2}"
            negate => true
            what => "previous"	
        }

# Parse each merged line with the regexes below and map it into named fields; error and info logs have different formats, so branch on the path
		if "error" in [path] {
             grok {
             # location of the custom regex patterns
               patterns_dir => "/opt/logstash/logstash/patterns"

               match => {
                    "message" => "%{TIMESTAMP_ISO8601:dateTime}\|%{MYIP:myIp}%{MYID:myUserId}%{MYCLASS:myClass}%{MYID:myTranceId}%{MYERRORMESSAGE:myErrorMessage}%{MYERRORINFO:myErrorInfo}"

                    }
           }
		}else {
		     grok {
             # location of the custom regex patterns
               patterns_dir => "/opt/logstash/logstash/patterns"

               match => {
                    "message" => "%{TIMESTAMP_ISO8601:dateTime}\|%{MYIP:myIp}%{MYID:myUserId}%{MYCLASS:myClass}%{MYID:myTranceId}%{MYINFOMESSAGE:myErrorMessage}"

                    }
           } 
		}

# Create a different _type based on the file name; the "[@metadata][target_type]" field itself is not written into ES
		if "topicres_error" in [path] {
                mutate {
                add_field => {"[@metadata][target_type]" => "topicres_error"}
                add_field => {"myType" => "topicres"}
                add_field => {"myStatus" => "error"}}
        }else if "datacube_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "datacube_error"}
        	    add_field => {"myType" => "datacube"}
        	    add_field => {"myStatus" => "error"}}
        }else if "ddopservice_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ddopservice_error"}
        	    add_field => {"myType" => "ddopservice"}
        	    add_field => {"myStatus" => "error"}}
        }else if "ddopws_plat_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ddopws_plat_error"}
        	    add_field => {"myType" => "ddopws_plat"}
        	    add_field => {"myStatus" => "error"}}
        }else if "ddopws_admin_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ddopws_admin_error"}
        	    add_field => {"myType" => "ddopws_admin"}
        	    add_field => {"myStatus" => "error"}}
        }else if "openapimgn_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "openapimgn_error"}
        	    add_field => {"myType" => "openapimgn"}
        	    add_field => {"myStatus" => "error"}}
        }else if "openws_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "openws_error"}
        	    add_field => {"myType" => "openws"}
        	    add_field => {"myStatus" => "error"}}
        }else if "topicmeta_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "topicmeta_error"}
        	    add_field => {"myType" => "topicmeta"}
        	    add_field => {"myStatus" => "error"}}
        }else if "openapiacc_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "openapiacc_error"}
        	    add_field => {"myType" => "openapiacc"}
        	    add_field => {"myStatus" => "error"}}
        }else if "tqlexecutor_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "tqlexecutor_error"}
        	    add_field => {"myType" => "tqlexecutor"}
        	    add_field => {"myStatus" => "error"}} 
        }else if "ntimes-dal_error" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ntimes-dal_error"}
        	    add_field => {"myType" => "ntimes-dal"}
        	    add_field => {"myStatus" => "error"}} 
        }else if "topicres_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "topicres_info"}
        	    add_field => {"myType" => "topicres"}
        	    add_field => {"myStatus" => "info"}}
        }else if "datacube_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "datacube_info"}
        	    add_field => {"myType" => "datacube"}
        	    add_field => {"myStatus" => "info"}}
        }else if "ddopservice_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ddopservice_info"}
        	    add_field => {"myType" => "ddopservice"}
        	    add_field => {"myStatus" => "info"}}
        }else if "ddopws_plat_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ddopws_plat_info"}
        	    add_field => {"myType" => "ddopws_plat"}
        	    add_field => {"myStatus" => "info"}}
        }else if "ddopws_admin_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ddopws_admin_info"}
        	    add_field => {"myType" => "ddopws_admin"}
        	    add_field => {"myStatus" => "info"}}
        }else if "openapimgn_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "openapimgn_info"}
        	    add_field => {"myType" => "openapimgn"}
        	    add_field => {"myStatus" => "info"}}
        }else if "openws_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "openws_info"}
        	    add_field => {"myType" => "openws"}
        	    add_field => {"myStatus" => "info"}}
        }else if "topicmeta_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "topicmeta_info"}
        	    add_field => {"myType" => "topicmeta"}
        	    add_field => {"myStatus" => "info"}}
        }else if "openapiacc_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "openapiacc_info"}
        	    add_field => {"myType" => "openapiacc"}
        	    add_field => {"myStatus" => "info"}}
        }else if "tqlexecutor_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "tqlexecutor_info"}
        	    add_field => {"myType" => "tqlexecutor"}
        	    add_field => {"myStatus" => "info"}}
        }else if "ntimes-dal_info" in [path] {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "ntimes-dal_info"}
        	    add_field => {"myType" => "ntimes-dal"}
        	    add_field => {"myStatus" => "info"}}
        }else {
        	    mutate {
        	    add_field => {"[@metadata][target_type]" => "unknown"}
        	    add_field => {"myType" => "unknown"}
        	    add_field => {"myStatus" => "unknown"}}
        }

# The values captured above contain extra "|" and "\n"; strip them below
mutate {
		# remove the "|" from the captured strings
		gsub => ["myIp", "\|", ""]
		gsub => ["myUserId", "\|", ""]
		gsub => ["myClass", "\|", ""]
		gsub => ["myTranceId", "\|", ""]
		gsub => ["myErrorMessage", "\n", ""]
		gsub => ["myErrorInfo", "\n\t", "<br>"]
		remove_field => [ "message" ]
        }
    
# This makes @timestamp the time the log entry was actually generated, so even if the server goes down, as long as the dates in the logs are correct the data will not get mixed up during conversion.
date {
        match => [ "dateTime", "yyyy-MM-dd HH:mm:ss,SSS"]
        target => "@timestamp"
    } 
# @timestamp above is in UTC, eight hours behind Beijing time; add the offset here
ruby {
	code => "
       event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)
       event.set('@timestamp',event.get('timestamp'))
       "
       remove_field => ["timestamp"]
   }

}
output {
# write the filtered and structured data into ES
        elasticsearch {
        hosts => [ "192.168.100.73:9200" ]
        # the date here is derived from @timestamp
        index => "log-%{+YYYYMMdd}"
        document_type => "%{[@metadata][target_type]}"
    }
    # only error-type events are written to the database
	if [myStatus] == "error"{
		jdbc {
    	driver_jar_path => "/opt/logstash/logstash/vendor/jar/jdbc/mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar"
   		driver_class => "com.mysql.jdbc.Driver"
    	connection_string => "jdbc:mysql://xxx.xxx.xxx.xx:3306/<database>?user=root&password=password&useSSL=false"
    	statement => [ "INSERT INTO es_log_info (myIp, myUserId,myClass,myType,myStatus,myTranceId,dateTime,myErrorMessage,myErrorInfo) VALUES(?, ?,?,?,?,?,?,?,?)", "myIp", "myUserId","myClass","myType","myStatus","myTranceId","dateTime","myErrorMessage","myErrorInfo" ]
  	}
}
}

Why these fields were added:

myType: the data type, for easier display

dateTime: because the front end cannot recognize @timestamp

myStatus: to distinguish error from info

Custom regex patterns:

  • /opt/myLogstash/logstash/patterns

In use:

The regexes below still match correctly even if one of the fields is missing (a hypothetical example line follows the patterns)

MYID ([0-9a-zA-Z]*)?(\|{1})
MYCLASS ([a-zA-Z]*.[a-zA-Z]*)?(\|{1})
MYERRORMESSAGE (\{).*(\}\n)
MYINFOMESSAGE (\{).*(\})
MYERRORINFO .*
MYIP (%{IPV4})?([\|]{1})
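
For reference, a purely hypothetical log line in the shape these patterns expect ("|"-separated fields, with the error detail continuing on the lines that follow) might look roughly like:

2020-04-24 09:57:35,472|192.168.100.160|u1001|TopicService.getTopic|0758ce6b7d20af0c|{failed to load topic}
java.lang.NullPointerException ...
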
Custom files:
1: logstash

1). Filter plugins (directory)

/opt/myLogstash/logstash/plugin

2). Read-position record (file)

/opt/myLogstash/logstash/index.txt

3). Data-import pipeline configuration (file)

/opt/myLogstash/logstash/first-pipeline.conf

4). Custom regex patterns (file)

/opt/myLogstash/logstash/patterns

https://www.elastic.co/guide/en/logstash/5.6/plugins-filters-grok.html

Installing ES on Linux

Download ES

https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.16.tar.gz

  • Extract to install

1). Edit the configuration

# cd /opt/myEs/elasticsearch/config

# vim elasticsearch.yml

Add two lines of configuration:

# enable cross-origin (CORS) support; default is false
http.cors.enabled: true

# domains allowed for cross-origin access
http.cors.allow-origin: "*"

2). ES cannot be run directly as root, so create a new user

# adduser fei

# passwd fei

-- confirm the password

-- grant the new user ownership of the ES directory
# chown -R fei elasticsearch/

-- switch to the new user
# su fei

-- to exit the current user later
# exit

3). Run ES in the background

# ./bin/elasticsearch -d

or

# nohup ./bin/elasticsearch &

Installing Kibana on Linux

Download Kibana

https://artifacts.elastic.co/downloads/kibana/kibana-5.6.16-linux-x86_64.tar.gz

  • Extract to install

1). Extract

# tar -zxvf kibana-5.6.16-linux-x86_64.tar.gz

# mv kibana-5.6.16-linux-x86_64 kibana

2). Edit the configuration

# cd /opt/myKibana/kibana/config

# vim kibana.yml

To allow external access, change these settings:

# default is 5601
server.port: 5601

# default is localhost
server.host: "192.168.100.73"

3). Start Kibana

# ./bin/kibana
Kibana queries
Format: <REST verb> / <index> / <type> / <ID>
0). Check cluster health
GET /_cat/health?v
1). Check node status
GET /_cat/nodes?v
2). List all indices
GET /_cat/indices?v
3). Query a single document
-- GET                     the request method
-- errorlog-20200424       the index (roughly a database)
-- logs                    the type (roughly a table)
-- AXGp54W1g0eJWD5SA-5C    the document id
-- pretty                  pretty-print the JSON

GET /errorlog-20200424/logs/AXGp54W1g0eJWD5SA-5C?pretty
4). Bulk-insert documents
-- _bulk  bulk operations (fewer connections)

POST /customer/external/_bulk?pretty
{"index":{"_id":"1"}}
{"name": "John Doe" }
{"index":{"_id":"2"}}
{"name": "Jane Doe" }
5). Query everything
GET /bank/_search?q=*&sort=account_number:asc&pretty

Equivalent to:

GET /bank/_search
{
  "query": { "match_all": {} },
  "sort": [
    { "account_number": "asc" }
  ]
}

6). Query everything but return only selected fields
GET /bank/_search
{
  "query": { "match_all": {} },
  "_source": ["account_number", "balance"]
}
7). Pagination
-- from  the offset to start from (default 0)
-- size  the number of results per page (default 10)

GET /test/type1/_search
{
  "query": { "match_all": {} },
  "from" : 4,
  "size": 2
}
8). Conditions
a. All conditions must match
-- bool      a compound boolean query
	-- must      every clause must be true to match
	-- must_not  matches only when every clause is false
	-- should    matches if any one clause is true
GET /errorlog-20200424/logs/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "ipaddr": "192.168.100.160" } },
        { "match": { "id": "0758ce6b7d20af0c" } },
        { "match": {"@timestamp" :"2020-04-24T01:57:35.472Z"}},
        { "match": {"tranceId" : "123"}}
      ]
    }
  }
}
b. Range condition on a value or time (1)
 GET /errorlog-20200424/logs/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "ipaddr": "192.168.100.160" } },
        { "match": { "id": "0758ce6b7d20af0c" } },
        { "match": {"@timestamp" :"2020-04-24T01:57:35.472Z"}},
        {"range": {
          "@timestamp": {
            "gte": "2020-04-24T01:56:35.472Z",
            "lte": "2020-04-24T01:58:35.472Z"
          }
        }}
      ]
    }
  }
}
b. Range condition on a value or time (2)
GET /bank/_search
{
  "query": {
    "bool": {
      "must": { "match_all": {} },
      "filter": {
        "range": {
          "balance": {
            "gte": 20000,
            "lte": 30000
          }
        }
      }
    }
  }
}
9). Delete an index
DELETE /errorlog-20200423
10). Query a specific index
GET /errorlog-20200423/_search
11). Query multiple indices
# kibana
GET /errorlog-20200413,errorlog-20200412/_search

# postman
http://192.168.100.73:9200/errorlog-20200413,errorlog-20200412/_search
12). Query multiple indices while avoiding nonexistent ones
# Option 1: run one extra query first to see which indices do not exist (example below)
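# (an example of option 1, assuming the _cat indices API with an index pattern here;
#  it lists only the errorlog-* indices that actually exist)
GET /_cat/indices/errorlog-*?v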

# Option 2: use wildcards (no error even if an index does not exist)

http://192.168.100.73:9200/error*-20200413,error*-20200415/_search

13). Sorting
GET /errorlog-20200409/_search
{
  "query": { "match_all": {} },
  "_source": ["myIp", "myId","@timestamp"],
  "sort": {
		"@timestamp": {
			"order": "desc"
		}
	}
}
14). Query indices across a time range

Approach:

Write the start and end dates as wildcards in the index names, then use a range filter on @timestamp to keep only the documents between the start time and the end time.

GET /errorlog-2019*,errorlog-2020*/_search
{
  "query": {
    "bool": {
     "filter": {
       "range": {
         "@timestamp": {
           "gte": "2019-04-24T01:56:35.472Z",
           "lte": "2020-04-24T01:58:35.472Z"
         }
       }
     }  
    }
  }
}

GET /errorlog-2020*/_search
{
  "query": {
    "bool": {
     "filter": {
       "range": {
         "@timestamp": {
           "gte": "2020-04-12T",
           "lte": "2020-04-24T"
         }
       }
     }
      
    }
  }
}

15). Enable fielddata on a field
-- customer  (the index)
-- external  (the type)
-- id        (the field)

PUT /customer/_mapping/external
{
  "properties": {
    "id": { 
      "type":     "text",
      "fielddata": true
    }
  }
}
16). Set the maximum number of results an ES query can return

Note: the maximum value Elasticsearch supports is 2^31 - 1, i.e. 2147483647.

-- update the setting
PUT http://192.168.100.73:9200/_all/_settings
{
    "index": {
        "max_result_window": 2147483647
    }
}

-- view the setting
GET http://192.168.100.73:9200/_all/_settings

Configuration listings:

logstash

# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
 config.support_escapes: true
#
# ------------ Module Settings ---------------
# Define modules here.  Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and 
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of 
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 250mb
#
# queue.page_capacity: 250mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false

# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb

# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
 http.host: "192.168.100.73"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []

Elasticsearch

  • The maximum number of results an Elasticsearch query can return is 2^31 - 1, i.e. 2147483647.
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.100.73
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true


# enable cross-origin (CORS) support; default is false
http.cors.enabled: true

# domains allowed for cross-origin access
http.cors.allow-origin: "*"
1). Errors after binding ES to an external IP

Error message:

ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
  • The fix is to change the following system settings

[1]:

vim /etc/security/limits.conf
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
*               soft    nofile            65536
#*               hard    rss             10000
*               hard    nofile            65536
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

[2]:

vim /etc/sysctl.conf
# added 2020-04-26 so that ES can be accessed on an external IP
vm.max_map_count=655360
# reload and verify the setting
# sysctl -p

kibana

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.100.73"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
#elasticsearch.url: "http://localhost:9200"
elasticsearch.url: "http://192.168.100.73:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]


# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"

Configuring Logstash with MySQL

1). Download the plugin and extract it
# wget https://github.com/theangryangel/logstash-output-jdbc/archive/v5.0.0.tar.gz

Note:

When using the jdbc plugin, Logstash failed to start with an error saying it could not find /opt/myLogstash/logstash/plugin/logstash-output-jdbc-5.0.0/vendor/jar-dependencies/runtime-jars/*.jar

It turns out that from 3.x on the jdbc plugin no longer ships the vendor directory and its jars; the fix was to copy the vendor directory and its jars from the 2.x release into the 5.x plugin, after which it ran normally (see the sketch below).

Judging by the runtime log output, apparently only one SLF4J jar is actually used.
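
A minimal sketch of that workaround (version numbers and directory names are assumed; the 2.x tarball is only needed as the source of the vendor jars):

# cd /opt/myLogstash/logstash/plugin
# tar -zxvf v5.0.0.tar.gz
-- copy the vendor directory (with its runtime jars) out of an extracted 2.x release
# cp -r logstash-output-jdbc-2.x/vendor logstash-output-jdbc-5.0.0/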

2). Edit the file
# cd /opt/myLogstash/logstash
# vim Gemfile

-- Add one line
gem "logstash-filter-multiline",:path => "/opt/myLogstash/logstash/plugin/logstash-filter-multiline"

-- Update the plugins
# ./bin/logstash-plugin install --no-verify

Note:

When updating the plugins it may keep showing Install… for quite a while before finishing on its own (probably related to my network)

3). Check whether the installation succeeded
# ./bin/logstash-plugin list

-- If no errors are reported, it is OK
4). Download the MySQL driver

Where to put the jar:

1. Under the Logstash directory, create the directory /vendor/jar/jdbc and put mysql-connector-java-5.1.46-bin.jar inside it; it will be found there by default, so the driver_jar_path parameter is not needed (see the sketch after this step).

2. Put it in any directory, but then set the driver_jar_path parameter so that mysql-connector-java-5.1.46-bin.jar can be found.

# wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz

-- Just extract it after downloading
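
A minimal sketch of option 1, using the 5.1.48 connector downloaded above (directory names assumed):

# cd /opt/myLogstash/logstash
# mkdir -p vendor/jar/jdbc
# tar -zxvf /path/to/mysql-connector-java-5.1.48.tar.gz
# cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar vendor/jar/jdbc/
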
5). Logstash output configuration

Table creation statement:

CREATE TABLE `es_log_info`(
id BIGINT PRIMARY KEY AUTO_INCREMENT,
myIp VARCHAR(32) DEFAULT NULL COMMENT 'ip',
myUserId VARCHAR(32) DEFAULT NULL COMMENT 'user id',
myClass VARCHAR(200) DEFAULT NULL COMMENT 'class / location info',
myType VARCHAR(32) DEFAULT NULL COMMENT 'service type',
myStatus VARCHAR(32) DEFAULT NULL COMMENT 'error or info',
myTranceId VARCHAR(32) DEFAULT NULL COMMENT 'tranceId',
`dateTime` VARCHAR(32) DEFAULT NULL COMMENT 'log creation time',
myErrorMessage TEXT DEFAULT NULL COMMENT 'error message',
myErrorInfo LONGTEXT DEFAULT NULL COMMENT 'error detail'
);

Output:

# only write to the database when myStatus is error
if [myStatus] == "error"{
jdbc {
    driver_jar_path => "/opt/myLogstash/logstash/vendor/jar/jdbc/mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar"
    driver_class => "com.mysql.jdbc.Driver"
    connection_string => "jdbc:mysql://xxx.xxx.xxx.xx:3306/ddop_8a_dev_202?user=root&password=password&useSSL=false"
    statement => [ "INSERT INTO es_log_info (myIp, myUserId,myClass,myType,myStatus,myTranceId,dateTime,myErrorMessage,myErrorInfo) VALUES(?, ?,?,?,?,?,?,?,?)", "myIp", "myUserId","myClass","myType","myStatus","myTranceId","dateTime","myErrorMessage","myErrorInfo" ]
  }
}