ELK Configuration

1. Kafka

Kafka command path:

/ssd1/kafka/bin

Kafka configuration file:

egrep -v "^#|^$" server.properties

broker.id=1 ### must be globally unique

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/ssd1/kafka/data #### data directory

num.partitions=3

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.retention.hours=72

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=xxxxxxxxxxxxxxxxxxxxx ## ZooKeeper address

zookeeper.connection.timeout.ms=18000

group.initial.rebalance.delay.ms=0

Common Kafka commands:

Topic and consumer group used by ELK:

Topic used by new_gc: logs

Consumer group used by new_gc: log

Starting and stopping Kafka:

./kafka-server-start.sh -daemon /app/tools/kafka/config/server.properties

./kafka-server-stop.sh

Kafka restart risks:

If a large backlog of messages has accumulated, the broker may fail to come back up.

Check the consumption status before confirming that a restart is needed.
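The lag check above comes down to comparing each partition's log-end offset with the group's committed offset (which kafka-consumer-groups.sh reports directly). A minimal sketch of that arithmetic, using hypothetical offset numbers for the three partitions of the "logs" topic:

```python
def total_lag(end_offsets, committed_offsets):
    """Sum per-partition lag: log-end offset minus committed consumer offset.

    Both arguments map partition number -> offset. A partition with no
    committed offset is treated as fully unconsumed from offset 0.
    """
    return sum(end - committed_offsets.get(p, 0)
               for p, end in end_offsets.items())

# Hypothetical offsets (partition -> offset):
lag = total_lag({0: 1200, 1: 1180, 2: 1210},   # log-end offsets
                {0: 1150, 1: 1180, 2: 1200})   # committed offsets
# lag == 60
```

A large total lag before a restart suggests letting the `log` consumer group drain first.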

2. Logstash

Logstash start and stop commands:

Start command path:

cd /usr/share/logstash/bin/

Start arguments:

./logstash -f /etc/logstash/conf.d/logstash.conf &

The default port is 9600:

ss -lntup|grep 9600

tcp 0 50 127.0.0.1:9600 *:* users:(("java",71718,82))

Stop:

kill `ps -ef|grep logstash|grep -v grep|awk '{print $2}'`

Logstash restart risks:

Logstash on all three machines consumes Kafka messages concurrently, so briefly restarting a single Logstash instance does not affect log collection.
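Before and after restarting an instance, the monitoring API on port 9600 can confirm the pipeline is processing events again; `/_node/stats` is part of Logstash's monitoring API. A minimal sketch (host and port are assumptions matching the config above):

```python
import json
import urllib.request

def node_stats_url(host="127.0.0.1", port=9600):
    # Logstash exposes a monitoring API on its HTTP port (9600 by default)
    return f"http://{host}:{port}/_node/stats"

def fetch_node_stats(host="127.0.0.1", port=9600):
    # Returns the parsed JSON stats document (events in/out, pipeline info)
    with urllib.request.urlopen(node_stats_url(host, port)) as resp:
        return json.loads(resp.read())
```

Comparing the `events` counters across two calls shows whether events are still flowing.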

The Logstash configuration file explained:

cat /etc/logstash/conf.d/logstash.conf

input { # input section

kafka{

# bootstrap_servers => "xxxxxxxxx:9092"

bootstrap_servers => "xxxxxxxxxxxxxxxxxxxxxx" # Kafka addresses

client_id => "log"

group_id => "log" # consumer group

auto_offset_reset => "latest"

consumer_threads => "1"

decorate_events => true

topics => ["logs"] ## topic

}

#Logstash can also collect logs directly

# file{

# path => "/var/log/linwei_all/*.log"

# type => "log"

# }

}

filter { # filtering/cleaning section

grok { # grok field matching

break_on_match => true # stop at the first rule that matches

# rules

match =>

{ "message" => "%{LEN}%{IP:remote_addr} -\[%{DATA:time_local}\] -\[%{USER:host}\] \\\"%{NOTSPACE:method} %{NOTSPACE:request} %{NOTSPACE} %{INT:status} %{INT:body_bytes_sent} \\\"%{NOTSPACE:http_referer}%{LAN:http_user_agent} \\\"%{IP:http_x_forwarded_for}\\\" request_time\[%{DATA:request_time}\] upstream_response_time\[%{DATA:upstream_response_time}\] upstream_addr\[%{DATA:upstream_addr}\] logId\[%{INT:http_x_logid}\] imid\[%{DATA:http_imid}\] statistic\[%{DATA:statistic}\] Bfe_logid\[%{INT:http_bfe_logid}\] CLIENTIP\[%{IP:http_clientip}\] device_id\[%{DATA:cookie_device_id}\] X-Deviceid\[%{DATA:http_x_deviceid}\] route\[%{DATA:http_route}\] product\[%{DATA:http_product}\] subsys\[%{DATA:http_subsys}\]%{GREEDYDATA}"}

# "message","%{LEN}%{TIME:remote_addr} \[%{DATA:warn}\] %{NOTSPACE:lin} %{NOTSPACE:ln} \[%{DATA:ls}\] utility.lua:%{INT:mm}: log_warn(%{DATA:lnn}): logid: \[%{DATA:vv}\] found data in cache cache_key:%{BASE16NUM:kss}, client: %{IP:hass}, server:%{SHEN:kkss}request: %{SHHH:method} %{NOTSPACE:request} HTTP/1.1\\\", host: \\%{SJSNS:kksssiin}%{GREEDYDATA}"

}

#The Weiyun push stream also contains err.log lines; their format differs from access.log, so drop them

if ([message] =~".*[#].*") {

drop {}

}

#arithmetic on fields

# ruby {

# code => "event.set('request_time', event.get('request_time').to_i * 1000)"

# }

# ruby {

# code => "event.set('upstream_response_time', event.get('upstream_response_time').to_i * 1000)"

# }

#type conversion: after a grok match, all fields are strings by default, so convert field types here

mutate {

convert => {"status" => "integer"}

convert => {"request_time" => "float"}

convert => {"upstream_response_time" => "float"}

}

}

output { # output section

elasticsearch {

hosts => ["http://xxxxxx:9200/","http://xxxxxxxx:9200/","http://xxxxxxxxxxxx:9200/"] # ES addresses

user => "" # ES username

password => "" # ES password

# hosts => "http://kafkahost:9200"

index => "" # ES index to write to

timeout => 300

}

}
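The grok match plus the mutate/convert step above can be sketched in miniature with Python's re module. The sample line and the shortened pattern below are hypothetical; the real pattern captures many more fields:

```python
import re

# Hypothetical, heavily simplified access-log line:
line = '1.2.3.4 -[15/Jan/2024:10:30:45] status 200 512 request_time[0.032]'

# Named groups play the role of grok's %{IP:remote_addr}-style captures.
pattern = re.compile(
    r'(?P<remote_addr>\d+\.\d+\.\d+\.\d+) '
    r'-\[(?P<time_local>[^\]]+)\] '
    r'status (?P<status>\d+) (?P<body_bytes_sent>\d+) '
    r'request_time\[(?P<request_time>[^\]]+)\]'
)

fields = pattern.match(line).groupdict()

# As with the mutate/convert step, every capture is a string until converted:
fields["status"] = int(fields["status"])
fields["request_time"] = float(fields["request_time"])
```

This is why the config converts "status" to integer and the timing fields to float before sending them to ES, where numeric types enable range queries and aggregations.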

Custom grok regex patterns for Logstash:

cat /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns

USERNAME [a-zA-Z0-9._-]+

USER %{USERNAME}

EMAILLOCALPART [a-zA-Z][a-zA-Z0-9_.+-=:]+

EMAILADDRESS %{EMAILLOCALPART}@%{HOSTNAME}

INT (?:[+-]?(?:[0-9]+))

BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))

NUMBER (?:%{BASE10NUM})

BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))

BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b

POSINT \b(?:[1-9][0-9]*)\b

NONNEGINT \b(?:[0-9]+)\b

WORD \b\w+\b

NOTSPACE \S+

SPACE \s*

DATA .*?

GREEDYDATA .*

QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))

UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}

# URN, allowing use of RFC 2141 section 2.3 reserved characters

URN urn:[0-9A-Za-z][0-9A-Za-z-]{0,31}:(?:%[0-9a-fA-F]{2}|[0-9A-Za-z()+,.:=@;$_!*'/?#-])+

# Networking

MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})

CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})

WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})

COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})

IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?

IPV4 (?<![0-9])(?:(?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5]))(?![0-9])

IP (?:%{IPV6}|%{IPV4})

HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)

IPORHOST (?:%{IP}|%{HOSTNAME})

HOSTPORT %{IPORHOST}:%{POSINT}

# paths

PATH (?:%{UNIXPATH}|%{WINPATH})

UNIXPATH (/([\w_%!$@:.,+~-]+|\\.)*)+

TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))

WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+

URIPROTO [A-Za-z]([A-Za-z0-9+\-.]+)+

URIHOST %{IPORHOST}(?::%{POSINT:port})?

# uripath comes loosely from RFC1738, but mostly from what Firefox

# doesn't turn into %XX

URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%&_\-]*)+

#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?

URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]<>]*

URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?

URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?

# Months: January, Feb, 3, 03, 12, December

MONTH \b(?:[Jj]an(?:uary|uar)?|[Ff]eb(?:ruary|ruar)?|[Mm](?:a|ä)?r(?:ch|z)?|[Aa]pr(?:il)?|[Mm]a(?:y|i)?|[Jj]un(?:e|i)?|[Jj]ul(?:y)?|[Aa]ug(?:ust)?|[Ss]ep(?:tember)?|[Oo](?:c|k)?t(?:ober)?|[Nn]ov(?:ember)?|[Dd]e(?:c|z)(?:ember)?)\b

MONTHNUM (?:0?[1-9]|1[0-2])

MONTHNUM2 (?:0[1-9]|1[0-2])

MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])

# Days: Monday, Tue, Thu, etc...

DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)

# Years?

YEAR (?>\d\d){1,2}

HOUR (?:2[0123]|[01]?[0-9])

MINUTE (?:[0-5][0-9])

# '60' is a leap second in most time standards and thus is valid.

SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)

TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])

# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)

DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}

DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}

ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))

ISO8601_SECOND (?:%{SECOND}|60)

TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?

DATE %{DATE_US}|%{DATE_EU}

DATESTAMP %{DATE}[- ]%{TIME}

TZ (?:[APMCE][SD]T|UTC)

DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}

DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}

DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}

DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}

# Syslog Dates: Month Day HH:MM:SS

SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}

PROG [\x21-\x5a\x5c\x5e-\x7e]+

SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?

SYSLOGHOST %{IPORHOST}

SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>

HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}

# Shortcuts

QS %{QUOTEDSTRING}

# Log formats

SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:

# Log Levels

LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)

#my log

LEN .*message":"

LAN \\\".........*?\\\"

LV .*\\\"

TIME ..../../.. ..:..:..

SHEN .*_,

SHHH \\\"\S+

SJSNS .*com\\\"

SHSBBX \\\".*\\\"
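The custom patterns under "#my log" use literal dots as single-character wildcards; for example, TIME `..../../.. ..:..:..` matches any "YYYY/MM/DD HH:MM:SS"-shaped string. A stricter regex equivalent, sketched in Python (the sample strings are hypothetical):

```python
import re

# Stricter equivalent of the custom grok pattern
#   TIME ..../../.. ..:..:..
# where each "." in the original matches any single character.
TIME = re.compile(r"\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}")

match = TIME.search("2023/11/02 14:05:59 [warn] something happened")
```

Tightening wildcards like this reduces false matches at the cost of rejecting log lines with unexpected timestamp formats.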

3. Elasticsearch

  • Elasticsearch user information:

Username and password:

user: "xxxxx" # ES username

password: "xxxxxxx" # ES password

  • ES paths:

Data path: /ssd2/esdata

ES log path: /home/disk1/elklogs/eslogs

  • Starting ES:

service elasticsearch start

chkconfig elasticsearch on

  • ES restart risks:

    • Part of the ES indices and data is held in memory, so this state is lost on restart

  • ES configuration

cluster.name: xxxxxxxx

node.name: xxxxxxxxx

path.data: /ssd2/esdata

path.logs: /ssd2/eslogs

network.host: xxxxxxxx

http.port: 9200

discovery.seed_hosts: ["xxxxx", "xxxxxxx", "xxxxxxxx"]

cluster.initial_master_nodes: ["xxxxxxxx", "xxxxxxxx", "xxxxxxxx"]

#X-Pack configuration

xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true

xpack.security.transport.ssl.verification_mode: certificate

xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12

xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
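Given the restart risk noted above, it is prudent to check cluster health before and after restarting a node. A minimal sketch using the standard `_cluster/health` API (host, user, and password are placeholders matching the config):

```python
import base64
import json
import urllib.request

def cluster_health_url(host, port=9200):
    # _cluster/health reports status (green/yellow/red), node count,
    # and unassigned-shard counts
    return f"http://{host}:{port}/_cluster/health"

def check_health(host, user, password, port=9200):
    # With xpack.security.enabled: true, the API requires basic auth
    req = urllib.request.Request(cluster_health_url(host, port))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Waiting for the status to return to "green" after a restart confirms all shards have been reassigned.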

4. Kibana

  • Machine hosting Kibana:

xxxxxxx/25 internal address; the machine has no public IP

Default port: 5601

  • Paths:

Kibana configuration file: /etc/kibana/kibana.yml

Kibana start script: /etc/init.d/kibana

  • Start commands:

service kibana start

chkconfig kibana on

  • Restart risks:

    • Data initialization errors or failure to connect to the ES cluster may occur

  • Kibana configuration file:

cat /etc/kibana/kibana.yml

server.port: 8089

server.host: "xxxxxxx"

elasticsearch.hosts: ["http://xxxxxxx:9200/","http://xxxxxxx:9200/","http://xxxxxxx:9200/"]

elasticsearch.username: "xxxxxx"

elasticsearch.password: "xxxxxxxx"

#elasticsearch.hosts: [ "http://xxxxxxx:9200" ]

kibana.index: ".kibana"

logging.dest: /var/log/kibana/kibana.log

logging.quiet: false

i18n.locale: "zh-CN"
