Setting Up a Minimal ELK Environment

This post sets up the simplest, entry-level ELK stack. Since it is only for introductory testing, all components run on the same machine.

Elasticsearch: elasticsearch-6.3.0.tar from the official site (see also the Elasticsearch official documentation)
Kibana: kibana-6.3.0 Linux 64-bit download from the official site (see also the Kibana official documentation)
Logstash: logstash-6.3.0.tar from the official site (see also the Logstash official documentation)

After downloading, upload the archives to the virtual machine and extract them (copying the links into a download manager such as Thunder/Xunlei is much faster).

First, make sure the JDK is version 1.8 or later.
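A quick way to check (the version string shown is only an illustration; this machine happens to run 1.8.0_191, as the Elasticsearch logs below confirm):

java -version
# expected output shape:
# java version "1.8.0_191"
# Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
# Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)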

Next, configure Elasticsearch in elasticsearch-6.3.0/config/elasticsearch.yml:

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0           ## server IP (0.0.0.0 binds to all interfaces on this machine)
#
# Set a custom port for HTTP:
#
http.port: 9200                 ## HTTP service port
#
# For more information, consult the network module documentation.
#

Save the file and start Elasticsearch. Because it is being run as the root user, startup fails with an error:

[root@flink1 ELK]# ./elasticsearch-6.3.0/bin/elasticsearch 
[2019-11-20T05:47:06,500][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:140) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:127) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.3.0.jar:6.3.0]
	at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.3.0.jar:6.3.0]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) ~[elasticsearch-6.3.0.jar:6.3.0]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:104) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:171) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:326) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-6.3.0.jar:6.3.0]
	... 6 more

Therefore a dedicated group and user must be created to run Elasticsearch; a minimal example of the required commands is sketched below.
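Run these as root (the group/user name elasticsearch and the install path /opt/ELK are assumptions based on the paths that appear in the logs below):

groupadd elasticsearch
useradd -g elasticsearch elasticsearch
chown -R elasticsearch:elasticsearch /opt/ELK/elasticsearch-6.3.0
su - elasticsearch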

Starting Elasticsearch again with the newly created elasticsearch user produces another error:

[elasticsearch@flink1 ELK]$ ./elasticsearch-6.3.0/bin/elasticsearch
[2019-11-20T05:56:51,500][INFO ][o.e.n.Node               ] [] initializing ...
[2019-11-20T05:56:51,621][INFO ][o.e.e.NodeEnvironment    ] [gynAdXC] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [31.7gb], net total_space [36.9gb], types [rootfs]
[2019-11-20T05:56:51,621][INFO ][o.e.e.NodeEnvironment    ] [gynAdXC] heap size [1015.6mb], compressed ordinary object pointers [true]
[2019-11-20T05:56:51,622][INFO ][o.e.n.Node               ] [gynAdXC] node name derived from node ID [gynAdXC0RKWep3-1m0VnNg]; set [node.name] to override
[2019-11-20T05:56:51,622][INFO ][o.e.n.Node               ] [gynAdXC] version[6.3.0], pid[23393], build[default/tar/424e937/2018-06-11T23:38:03.357887Z], OS[Linux/3.10.0-1062.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_191/25.191-b12]
[2019-11-20T05:56:51,622][INFO ][o.e.n.Node               ] [gynAdXC] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.eWH4ZVvn, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/opt/ELK/elasticsearch-6.3.0, -Des.path.conf=/opt/ELK/elasticsearch-6.3.0/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [aggs-matrix-stats]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [analysis-common]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [ingest-common]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [lang-expression]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [lang-mustache]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [lang-painless]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [mapper-extras]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [parent-join]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [percolator]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [rank-eval]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [reindex]
[2019-11-20T05:56:57,492][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [repository-url]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [transport-netty4]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [tribe]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-core]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-deprecation]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-graph]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-logstash]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-ml]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-monitoring]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-rollup]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-security]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-sql]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-upgrade]
[2019-11-20T05:56:57,493][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-watcher]
[2019-11-20T05:56:57,494][INFO ][o.e.p.PluginsService     ] [gynAdXC] no plugins loaded
[2019-11-20T05:57:03,810][INFO ][o.e.x.s.a.s.FileRolesStore] [gynAdXC] parsed [0] roles from file [/opt/ELK/elasticsearch-6.3.0/config/roles.yml]
[2019-11-20T05:57:04,758][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/23441] [Main.cc@109] controller (64 bit): Version 6.3.0 (Build 0f0a34c67965d7) Copyright (c) 2018 Elasticsearch BV
[2019-11-20T05:57:06,078][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-11-20T05:57:06,325][INFO ][o.e.d.DiscoveryModule    ] [gynAdXC] using discovery type [zen]
[2019-11-20T05:57:07,403][INFO ][o.e.n.Node               ] [gynAdXC] initialized
[2019-11-20T05:57:07,403][INFO ][o.e.n.Node               ] [gynAdXC] starting ...
[2019-11-20T05:57:08,026][INFO ][o.e.t.TransportService   ] [gynAdXC] publish_address {172.21.89.128:9300}, bound_addresses {[::]:9300}
[2019-11-20T05:57:08,047][INFO ][o.e.b.BootstrapChecks    ] [gynAdXC] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-11-20T05:57:08,082][INFO ][o.e.n.Node               ] [gynAdXC] stopping ...
[2019-11-20T05:57:08,199][INFO ][o.e.n.Node               ] [gynAdXC] stopped
[2019-11-20T05:57:08,199][INFO ][o.e.n.Node               ] [gynAdXC] closing ...
[2019-11-20T05:57:08,220][INFO ][o.e.n.Node               ] [gynAdXC] closed
[2019-11-20T05:57:08,224][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started

There are two errors here. First, the maximum number of file descriptors for the Elasticsearch process must be at least 65536, but the system currently allows only 4096. Second, the maximum number of virtual memory areas (vm.max_map_count) is 65530, while at least 262144 is required. Both settings need to be adjusted.

First, check the current hard and soft file descriptor limits:

[elasticsearch@flink1 ELK]$ ulimit -Hn
4096
[elasticsearch@flink1 ELK]$ ulimit -Sn
1024

Then edit the configuration file /etc/security/limits.conf:
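A typical addition at the end of the file (assuming the dedicated user is named elasticsearch; use * instead of the user name to apply the limits to all users):

elasticsearch soft nofile 65536
elasticsearch hard nofile 65536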

After saving, log out and log back in for the new limits to take effect.

Next, to address the second error, edit /etc/sysctl.conf and add the setting vm.max_map_count=262144:
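The line to append (the value comes straight from the bootstrap check message):

vm.max_map_count=262144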


After saving, run sysctl -p to apply the change.

Start Elasticsearch again:

[elasticsearch@flink1 ELK]$ ./elasticsearch-6.3.0/bin/elasticsearch
[2019-11-20T06:24:15,158][INFO ][o.e.n.Node               ] [] initializing ...
[2019-11-20T06:24:15,281][INFO ][o.e.e.NodeEnvironment    ] [gynAdXC] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [31.6gb], net total_space [36.9gb], types [rootfs]
[2019-11-20T06:24:15,281][INFO ][o.e.e.NodeEnvironment    ] [gynAdXC] heap size [1015.6mb], compressed ordinary object pointers [true]
[2019-11-20T06:24:15,282][INFO ][o.e.n.Node               ] [gynAdXC] node name derived from node ID [gynAdXC0RKWep3-1m0VnNg]; set [node.name] to override
[2019-11-20T06:24:15,282][INFO ][o.e.n.Node               ] [gynAdXC] version[6.3.0], pid[23548], build[default/tar/424e937/2018-06-11T23:38:03.357887Z], OS[Linux/3.10.0-1062.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_191/25.191-b12]
[2019-11-20T06:24:15,282][INFO ][o.e.n.Node               ] [gynAdXC] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.Loc5bfKs, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/opt/ELK/elasticsearch-6.3.0, -Des.path.conf=/opt/ELK/elasticsearch-6.3.0/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-11-20T06:24:19,622][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [aggs-matrix-stats]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [analysis-common]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [ingest-common]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [lang-expression]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [lang-mustache]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [lang-painless]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [mapper-extras]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [parent-join]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [percolator]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [rank-eval]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [reindex]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [repository-url]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [transport-netty4]
[2019-11-20T06:24:19,623][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [tribe]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-core]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-deprecation]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-graph]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-logstash]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-ml]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-monitoring]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-rollup]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-security]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-sql]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-upgrade]
[2019-11-20T06:24:19,624][INFO ][o.e.p.PluginsService     ] [gynAdXC] loaded module [x-pack-watcher]
[2019-11-20T06:24:19,625][INFO ][o.e.p.PluginsService     ] [gynAdXC] no plugins loaded
[2019-11-20T06:24:25,052][INFO ][o.e.x.s.a.s.FileRolesStore] [gynAdXC] parsed [0] roles from file [/opt/ELK/elasticsearch-6.3.0/config/roles.yml]
[2019-11-20T06:24:26,182][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/23596] [Main.cc@109] controller (64 bit): Version 6.3.0 (Build 0f0a34c67965d7) Copyright (c) 2018 Elasticsearch BV
[2019-11-20T06:24:27,843][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-11-20T06:24:30,194][INFO ][o.e.d.DiscoveryModule    ] [gynAdXC] using discovery type [zen]
[2019-11-20T06:24:31,432][INFO ][o.e.n.Node               ] [gynAdXC] initialized
[2019-11-20T06:24:31,432][INFO ][o.e.n.Node               ] [gynAdXC] starting ...
[2019-11-20T06:24:32,231][INFO ][o.e.t.TransportService   ] [gynAdXC] publish_address {172.21.89.128:9300}, bound_addresses {[::]:9300}
[2019-11-20T06:24:32,246][INFO ][o.e.b.BootstrapChecks    ] [gynAdXC] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-11-20T06:24:38,304][INFO ][o.e.c.s.MasterService    ] [gynAdXC] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {gynAdXC}{gynAdXC0RKWep3-1m0VnNg}{EiA0q6OuRxeJ-8k3_yYATg}{172.21.89.128}{172.21.89.128:9300}{ml.machine_memory=1907789824, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-11-20T06:24:38,865][INFO ][o.e.c.s.ClusterApplierService] [gynAdXC] new_master {gynAdXC}{gynAdXC0RKWep3-1m0VnNg}{EiA0q6OuRxeJ-8k3_yYATg}{172.21.89.128}{172.21.89.128:9300}{ml.machine_memory=1907789824, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {gynAdXC}{gynAdXC0RKWep3-1m0VnNg}{EiA0q6OuRxeJ-8k3_yYATg}{172.21.89.128}{172.21.89.128:9300}{ml.machine_memory=1907789824, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2019-11-20T06:24:38,952][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [gynAdXC] publish_address {172.21.89.128:9200}, bound_addresses {[::]:9200}
[2019-11-20T06:24:38,953][INFO ][o.e.n.Node               ] [gynAdXC] started
[2019-11-20T06:24:38,968][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [gynAdXC] Failed to clear cache for realms [[]]
[2019-11-20T06:24:39,603][INFO ][o.e.g.GatewayService     ] [gynAdXC] recovered [0] indices into cluster_state
[2019-11-20T06:24:44,679][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-11-20T06:24:44,721][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.watches] for index patterns [.watches*]
[2019-11-20T06:24:44,771][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.watch-history-7] for index patterns [.watcher-history-7*]
[2019-11-20T06:24:44,872][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2019-11-20T06:24:44,945][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2019-11-20T06:24:44,996][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2019-11-20T06:24:45,027][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2019-11-20T06:24:45,114][INFO ][o.e.c.m.MetaDataIndexTemplateService] [gynAdXC] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-11-20T06:24:45,211][INFO ][o.e.l.LicenseService     ] [gynAdXC] license [9d91c984-5511-4b0c-a9ca-5e61bbf2e622] mode [basic] - valid

Startup succeeded. Open a browser and visit the server IP on port 9200; a response like the following indicates the node is running normally:
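The same check can be done from the command line; the response shape below is only an illustration (the node name matches the one in the logs above, the other values will vary per installation):

curl http://172.21.89.128:9200
# {
#   "name" : "gynAdXC",
#   "cluster_name" : "elasticsearch",
#   "version" : { "number" : "6.3.0", ... },
#   "tagline" : "You Know, for Search"
# }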

Next, configure Kibana in kibana-6.3.0-linux-x86_64/config/kibana.yml:

server.port: 5601       ## service port
server.host: "0.0.0.0"  ## server IP (this machine)
elasticsearch.url: "http://localhost:9200" ## Elasticsearch address; must match the Elasticsearch instance configured above

After saving, start Kibana:

[root@flink1 kibana-6.3.0-linux-x86_64]# ./bin/kibana
  log   [03:53:40.786] [info][status][plugin:kibana@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:40.874] [info][status][plugin:elasticsearch@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:40.876] [info][status][plugin:xpack_main@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:40.880] [info][status][plugin:searchprofiler@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:40.885] [info][status][plugin:ml@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:40.942] [info][status][plugin:tilemap@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:40.943] [info][status][plugin:watcher@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:40.955] [info][status][plugin:license_management@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:40.957] [info][status][plugin:index_management@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:41.682] [info][status][plugin:timelion@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:41.686] [info][status][plugin:graph@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:41.688] [info][status][plugin:monitoring@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:41.689] [info][status][plugin:security@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:41.690] [warning][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
  log   [03:53:41.697] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended.
  log   [03:53:41.723] [info][status][plugin:grokdebugger@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:41.728] [info][status][plugin:dashboard_mode@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:41.730] [info][status][plugin:logstash@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:41.749] [info][status][plugin:apm@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:41.754] [info][status][plugin:console@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:41.756] [info][status][plugin:console_extensions@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:41.759] [info][status][plugin:metrics@6.3.0] Status changed from uninitialized to green - Ready
  log   [03:53:46.360] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml
  log   [03:53:46.362] [info][status][plugin:reporting@6.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [03:53:46.379] [error][status][plugin:xpack_main@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.380] [error][status][plugin:searchprofiler@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.380] [error][status][plugin:ml@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.380] [error][status][plugin:tilemap@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.381] [error][status][plugin:watcher@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.381] [error][status][plugin:index_management@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.381] [error][status][plugin:graph@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.382] [error][status][plugin:security@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.382] [error][status][plugin:grokdebugger@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.382] [error][status][plugin:logstash@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.383] [error][status][plugin:reporting@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:53:46.383] [error][status][plugin:elasticsearch@6.3.0] Status changed from yellow to red - Request Timeout after 3000ms
  log   [03:54:01.015] [info][license][xpack] Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active
  log   [03:54:01.036] [info][status][plugin:xpack_main@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.040] [info][status][plugin:searchprofiler@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.041] [info][status][plugin:ml@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.041] [info][status][plugin:tilemap@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.041] [info][status][plugin:watcher@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.041] [info][status][plugin:index_management@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.042] [info][status][plugin:graph@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.042] [info][status][plugin:security@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.042] [info][status][plugin:grokdebugger@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.043] [info][status][plugin:logstash@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.043] [info][status][plugin:reporting@6.3.0] Status changed from red to green - Ready
  log   [03:54:01.056] [info][kibana-monitoring][monitoring-ui] Stopping all Kibana monitoring collectors
  log   [03:54:01.443] [info][license][xpack] Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active
  log   [03:54:08.214] [info][kibana-monitoring][monitoring-ui] Starting all Kibana monitoring collectors
  log   [03:54:08.225] [info][status][plugin:elasticsearch@6.3.0] Status changed from red to green - Ready
  log   [03:54:22.613] [info][listening] Server running at http://0.0.0.0:5601

Enter the server IP and port 5601 in a browser to verify that Kibana started correctly:
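A quick command-line check (assuming the same server IP as above and that Kibana's status API is reachable; a JSON body indicates the server is up):

curl -s http://172.21.89.128:5601/api/status | head -c 200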


Next, install and configure Logstash.

Logstash needs a data source; its inputs, filters, and outputs are usually defined in a pipeline configuration file. Create a configuration file named logback-es.conf that forwards data to Elasticsearch, with the following content:

[root@flink1 ELK]# more logstash-6.3.0/config/logback-es.conf
input {                              ## input configuration
    tcp {                            ## TCP input plugin (see the official docs for details)
        port => 9601                 ## listen on port 9601 for log events (binds to 0.0.0.0 by default)
        codec => json_lines          ## parse each incoming line as JSON (json_lines codec)
    }
}
filter {                             ## data processing (left empty here)
}
output {                             ## output configuration
        elasticsearch {              ## send events to Elasticsearch
            hosts => "localhost:9200" ## cluster address; separate multiple hosts with commas
        }
        stdout { codec => rubydebug } ## also print each event to the console
}
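The syntax of the file can be validated without starting the pipeline, for example with Logstash's standard --config.test_and_exit flag:

./logstash-6.3.0/bin/logstash -f ./logstash-6.3.0/config/logback-es.conf --config.test_and_exit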

After saving, Logstash can be tested by loading this configuration file:

[root@flink1 ELK]# ./logstash-6.3.0/bin/logstash -f ./logstash-6.3.0/config/logback-es.conf 
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/ELK/hs_err_pid24775.log

This fails with an error saying there is not enough memory for the JVM. To fix it, edit Logstash's JVM settings in logstash-6.3.0/config/jvm.options: this version defaults to a 1g heap, but my virtual machine has only 1 GB of RAM, so reduce it to 512m:

-Xms512m  
-Xmx512m

Start it again:

[root@flink1 ELK]# ./logstash-6.3.0/bin/logstash -f ./logstash-6.3.0/config/logback-es.conf 
Sending Logstash's logs to /opt/ELK/logstash-6.3.0/logs which is now configured via log4j2.properties
[2019-11-21T23:33:44,115][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-11-21T23:33:45,505][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.0"}
[2019-11-21T23:33:49,626][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-11-21T23:33:50,530][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-11-21T23:33:50,551][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2019-11-21T23:33:51,234][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-11-21T23:33:51,677][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-11-21T23:33:51,713][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-11-21T23:33:51,806][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-11-21T23:33:51,857][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-11-21T23:33:53,210][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2019-11-21T23:33:56,281][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-11-21T23:33:56,381][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:9601", :ssl_enable=>"false"}
[2019-11-21T23:33:57,195][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x55340a94 run>"}
[2019-11-21T23:33:57,484][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-11-21T23:33:58,194][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
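Before wiring up a real application, the pipeline can be smoke-tested by hand, for example by pushing one JSON line into the TCP input (this assumes nc/netcat is installed; the event should then appear on the rubydebug console output and be indexed into Elasticsearch):

echo '{"message":"hello elk","level":"INFO"}' | nc 172.21.89.128 9601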

Next, to exercise the ELK stack conveniently, create a Spring Boot project in IDEA:

The newly created logback.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>172.21.89.128:9601</destination>     <!-- Logstash host and TCP listening port; a custom appender (e.g. Kafka transport) could be used instead -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>      <!-- include Spring Boot's default Logback configuration -->
    <root>
        <appender-ref ref="LOGSTASH" />                                           <!-- ship logs to Logstash over TCP via the appender above -->
        <appender-ref ref="CONSOLE" />                                            <!-- console output from Spring Boot's default configuration -->
    </root>
</configuration>

Dependency added to pom.xml:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>

Application code:

package com.example.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {
    private final static Logger logger = LoggerFactory.getLogger(DemoApplication.class);
    public static void main(String[] args) {
        new Thread(()->{
            for (int i=0;i<100;i++){
                logger.info("---test---"+i);
            }
        }).start();
        SpringApplication.run(DemoApplication.class, args);
    }
}

Start the application. Logstash then prints the records on its console; with no filter configured it prints, by default, the full event as wrapped by Logback (only an excerpt is shown here):

{
    "thread_name" => "main",
        "message" => "Tomcat started on port(s): 8080 (http) with context path ''",
           "host" => "172.21.89.1",
    "logger_name" => "org.springframework.boot.web.embedded.tomcat.TomcatWebServer",
           "port" => 8955,
    "level_value" => 20000,
          "level" => "INFO",
       "@version" => 1,
     "@timestamp" => 2019-11-22T02:18:33.769Z
}
{
    "thread_name" => "main",
        "message" => "Started DemoApplication in 1.251 seconds (JVM running for 2.157)",
           "host" => "172.21.89.1",
    "logger_name" => "com.example.demo.DemoApplication",
           "port" => 8955,
    "level_value" => 20000,
          "level" => "INFO",
       "@version" => 1,
     "@timestamp" => 2019-11-22T02:18:33.772Z
}

Then open Kibana in the browser and configure the index: first create an index pattern (the default Logstash output writes to logstash-* indices, so a pattern such as logstash-* matches them), then choose the time field as the filter. After restarting the application, the log entries show up in Kibana:


Reference blog posts:
https://blog.csdn.net/qq_22211217/article/details/80764568#commentBox
https://blog.csdn.net/oschina_41140683/article/details/93007721

For installing a 7.x cluster, see https://www.cnblogs.com/michael-xiang/p/13715692.html
