Host environment
| Hostname | IP | OS | Role |
| --- | --- | --- | --- |
| logstash | 192.168.25.129 | CentOS-7.7 | logstash |
| es-master | 192.168.25.130 | CentOS-7.7 | elasticsearch master node / data node |
| es-node1 | 192.168.25.131 | CentOS-7.7 | elasticsearch data node |
| es-node2-kibana | 192.168.25.128 | CentOS-7.7 | elasticsearch data node / kibana |
| redis-beat | 192.168.25.132 | CentOS-7.7 | redis / filebeat |
cerebro-0.9.2
elasticsearch-7.8.0
filebeat-7.8.0
jdk-13.0.2
kibana-7.8.0
logstash-7.8.0
[ root@localhost ~]
[ root@localhost ~]
[ root@localhost ~]
[ root@localhost ~]
[ root@localhost ~]
[root@es-master ~]# cat >>/etc/hosts <<-EOF
192.168.25.128 es-node2-kibana
192.168.25.129 logstash
192.168.25.130 es-master
192.168.25.131 es-node1
192.168.25.132 redis-beat
EOF
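The host entries above are appended with a shell heredoc. The same pattern can be exercised safely against a temporary file before touching /etc/hosts:

```shell
# Append entries via a heredoc to a temp file instead of /etc/hosts
tmp=$(mktemp)
cat >>"$tmp" <<-EOF
192.168.25.130 es-master
192.168.25.131 es-node1
EOF
grep -c 'es-' "$tmp"   # prints 2
rm -f "$tmp"
```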
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
server time1.aliyun.com iburst
[ root@es-master ~]
[ root@es-master ~]
210 Number of sources = 1
Node environment configuration
Installation is required on logstash, es-master, es-node1 and es-node2-kibana
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master ~]
[ root@es-master local]
[ root@es-master ~]
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
[ root@es-master ~]
This setting is required on logstash, es-master, es-node1 and es-node2-kibana
[root@es-master ~]# cat >>/etc/security/limits.conf <<-EOF
# append the following
* soft nofile 65536
* hard nofile 65536
* soft nproc 32000
* hard nproc 32000
elk soft memlock unlimited
elk hard memlock unlimited
EOF
- `soft nproc`: maximum number of processes a single user may run (exceeding it triggers a warning);
- `hard nproc`: maximum number of processes a single user may run (exceeding it raises an error);
- `soft nofile`: maximum number of file descriptors that can be opened (exceeding it triggers a warning);
- `hard nofile`: maximum number of file descriptors that can be opened (exceeding it raises an error).

The settings in limits.conf apply only to users who log in through PAM; they have no effect on systemd services. Login-user limits are configured in /etc/security/limits.conf, while resource limits for systemd services now live in /etc/systemd/system.conf and /etc/systemd/user.conf, chiefly /etc/systemd/system.conf.
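Once limits.conf is updated, a new login session should reflect the values; they can be checked with the shell's `ulimit` builtin (a quick sanity check, not part of the original steps):

```shell
# Effective per-process limits for the current session
ulimit -n   # open file descriptors (nofile)
ulimit -u   # max user processes (nproc)
ulimit -l   # max locked memory (memlock)
```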
This setting is required on logstash, es-master, es-node1 and es-node2-kibana
[root@es-master ~]# cat >>/etc/systemd/system.conf<<-EOF
DefaultLimitNOFILE=65536
DefaultLimitNPROC=32000
DefaultLimitMEMLOCK=infinity
EOF
- `nproc`: operating-system-level limit on the number of processes each user may create
- `nofile`: limit on the number of files each process may open

These defaults apply to services started after a `systemctl daemon-reexec` or a reboot.
This setting is required on logstash, es-master, es-node1 and es-node2-kibana
[root@es-master ~]# cat >>/etc/sysctl.conf <<-EOF
vm.max_map_count=655360
fs.file-max=655360
vm.swappiness=0
EOF
[root@es-master ~]# sysctl -p
- `vm.max_map_count`: limits the number of VMAs (virtual memory areas) a process may own. A VMA is a contiguous region of virtual address space, created whenever a process maps a file into memory, links to a shared memory segment, or allocates heap space. Capping this value limits how many VMAs a process can hold; if a process hits the cap while only a little memory can be freed back to the kernel, the operating system raises an out-of-memory error. If your system uses only a small amount of memory in the NORMAL zone, lowering this value can free memory for the kernel.
- `fs.file-max`: the system-wide limit on the number of file descriptors all processes together may open.
- `vm.swappiness`: controls how eagerly the kernel uses swap. 0 means use physical memory as much as possible before touching swap; 100 means swap aggressively, moving data from RAM to swap promptly. The Linux default is 60.
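Whether the kernel has picked up the new values can be confirmed by reading them back from /proc (no root required):

```shell
# Current kernel values for the three parameters set above
cat /proc/sys/vm/max_map_count
cat /proc/sys/fs/file-max
cat /proc/sys/vm/swappiness
```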
This setting is required on logstash, es-master, es-node1 and es-node2-kibana
[ root@es-master ~]
[ root@es-master ~]
elk ALL=(ALL) NOPASSWD: ALL
This setting is required on logstash, es-master, es-node1 and es-node2-kibana
[ root@es-master ~]
Install Elasticsearch
Required on es-master, es-node1 and es-node2-kibana
[ root@es-master]
[ elk@es-master] $ sudo mkdir /usr/local/elkapp
[ elk@es-master] $ sudo mkdir /usr/local/elkdata
[ elk@es-master] $ sudo mkdir -p /usr/local/elkdata/es
[ elk@es-master] $ sudo mkdir -p /usr/local/elkdata/es/data
[ elk@es-master] $ sudo mkdir -p /usr/local/elkdata/es/logs
[ elk@es-master] $ sudo chown -R elk:elk /usr/local/elkapp
[ elk@es-master] $ sudo chown -R elk:elk /usr/local/elkdata
The download is required on es-master, es-node1 and es-node2-kibana
[ elk@es-master] $ cd /usr/local/src
[ elk@es-master src] $ sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.8.0-linux-x86_64.tar.gz
[ elk@es-master src] $ sudo tar -xf elasticsearch-7.8.0-linux-x86_64.tar.gz -C /usr/local/elkapp
[ elk@es-master local] $ cd elkapp
[ elk@es-master elkapp] $ sudo ln -s /usr/local/elkapp/elasticsearch-7.8.0/ /usr/local/elkapp/elasticsearch
Configure Elasticsearch
The Elasticsearch 7 directory layout is as follows:
- `bin`: scripts, including the ES startup script and the plugin installer
- `config`: elasticsearch.yml (ES configuration), jvm.options (JVM configuration), logging configuration, etc.
- `JDK`: the bundled JDK, JAVA_VERSION="12.0.1"
- `lib`: class libraries
- `logs`: log files
- `modules`: all ES modules, including X-Pack
- `plugins`: installed plugins (none by default)
- `data`: created when ES starts, stores document data; the location is configurable
cluster.name: elasticsearch
node.name: "node-1"
node.attr.rack: r1
node.max_local_storage_nodes: 3
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.conf: /path/to/conf
path.data: /path/to/data
path.work: /path/to/work
path.logs: /path/to/logs
path.plugins: /path/to/plugins
bootstrap.memory_lock: true
network.host: 192.168.0.1
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
http.enabled: false
gateway.recover_after_nodes: 1
gateway.recover_after_time: 5m
gateway.expected_nodes: 2
cluster.routing.allocation.node_initial_primaries_recoveries: 4
cluster.routing.allocation.node_concurrent_recoveries: 2
indices.recovery.max_size_per_sec: 0
indices.recovery.concurrent_streams: 5
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.timeout: 3s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: [ "host1" , "host2:port" , "host3[portX-portY]" ]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
[ elk@es-master] $ cd /usr/local/elkapp/elasticsearch/config/
[ elk@es-master config] $ mv elasticsearch.yml{,.bak}
[ elk@es-master config] $ cat elasticsearch.yml | grep -v '^#'
cluster.name: my-es
node.name: es-master
node.master: true
node.data: true
bootstrap.memory_lock: true
path.data: /usr/local/elkdata/es/data
path.logs: /usr/local/elkdata/es/logs
network.host: 192.168.25.130
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: [ "192.168.25.130" , "192.168.25.131" ,"192.168.25.128" ]
cluster.initial_master_nodes: [ "es-master" ]
http.cors.enabled: true
http.cors.allow-origin: "*"
[ elk@es-node1 ~] $ cd /usr/local/elkapp/elasticsearch/config/
[ elk@es-node1 config] $ cat elasticsearch.yml | grep -v '^#'
cluster.name: my-es
node.name: es-node1
node.data: true
bootstrap.memory_lock: true
path.data: /usr/local/elkdata/es/data
path.logs: /usr/local/elkdata/es/logs
network.host: 192.168.25.131
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: [ "192.168.25.130" , "192.168.25.131" ,"192.168.25.128" ]
cluster.initial_master_nodes: [ "es-master" ]
[ elk@es-node2-kibana ~] $ cd /usr/local/elkapp/elasticsearch/config/
[ elk@es-node2-kibana config] $ cat elasticsearch.yml | grep -v '^#'
cluster.name: my-es
node.name: es-node2
node.data: true
bootstrap.memory_lock: true
path.data: /usr/local/elkdata/es/data
path.logs: /usr/local/elkdata/es/logs
network.host: 192.168.25.128
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: [ "192.168.25.130" , "192.168.25.131" ,"192.168.25.128" ]
cluster.initial_master_nodes: [ "es-master" ]
Required on es-master, es-node1 and es-node2-kibana
[ elk@es-master] $ sudo chown -R elk:elk /usr/local/elkapp
[ elk@es-master] $ sudo chown -R elk:elk /usr/local/elkdata
Configure systemd to manage Elasticsearch startup
[ elk@es-master ~] $ sudo vim /etc/systemd/system/elasticsearch.service
[Unit]
Description=elasticsearch
[Service]
User=elk
Group=elk
LimitMEMLOCK=infinity
LimitNOFILE=100000
LimitNPROC=100000
ExecStart=/usr/local/elkapp/elasticsearch/bin/elasticsearch
[Install]
WantedBy=multi-user.target
[ elk@es-master ~] $ sudo systemctl daemon-reload
[ elk@es-master ~] $ sudo systemctl start elasticsearch
[ elk@es-master ~] $ sudo systemctl enable elasticsearch
Test the Elasticsearch cluster
[ elk@es-master ~] $ sudo ss -anput | grep 9200
tcp LISTEN 0 128 ::ffff:192.168.25.130:9200 :::* users:(( "java" ,pid= 14571,fd= 318))
[ elk@es-master ~] $ ps -ef | grep elasticsearch
elk 14571 1 4 13:29 ? 00:00:30 /usr/local/elkapp/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl= 60 -Des.networkaddress.cache.negative.ttl= 10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless= true -Dfile.encoding= UTF-8 -Djna.nosys= true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe= true -Dio.netty.noKeySetOptimization= true -Dio.netty.recycler.maxCapacityPerThread= 0 -Dio.netty.allocator.numDirectArenas= 0 -Dlog4j.shutdownHookEnabled= false -Dlog4j2.disable.jmx= true -Djava.locale.providers= SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent= 25 -XX:InitiatingHeapOccupancyPercent= 30 -Djava.io.tmpdir= /tmp/elasticsearch-3697254133112113797 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath= data -XX:ErrorFile= logs/hs_err_pid%p.log -Xlog:gc*,gc+age= trace,safepoint:file= logs/gc.log:utctime,pid,tags:filecount= 32,filesize= 64m -XX:MaxDirectMemorySize= 536870912 -Des.path.home= /usr/local/elkapp/elasticsearch -Des.path.conf= /usr/local/elkapp/elasticsearch/config -Des.distribution.flavor= default -Des.distribution.type= tar -Des.bundled_jdk= true -cp /usr/local/elkapp/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch
elk 14759 14571 0 13:29 ? 00:00:00 /usr/local/elkapp/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
elk 14832 9365 0 13:40 pts/0 00:00:00 grep --color= auto elasticsearch
Browser access test (Firefox renders the JSON best)
# View the status of a single cluster node
http://192.168.25.130:9200/
{
  "name" : "es-master",
  "cluster_name" : "my-es",
  "cluster_uuid" : "kYVJyxh_SyGM59Xnw95Zyw",
  "version" : {
    "number" : "7.8.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
    "build_date" : "2020-06-14T19:35:50.234439Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
# View cluster health
http://192.168.25.130:9200/_cluster/health
{
  "cluster_name" : "my-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
- `green`: all primary and replica shards are allocated
- `yellow`: all primary shards are allocated, but at least one replica is missing
- `red`: at least one primary shard is unavailable
http://192.168.25.128:9200/_cat
=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/tasks
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/thread_pool/{thread_pools}
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}
/_cat/templates
http://192.168.25.130:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.25.130 42 96 0 0.22 0.14 0.09 dilmrt - es-master
192.168.25.128 12 93 0 0.03 0.04 0.05 dilmrt - es-node2
192.168.25.131 55 96 0 0.00 0.03 0.05 dilmrt * es-node1
- `heap.percent`: heap memory usage percentage
- `ram.percent`: RAM usage percentage
- `cpu`: CPU usage percentage
- `master`: `*` marks the node currently elected master
- `name`: node name
http://192.168.25.130:9200/_cat/master?v
id host ip node
qOQMbpHkSAqNIA7wCdkA9w 192.168.25.131 192.168.25.131 es-node1
http://192.168.25.130:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1594039407 12:43:27 my-es green 3 3 2 1 0 0 0 0 - 100.0%
- `cluster`: cluster name
- `status`: cluster health; green means healthy, yellow means all primary shards are allocated but at least one replica is missing (data is still complete), red means some primary shards are unavailable and data may have been lost
- `node.total`: number of nodes online
- `node.data`: number of data nodes online
- `shards, active_shards`: number of live shards
- `pri, active_primary_shards`: number of live primary shards; with one replica per primary, shards is normally twice pri
- `relo, relocating_shards`: number of shards being relocated, normally 0
- `init, initializing_shards`: number of shards initializing, normally 0
- `unassign, unassigned_shards`: number of unassigned shards, normally 0
- `pending_tasks`: number of pending tasks (such as shard relocation), normally 0
- `max_task_wait_time`: longest time a task has been waiting
- `active_shards_percent`: percentage of healthy shards, normally 100%
http://192.168.25.130:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open index_name CyrKHbCAQcOJMHQU_K5kHg 1 1 1 0 8.7kb 4.3kb
- `health`: index health status
- `index`: index name
- `pri`: number of primary shards
- `rep`: number of replica shards
- `store.size`: total storage used by primary and replica shards
- `pri.store.size`: storage used by primary shards only, excluding replicas
http://192.168.25.130:9200/_cat/shards?v
index shard prirep state docs store ip node
index_name 0 p STARTED 1 4.3kb 192.168.25.128 es-node2
index_name 0 r STARTED 1 4.3kb 192.168.25.131 es-node1
- `index`: index name
- `shard`: shard number
- `prirep`: p marks a primary shard, r marks a replica
- `store`: storage used by the shard
- `node`: name of the node holding the shard
- `docs`: number of documents in the shard
http://192.168.25.130:9200/_cat/allocation?v
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
0 0b 2.9gb 92gb 94.9gb 3 192.168.25.131 192.168.25.131 es-node1
0 0b 3.2gb 91.6gb 94.9gb 3 192.168.25.128 192.168.25.128 es-node2
0 0b 3.4gb 91.4gb 94.9gb 3 192.168.25.130 192.168.25.130 es-master
- `shards`: number of shards hosted on the node
- `disk.indices`: disk space used by indices
- `disk.used`: disk space already used on the node's machine
- `disk.avail`: disk space available to the node
- `disk.total`: total disk space on the node
- `disk.percent`: disk usage percentage
- `ip`: IP address of the node's machine
- `node`: node name
Create a document to generate an index
[ elk@es-master ~] $ curl -H "Content-Type:application/json" -XPUT 'http://192.168.25.130:9200/index_name/type_name/1?pretty' -d '{ "name": "xuwl", "age": 18, "job": "Linux" }'
{
"_index" : "index_name" ,
"_type" : "index_type" ,
"_id" : "1" ,
"_version" : 1,
"result" : "created" ,
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
} ,
"_seq_no" : 0,
"_primary_term" : 1
}
- `-H`: set the request's Content-Type header
- `-X`: HTTP method, here PUT (upload)
- `http://192.168.25.130:9200`: the HTTP endpoint of one ES server
- `/index_name`: the document's index name, must be lowercase
- `/type_name`: the document's type name, must be lowercase
- `/1`: the document ID
- `?pretty`: pretty-print the response
- `-d`: supply the request body
- `{ "name": "xuwl", "age": 18, "job": "Linux" }`: the document content, written as JSON
[ elk@es-master ~] $ curl -XGET 'http://192.168.25.130:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open index_name uK-E0UPMTamByd24eamfUQ 1 1 1 0 8.3kb 4.1kb
[root@es-master ~]# curl -XGET 'http://192.168.25.130:9200/_cat/shards?v'
index shard prirep state docs store ip node
index_name 0 p STARTED 1 4.1kb 192.168.25.130 es-master
index_name 0 r STARTED 1 4.2kb 192.168.25.128 es-node2
Deploy Redis
Install and configure on redis-beat
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat local]
[ root@redis-beat local]
[ root@redis-beat local]
[ root@redis-beat redis-5.0.5]
[ root@redis-beat redis-5.0.5]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
bind 0.0.0.0
protected-mode no
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
127.0.0.1:6379> ping
PONG
Deploy elasticsearch-head
[ root@els-master ~]
[ root@els-master ~]
[ root@els-master ~]
v10.16.0
[ root@els-master ~]
6.9.0
[ root@els-master ~]
[ root@els-master ~]
[ root@els-master ~]
[ root@els-master ~]
[ root@els-master ~]
[ 1] 15239
[ elk@els-master elasticsearch-head] $
> elasticsearch-head@0.0.0 start /home/elk/elasticsearch-head
> grunt server
Running "connect:server" ( connect) task
Waiting forever.. .
Started connect web server on http://localhost:9100
Deploy Cerebro (ES monitoring)
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
secret= "ki:s:[[@=Ag?QI`W2jMwkY:eqvrJ]JqoJyi2axj3ZvOv^/KavOT4ViJSv?6YY4[N"
basePath= "/"
pidfile.path= "/opt/cerebro/current/cerebro.pid"
data.path= "/home/cerebro/data/cerebro.db"
es= {
gzip= true
}
auth= {
type: basic
settings: {
username= "admin"
password= "1234.com"
}
}
hosts= [
{
host= "http://192.168.152.137:9200"
name= "es_log"
}
]
EOF
[ root@redis-beat ~]
[ info] play.api.Play - Application started ( Prod)
[ info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[ root@redis-beat ~]
JAVA_HOME=/usr/local/java
[ root@redis-beat ~]
[Unit]
Description=Cerebro
After=network.target
[Service]
Type=forking
PIDFile=/opt/cerebro/current/cerebro.pid
User=cerebro
Group=cerebro
LimitNOFILE=65535
ExecStart=/opt/cerebro/current/bin/cerebro -Dconfig.file=/opt/cerebro/current/conf/application.conf
Restart=on-failure
WorkingDirectory=/opt/cerebro/current
[Install]
WantedBy=multi-user.target
EOF
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
Deploy Logstash
[ elk@logstash ~] $ sudo mkdir /usr/local/elkapp && sudo mkdir -p /usr/local/elkdata/logstash/{data,logs} && sudo chown -R elk.elk /usr/local/elk*
[ elk@logstash local] $ tar xf logstash-7.8.0.tar.gz -C /usr/local/elkapp/
[ elk@logstash local] $ cd elkapp/
[ elk@logstash elkapp] $ ln -s logstash-7.8.0/ logstash
[ elk@logstash elkapp] $ sudo chown elk.elk /usr/local/elk* -R
[ elk@logstash elkapp] $ sudo vim logstash-7.8.0/config/logstash.yml
path.data: /usr/local/elkdata/logstash/data
path.logs: /usr/local/elkdata/logstash/logs
[ elk@logstash logstash] $ sudo mkdir conf.d
[ elk@logstash logstash] $ vim config/pipelines.yml
- pipeline.id: test
  pipeline.workers: 1
  path.config: "/usr/local/elkapp/logstash/conf.d/*.conf"
[ elk@logstash ~] $ /usr/local/elkapp/logstash/bin/logstash -f config/input-output.conf -t
- `-f`: specify the pipeline configuration file
- `-t`: test that the configuration syntax is valid
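The `input-output.conf` pipeline checked above is not shown in the source. A minimal sketch for the Redis-to-Elasticsearch flow this stack implies (the Redis list key `filebeat` is an assumption and must match whatever key Filebeat publishes to):

```conf
# conf.d/input-output.conf - hypothetical pipeline: Redis list -> Elasticsearch
input {
  redis {
    host      => "192.168.25.132"   # the redis-beat host
    port      => 6379
    data_type => "list"
    key       => "filebeat"         # assumed key, must match the Filebeat output
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.25.130:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
```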
Configure systemd to manage Logstash startup
[ elk@logstash ~] $ sudo vim /usr/local/elkapp/logstash/bin/logstash
JAVA_HOME=/usr/local/java
[ elk@logstash ~] $ sudo vim /etc/systemd/system/logstash.service
[Unit]
Description=logstash
[Service]
User=elk
Group=elk
LimitMEMLOCK=infinity
LimitNOFILE=100000
LimitNPROC=100000
ExecStart=/usr/local/elkapp/logstash/bin/logstash
[Install]
WantedBy=multi-user.target
[ elk@logstash ~] $ sudo systemctl daemon-reload
[ elk@logstash ~] $ sudo systemctl start logstash
[ elk@logstash ~] $ sudo systemctl enable logstash
Deploy Filebeat
Tarball install
[ root@redis-beat ~]
[ root@redis-beat src]
[ root@redis-beat src]
[ root@redis-beat src]
[ root@redis-beat local]
[ root@redis-beat filebeat-7.8.0-linux-x86_64]
[ root@redis-beat ~]
[Unit]
Description=filebeat server daemon
Documentation=/usr/local/filebeat/filebeat -help
Wants=network-online.target
After=network-online.target
[Service]
User=root
Group=root
Environment="BEAT_CONFIG_OPTS=-c /usr/local/filebeat/filebeat.yml"
ExecStart=/usr/local/filebeat/filebeat $BEAT_CONFIG_OPTS
Restart=always
[Install]
WantedBy=multi-user.target
EOF
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
[ root@redis-beat ~]
YUM install
[ root@redis-beat ~]
[ root@redis-beat ~]
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
[ root@redis-beat ~]
RPM install
[ root@redis-beat ~]
[ root@redis-beat ~]
Ubuntu-apt-get
[ elk@beat ~] $ sudo wget -qO - https://
[ elk@beat ~] $ sudo apt-get install apt-transport-https
[ elk@beat ~] $ echo "deb https://#artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
[ elk@beat ~] $ sudo apt-get update && sudo apt-get install filebeat
[ elk@beat ~] $ sudo update-rc.d filebeat defaults 95 10
Ubuntu-deb
[ elk@beat ~] $ curl -L -O https://
Docker
[ root@redis-beat ~]
[ root@redis-beat ~]
docker.elastic.co/beats/filebeat:7.8.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]
Configure Filebeat
[ root@redis-beat ~]
[ root@redis-beat ~]
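The filebeat.yml contents were lost from the source. A minimal sketch that tails a log file on this host and publishes each event to the local Redis list consumed by Logstash (the log path and the `filebeat` key are assumptions):

```yaml
# /usr/local/filebeat/filebeat.yml - hypothetical minimal config
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/messages          # example log to ship
output.redis:
  hosts: ["192.168.25.132:6379"]   # local Redis instance
  key: "filebeat"                  # assumed key, must match the Logstash redis input
  data_type: "list"
```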
Deploy Kibana
[ root@es-node2-kibana ~]
[ elk@es-node2-kibana ~] $ sudo mkdir /usr/local/elkapp
[ elk@es-node2-kibana ~] $ sudo mkdir /usr/local/elkdata
[ elk@es-node2-kibana ~] $ sudo mkdir -p /usr/local/elkdata/kibana
[ elk@es-node2-kibana ~] $ sudo mkdir -p /usr/local/elkdata/kibana/data
[ elk@es-node2-kibana ~] $ sudo mkdir -p /usr/local/elkdata/kibana/logs
[ elk@es-node2-kibana ~] $ sudo chown -R elk:elk /usr/local/elkapp
[ elk@es-node2-kibana ~] $ sudo chown -R elk:elk /usr/local/elkdata
[ elk@es-node2-kibana ~] $ sudo mkdir /usr/local/elkapp && sudo mkdir -p /usr/local/elkdata/kibana/{ data,logs} && sudo chown -R elk:elk /usr/local/elkapp && sudo chown -R elk:elk /usr/local/elkdata
[ elk@es-node2-kibana ~] $ cd /usr/local/src
[ elk@es-node2-kibana src] $ sudo wget https://artifacts.elastic.co/downloads/kibana/kibana-7.8.0-linux-x86_64.tar.gz
[ elk@es-node2-kibana src] $ sudo tar -xf kibana-7.8.0-linux-x86_64.tar.gz -C /usr/local/elkapp
[ elk@es-node2-kibana src] $ cd /usr/local/elkapp
[ elk@es-node2-kibana local] $ sudo ln -s kibana-7.8.0-linux-x86_64 kibana
[ elk@es-node2-kibana local] $ sudo chown -R elk:elk /usr/local/elkapp
[ elk@es-node2-kibana local] $ sudo chown -R elk:elk /usr/local/elkdata
[ elk@es-node2-kibana src] $ sudo tar -zxvf kibana-7.8.0-linux-x86_64.tar.gz -C /usr/local/elkapp && sudo ln -s kibana-7.8.0-linux-x86_64 kibana && sudo chown -R elk:elk /usr/local/elkapp && sudo chown -R elk:elk /usr/local/elkdata
[ elk@es-node2-kibana ~] $ vim /usr/local/elkapp/kibana-7.8.0-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "192.168.152.128"
elasticsearch.hosts: [ "http:#192.168.152.130:9200" ]
[ elk@es-node2-kibana ~] $ vi /etc/systemd/system/kibana.service
[Unit]
Description=kibana
[Service]
User=elk
Group=elk
LimitMEMLOCK=infinity
LimitNOFILE=100000
LimitNPROC=100000
ExecStart=/usr/local/elkapp/kibana/bin/kibana
[Install]
WantedBy=multi-user.target
[ elk@es-node2-kibana ~] $ sudo systemctl daemon-reload
[ elk@es-node2-kibana ~] $ sudo systemctl start kibana
[ elk@es-node2-kibana ~] $ sudo systemctl enable kibana
[ elk@es-node2-kibana ~] $ sudo systemctl daemon-reload && sudo systemctl start kibana && sudo systemctl enable kibana
[ elk@es-node2-kibana ~] $ vim /usr/local/elkapp/kibana/config/kibana.yml
i18n.locale: "zh-CN"