ELK Environment Setup
Download the software
Install Elasticsearch
- Create the elastic user
# Create the user
# useradd -m -g users -G wheel -s /bin/bash elastic
# Set its password
# passwd elastic
# Switch to the elastic user
# su elastic
- Upload the elasticsearch archive to the server and extract it to the target directory
$ sudo tar -zxf elasticsearch-x.x.x.tar.gz
$ sudo mkdir /opt/elk
$ sudo mv elasticsearch-x.x.x /opt/elk/elasticsearch
# elasticsearch refuses to run as root, so hand the tree to the elastic user
$ sudo chown -R elastic:users /opt/elk
- Run elasticsearch
$ cd /opt/elk/elasticsearch
# Start elasticsearch
$ ./bin/elasticsearch
- Verify that elasticsearch started successfully
$ curl "http://127.0.0.1:9200"
{
"name" : "Yp4a_I1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "DYASl-fdRRaoaPK7N1ykbA",
"version" : {
"number" : "6.6.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "a9861f4",
"build_date" : "2019-01-24T11:27:09.439740Z",
"build_snapshot" : false,
"lucene_version" : "7.6.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
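When scripting against this endpoint, the version number can be pulled out of the response with a little sed. A sketch, shown here against an inlined sample of the response above rather than a live server:

```shell
# Extract "number" from the cluster-info JSON. The sample response is
# inlined for illustration; in practice pipe `curl -s http://127.0.0.1:9200`
# into the same sed command.
response='{ "version" : { "number" : "6.6.0" } }'
echo "$response" | sed -n 's/.*"number" *: *"\([^"]*\)".*/\1/p'
# -> 6.6.0
```

For anything more involved than a single field, a real JSON parser such as jq is the safer choice.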
- Edit /opt/elk/elasticsearch/config/elasticsearch.yml
# Enable CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
# Set the cluster name
cluster.name: elastic
# Set the node name
node.name: master
# Bind to the machine's actual IP address
network.host: 192.168.20.110
- Restart
$ ./bin/elasticsearch
Startup now fails with the following errors:
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max number of threads [3797] for user [elastic] is too low, increase to at least [4096]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Edit /etc/security/limits.conf
$ sudo vim /etc/security/limits.conf
# Append at the bottom
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
Edit /etc/sysctl.conf
$ sudo vim /etc/sysctl.conf
# Append at the bottom
vm.max_map_count=262144
Apply the new setting
$ sudo sysctl -p
Reboot the server, then start elasticsearch again
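Before starting elasticsearch again, the new limits can be sanity-checked from a fresh login shell; each value should match what was written above once the changes are in effect:

```shell
# Soft limit on open file descriptors -- should read 65536 once
# limits.conf is in effect for this login session
ulimit -Sn
# Soft limit on user processes (nproc) -- should read 4096
ulimit -Su
# Kernel mmap-count limit -- should read 262144 after `sudo sysctl -p`
cat /proc/sys/vm/max_map_count
```

If any value is still the old one, log out and back in (for the limits.conf entries) or re-run sysctl -p (for vm.max_map_count) before retrying.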
- Install the ik Chinese analysis plugin
Download the matching version of elasticsearch-analysis-ik from GitHub, upload it to the server, and unzip it into the elasticsearch plugins directory
$ unzip -o elasticsearch-analysis-ik-x.x.x.zip -d /opt/elk/elasticsearch/plugins
Or install it directly
$ ./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.6.0/elasticsearch-analysis-ik-6.6.0.zip
If this line appears in the console log, the installation succeeded
[2020-06-09T16:59:29,618][INFO ][o.e.p.PluginsService ] [master] loaded plugin [analysis-ik]
Visit http://<ip address>:9200/_cat/plugins in a browser; the following output confirms the installation
master analysis-ik 6.6.0
Install Kibana
- Upload kibana to the server and extract it to the target directory
$ tar -zxf kibana-x.x.x-linux-x86_64.tar.gz
$ sudo mv kibana-x.x.x-linux-x86_64 /opt/elk/kibana
- Edit /opt/elk/kibana/config/kibana.yml
$ vim /opt/elk/kibana/config/kibana.yml
# Append at the bottom:
# The machine's own IP
server.host: 192.168.20.110
# The elasticsearch address
elasticsearch.hosts: http://192.168.20.110:9200
- Start kibana
$ ./bin/kibana
- Test the ik analyzer
Visit http://192.168.20.110:5601 and open Dev Tools in the left sidebar
POST _analyze
{
"analyzer": "ik_max_word",
"text": "我是中国人"
}
Response
{
"tokens" : [
{
"token" : "我",
"start_offset" : 0,
"end_offset" : 1,
"type" : "CN_CHAR",
"position" : 0
},
{
"token" : "是",
"start_offset" : 1,
"end_offset" : 2,
"type" : "CN_CHAR",
"position" : 1
},
{
"token" : "中国人",
"start_offset" : 2,
"end_offset" : 5,
"type" : "CN_WORD",
"position" : 2
},
{
"token" : "中国",
"start_offset" : 2,
"end_offset" : 4,
"type" : "CN_WORD",
"position" : 3
},
{
"token" : "国人",
"start_offset" : 3,
"end_offset" : 5,
"type" : "CN_WORD",
"position" : 4
}
]
}
Install Logstash
- Upload logstash to the server and extract it to the target directory
$ tar -zxf logstash-x.x.x.tar.gz
$ sudo mv logstash-x.x.x /opt/elk/logstash
- Configure shipping a local log file to elasticsearch
$ vim config/logstash.conf
# Add the following
# Input section
input {
  file {
    path => "/home/elastic/diplatform.log" # path of the local log file
    type => "logs" # event type tag
    start_position => "beginning" # read from the start of the file
  }
}
# Output section
output {
  # Write to elasticsearch
  elasticsearch {
    hosts => ["192.168.20.110:9200"] # elasticsearch address
    index => "diplatform_index_%{+YYYY.MM.dd}" # index name
  }
}
- Configure real-time log shipping to elasticsearch
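This step is left as a stub in the original notes. One common approach is sketched below, under the assumption that a Beats shipper such as Filebeat forwards events to port 5044 on this host; the filename, port, and index name are all illustrative:

```conf
# config/logstash-beats.conf -- hypothetical filename
input {
  beats {
    port => 5044  # Beats shippers (e.g. Filebeat) connect here
  }
}
output {
  elasticsearch {
    hosts => ["192.168.20.110:9200"]
    index => "realtime_index_%{+YYYY.MM.dd}"  # illustrative index name
  }
}
```

It would be started the same way as the file-based pipeline, with ./bin/logstash -f pointing at this config instead.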
- Run logstash
$ ./bin/logstash -f config/logstash.conf