Note: the versions of Elasticsearch, the IK analyzer, Kibana, and Logstash must all match, otherwise you will run into version-compatibility problems when integrating them.
Step 1: a JRE is required
Elasticsearch is implemented in Java, so a JRE must be installed before it can run. JRE installation is not covered here; plenty of guides are available online.
Step 2: download Elasticsearch
Official download page: https://www.elastic.co/downloads/elasticsearch
Since we are running on CentOS, choose the tar.gz archive. After downloading, upload it to the CentOS host over FTP; here we put it under /home/tools/, but any directory will do.
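Any transfer method works; as a minimal sketch (the server IP and root login below are placeholders), the archive can also be copied over SSH instead of FTP:

```shell
# Hypothetical example: copy the archive to /home/tools on the server over SSH
scp elasticsearch-7.1.1-linux-x86_64.tar.gz root@192.0.2.10:/home/tools/
```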
Step 3: install and configure Elasticsearch
Pick the version you need and download it.
Once the archive has been uploaded, proceed as follows.
Go to the directory containing elasticsearch-7.1.1-linux-x86_64.tar.gz and extract it.
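Extraction is a single tar call; a sketch, assuming the archive sits in /home/tools:

```shell
cd /home/tools
tar -zxvf elasticsearch-7.1.1-linux-x86_64.tar.gz
ls elasticsearch-7.1.1   # bin/, config/, lib/ and so on should now be here
```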
Now run the startup script to launch Elasticsearch:
[root@bogon elasticsearch-7.1.1]# sh bin/elasticsearch
It fails:
[2017-09-07T19:43:10,628][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.5.2.jar:5.5.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:106) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) ~[elasticsearch-5.5.2.jar:5.5.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.5.2.jar:5.5.2]
… 6 more
The message means Elasticsearch cannot be started as the root user, so we create a dedicated user for it.
Create the user and group
Log in to the host as root.
Run the following to check whether the lijianhui group exists:
#cat /etc/group|grep lijianhui
If the lijianhui group already exists, delete it:
#/usr/sbin/groupdel lijianhui
Create the lijianhui group with an explicit group id:
#/usr/sbin/groupadd -g 123 lijianhui
Run the following to check whether the lijianhui user exists:
#cat /etc/passwd|grep lijianhui
If the lijianhui user already exists, delete it:
#/usr/sbin/userdel -rf lijianhui
Create the lijianhui user with an explicit user id:
#/usr/sbin/useradd -u 456 -m -g lijianhui lijianhui
Set a password for the lijianhui user:
#passwd lijianhui
Still as root, fix the permissions on the home directory and give the new user ownership of the Elasticsearch install:
#chmod -R 755 /home/lijianhui
#chown -R lijianhui:lijianhui /home/lijianhui/elasticsearch-7.1.1
Now switch to the lijianhui user and start Elasticsearch:
[root@bogon ~]# su - lijianhui
[elastic@bogon root]$ sh /home/lijianhui/elasticsearch-7.1.1/bin/elasticsearch
A long stream of INFO output means the start succeeded. But this runs in the foreground and ties up the terminal, so press Ctrl+C to stop it, then add -d to run it in the background:
[elastic@bogon root]$sh /home/lijianhui/elasticsearch-7.1.1/bin/elasticsearch -d
Check whether it is actually running:
[elastic@bogon root]$ ps -ef | grep elasticsearch
elastic 2962 1 23 19:48 pts/1 00:00:02 /home/java/jdk1.8.0_144/bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Des.path.home=/home/es/elasticsearch-5.5.2 -cp /home/es/elasticsearch-5.5.2/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
elastic 2977 2849 0 19:48 pts/1 00:00:00 grep --color=auto elasticsearch
Note: at this point many people run into the following errors:
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Solutions:
For error [1]: switch to root and edit /etc/security/limits.conf, adding or modifying the following lines:
* hard nofile 65536
* soft nofile 65536
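limits.conf changes only apply to new login sessions. One way to verify them, assuming the lijianhui user created earlier:

```shell
# Log in again as the service user, then check both limits
su - lijianhui
ulimit -Hn   # hard limit: should now print 65536
ulimit -Sn   # soft limit: should now print 65536
```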
For error [2]: edit /etc/sysctl.conf and add a vm.max_map_count setting; the change is permanent (run sudo sysctl -p /etc/sysctl.conf to apply it). For example:
vm.max_map_count=262144
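As root, the setting can be persisted and applied in one step, for example:

```shell
echo "vm.max_map_count=262144" >> /etc/sysctl.conf   # survive reboots
sysctl -p /etc/sysctl.conf                           # apply immediately; echoes the new value
```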
Now verify that the service responds: curl http://localhost:9200
[elastic@bogon root]$ curl http://localhost:9200
{
  "name" : "K22mJd5",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "R2qfXKtrQl2PwKdJMmPuMA",
  "version" : {
    "number" : "5.5.2",
    "build_hash" : "b2f0c09",
    "build_date" : "2017-08-14T12:33:14.154Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
If you see this JSON, the basic setup is working.
Step 4: allow external connections
So far the service is only reachable from the local machine. For a cluster, or for other machines to connect, a bit more configuration is needed.
Edit the /home/es/elasticsearch/config/elasticsearch.yml file:
uncomment the network.host and http.port lines, and set network.host to your LAN IP.
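After editing, the relevant lines look roughly like this (the IP below is a placeholder for your own LAN address):

```yaml
network.host: 10.237.149.45
http.port: 9200
```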
Save and exit after editing.
Then disable the firewall:
firewall-cmd --state                  # check the firewall status; if it prints running, run the next two commands
systemctl stop firewalld.service      # stop the firewall
systemctl disable firewalld.service   # keep it from starting at boot
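If you would rather keep firewalld running, an alternative (a sketch using standard firewall-cmd options) is to open just the ports Elasticsearch and Kibana use:

```shell
firewall-cmd --zone=public --add-port=9200/tcp --permanent   # Elasticsearch HTTP
firewall-cmd --zone=public --add-port=5601/tcp --permanent   # Kibana
firewall-cmd --reload                                        # load the permanent rules
```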
Finally, restart the Elasticsearch service:
run ps -ef | grep elasticsearch to find the process id,
then kill -9 <pid>.
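Put together, a restart looks like this sketch (the PID is whatever ps reports; a plain kill lets Elasticsearch shut down more cleanly than kill -9):

```shell
ps -ef | grep [e]lasticsearch   # the [e] keeps grep itself out of the listing
kill 2962                       # placeholder PID: use the one from your ps output
/home/lijianhui/elasticsearch-7.1.1/bin/elasticsearch -d
```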
Restarting produces a new error:
ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
The fix: edit elasticsearch.yml again and uncomment the cluster.initial_master_nodes line, keeping a single node:
cluster.initial_master_nodes: ["node-1"]
node-1 is the default node name already present in the file; simply uncommenting that line is enough.
Restart, and this time it comes up normally.
Start Elasticsearch again and request http://10.237.149.45:9200/ from Chrome (or any browser).
Only when that address returns a JSON document is the configuration complete.
Step 5: install the Chinese (IK) analyzer
Go to https://github.com/medcl/elasticsearch-analysis-ik/releases and download the ik zip matching your Elasticsearch version.
Move the zip into plugins/ik/ (you need to create the ik directory yourself), as shown in the figure.
Note: the extracted files must all sit directly in the ik root directory, otherwise Elasticsearch fails to start.
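A sketch of the layout, assuming the zip was uploaded to /home/tools (the file name is illustrative; use the one matching your ES version):

```shell
cd /home/lijianhui/elasticsearch-7.1.1/plugins
mkdir ik && cd ik
unzip /home/tools/elasticsearch-analysis-ik-7.1.1.zip
ls   # plugin-descriptor.properties and the jars must sit directly here, not in a nested folder
```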
Start ES and test the Chinese analyzer:
curl -XGET http://10.237.147.26:9200/_analyze?pretty -H 'Content-Type:application/json' -d'
{
"analyzer": "ik_smart",
"text": "听说看这篇博客的哥们最帅、姑娘最美"
}'
If the response splits the sentence into separate Chinese tokens, the plugin is installed correctly.
Step 6: install Kibana
6.1 Upload the downloaded Kibana archive to the server; I usually put it under /usr/local.
6.2 Extract it: tar -zxvf <filename>.
6.3 Enter the config directory inside the Kibana folder (figure 3).
6.4 Edit the kibana.yml configuration file:
vim kibana.yml
elasticsearch.url is the address of your Elasticsearch server. If ES is protected with a password, also uncomment the elasticsearch.username and elasticsearch.password settings. (Note: from Kibana 7.0 onward this setting is named elasticsearch.hosts and takes a list, e.g. elasticsearch.hosts: ["http://150.109.32.106:9200"].)
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://150.109.32.106:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
6.5 Start Kibana
Go into Kibana's bin directory and start it with: ./kibana
Kibana listens on port 5601 by default; before accessing it, make sure that port is open on the server.
Visiting http://<server-ip>:5601 should then bring up the page shown in the screenshot below:
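To keep Kibana running after the SSH session ends, nohup is a common pattern (the install path is an assumption; use wherever you extracted it):

```shell
cd /usr/local/kibana-7.1.1-linux-x86_64/bin
nohup ./kibana > /tmp/kibana.log 2>&1 &   # output goes to /tmp/kibana.log
```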
Step 7: install Logstash
7.1 Download Logstash
Past releases page: https://www.elastic.co/cn/downloads/past-releases#elasticsearch
7.2 Extract it to a directory.
7.3 Create a config file first.conf (reads data from Redis and sends it to Elasticsearch):
input {
  redis {
    port => "7000"
    host => "10.237.149.89"
    data_type => "list"   # read from a Redis list
    type => "log"
    key => "testlog"      # the Redis key to read from
    password => "Aa123456"
    db => 0
    timeout => 5
  }
}
output {
  elasticsearch {
    # the host where Elasticsearch is installed
    hosts => ["10.237.149.89:9200"]
    index => "logstashtest"
  }
}
7.4 Start Logstash
/usr/share/logstash/bin/logstash -f first.conf &
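To sanity-check the pipeline, the config can be validated first, and then a sample record pushed into the Redis list to confirm it lands in Elasticsearch (host, port, password, key, and index are the ones from first.conf):

```shell
# 1. Syntax-check first.conf without starting the pipeline
/usr/share/logstash/bin/logstash -f first.conf --config.test_and_exit

# 2. Push one test record into the list Logstash reads from
redis-cli -h 10.237.149.89 -p 7000 -a Aa123456 lpush testlog '{"message":"hello logstash"}'

# 3. A moment later the record should be searchable in the target index
curl 'http://10.237.149.89:9200/logstashtest/_search?pretty'
```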