1. Install Elasticsearch
# tar -zxvf elasticsearch-2.2.0.tar.gz -C /opt/modules/
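As a quick sanity check of the `tar -C` extraction step, here is a self-contained sketch using a throwaway archive (all paths are temporary stand-ins for the real tarball):

```shell
# Build a tiny stand-in archive and extract it with -C, as in the install step.
src=$(mktemp -d); dest=$(mktemp -d)
mkdir -p "$src/elasticsearch-2.2.0/bin"
tar -zcf "$src/es-demo.tar.gz" -C "$src" elasticsearch-2.2.0
tar -zxf "$src/es-demo.tar.gz" -C "$dest"   # -C: extract into the target directory
test -d "$dest/elasticsearch-2.2.0/bin" && echo "extracted"
```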
2. Edit elasticsearch.yml
1) Set the cluster name -- it must be identical on every node of the cluster
cluster.name: ES-application
2) node.name: node-1 # must differ per node; choose your own
3) network.host: 192.168.30.129 # must differ per node: the local machine's IP
4) Set the data and log paths # if unset, the defaults are used; if set, create any directories that do not yet exist
path.data: /opt/local/elasticsearch-2.4.3/data
path.logs: /opt/local/elasticsearch-2.4.3/datalog
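Since custom data and log paths must exist before startup, it is safest to create them up front. A minimal sketch, using a temporary prefix as a stand-in for the real install path (/opt/local/elasticsearch-2.4.3):

```shell
# Create the directories named in path.data / path.logs before first start.
ES_HOME=$(mktemp -d)        # stand-in for /opt/local/elasticsearch-2.4.3
mkdir -p "$ES_HOME/data" "$ES_HOME/datalog"
test -d "$ES_HOME/data" && test -d "$ES_HOME/datalog" && echo "paths ready"
```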
Reference (note: a space is required after each colon):
cluster.name: elasticsearch            # cluster name; must be the same on every node of one cluster
node.name: "es5"                       # name of this node
node.master: false                     # whether this node is eligible to become master
node.data: true                        # whether this node may store data
node.rack: rack5                       # rack this node belongs to
index.number_of_shards: 5              # number of shards
index.number_of_replicas: 3            # number of replicas
network.bind_host: 0.0.0.0             # bind IP address, IPv4 or IPv6
network.publish_host: 10.0.0.11        # IP address other nodes use to reach this node
network.host: 10.0.0.11                # sets bind_host and publish_host at the same time
transport.tcp.port: 9300               # port for inter-node communication
transport.tcp.compress: true           # whether to compress data exchanged over TCP
http.port: 9200                        # HTTP port exposed to clients
http.max_content_length: 100mb         # maximum size of an HTTP request body
http.enabled: true                     # whether to expose the HTTP service at all
discovery.zen.minimum_master_nodes: 2  # how many master-eligible nodes this node must see; defaults to 1, for large clusters use a larger value (2-4)
5) Preventing split brain
discovery.zen.ping.multicast.enabled: false --disable multicast discovery
discovery.zen.ping_timeout: 120s --discovery ping timeout
client.transport.ping_timeout: 60s --transport client ping timeout
discovery.zen.ping.unicast.hosts: ["192.168.30.129", "192.168.30.130", "192.168.30.131"] --list of hosts to probe for discovery
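The usual guidance is to set discovery.zen.minimum_master_nodes to a majority of the master-eligible nodes, i.e. N/2 + 1 with integer division; for the three hosts listed above that gives 2. A quick arithmetic check:

```shell
# Majority quorum for N master-eligible nodes: N/2 + 1 (integer division).
nodes=3
quorum=$(( nodes / 2 + 1 ))
echo "minimum_master_nodes for $nodes master-eligible nodes: $quorum"
```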
Complete configuration of one node (elasticsearch.yml):
cluster.name: es-hww
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /opt/local/elasticsearch-2.4.3/data
#
# Path to log files:
#
path.logs: /opt/local/elasticsearch-2.4.3/datalog
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 10.0.13.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping_timeout: 120s
client.transport.ping_timeout: 60s
discovery.zen.ping.unicast.hosts: ["10.0.13.1", "10.0.13.2","10.0.13.3"]
3. Install plugins
1) bin/plugin install license
2) bin/plugin install marvel-agent
3) bin/plugin install mobz/elasticsearch-head
After installation, a plugins directory appears under elasticsearch-2.2.0.
4. Distribute to the other nodes
Adjust the node-specific settings, then copy the installation:
$ scp -r elasticsearch-2.2.0 sxt@hadoop02:/opt/modules/
Note: scp uses port 22 by default; if port 22 is closed, specify the port explicitly:
$ scp -r -P 10022 elasticsearch-2.2.0 sxt@hadoop02:/opt/modules/
useradd -g root test                 # add user test under the root group
chown -R test elasticsearch-2.2.0/   # make test the owner of the elasticsearch-2.2.0 directory
su test                              # switch to user test
$ bin/elasticsearch --do not start as root; use su test to switch, exit to leave
(start the nodes one by one)
nohup bin/elasticsearch &   --start in the background
bin/elasticsearch -d        --also starts in the background
Before starting, remember to create the logs and data directories under elasticsearch-2.2.0.
chown -R test bin/          # make test the owner of the bin directory
Note: the first start may fail with insufficient permission to write the logs; exit, create the log files, and chown them to the new user before retrying.
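The underlying issue is simply file ownership: the runtime user must be able to write the log files. A small writability check along these lines can be run before starting (chown itself requires root; the file name here is a temporary stand-in):

```shell
# Verify the runtime user can write the log file before starting Elasticsearch.
logfile=$(mktemp)           # stand-in for the real log file under logs/
chmod 644 "$logfile"        # after chown -R test ..., the owner can write
test -w "$logfile" && echo "log file writable, safe to start"
```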
List installed plugins:
[root@cyber elasticsearch]# bin/plugin list
Access the head plugin at http://hadoop01:9200/_plugin/head/
The jars needed by Eclipse are directly under elasticsearch-2.2.0/lib.
Reference:
Installing Kibana:
Start Kibana in the background:
nohup bin/kibana &
Check the Kibana process: ps -ef | grep node
After starting Kibana with nohup, do not close the shell terminal directly; run exit first.
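The nohup-then-exit pattern above can be sketched with a stand-in process (sleep plays the role of bin/kibana; the log path is illustrative):

```shell
# Start a long-lived process with nohup, detached from the terminal;
# leaving the shell with `exit` then keeps it running.
nohup sleep 2 >/tmp/nohup-demo.log 2>&1 &
pid=$!
kill -0 "$pid" 2>/dev/null && echo "running in background (pid $pid)"
wait "$pid"                 # in real use you would just `exit` here instead
```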