Distributed Elasticsearch 6.4.2
Test whether starting Elasticsearch reports any errors:
elasticsearch-5.6.12/bin/elasticsearch
Error: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
This is caused by the Linux kernel being too old. Reinstall the OS with CentOS-7-x86_64-DVD-1804.iso and run ./elasticsearch again.
This time it starts successfully.
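To confirm that the kernel on the new system was built with seccomp support (a quick optional check, assuming the kernel config file is available under /boot as on stock CentOS):
grep -E 'CONFIG_SECCOMP(_FILTER)?=' /boot/config-$(uname -r)    # both options should be set to y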
Edit the configuration file elasticsearch.yml:
#Name of the cluster
cluster.name: wf-es6.4.2
#Node name; the other two nodes will be node-2 and node-3
node.name: node-1
#Whether this node is eligible to be elected master (default true). By default the first machine in the cluster becomes master; if it goes down, a new master is elected.
node.master: true
#Allow this node to store data (enabled by default)
node.data: true
#Path where index data is stored
path.data: /home/hadoop/elasticsearch/data
#Path where log files are stored
path.logs: /home/hadoop/elasticsearch/logs
#Set to true to lock the process memory. Swapping memory to disk is fatal for server performance; when the JVM starts swapping, ES slows down dramatically, so make sure it never swaps.
bootstrap.memory_lock: true
#IP address to bind to
network.host: 0.0.0.0
#HTTP port for client traffic, default 9200
http.port: 9200
#TCP port for inter-node communication, default 9300
transport.tcp.port: 9300
#By default Elasticsearch binds to the available loopback addresses and scans ports 9300 to 9305 to try to connect to other nodes running on the same server,
#which gives an automatic clustering experience without any configuration. This setting takes an array or a comma-separated list; each value should be host:port or host
#(if no port is given, it falls back to transport.profiles.default.port and then to transport.tcp.port).
#Note that IPv6 hosts must be placed in brackets. Default: 127.0.0.1, [::1]
discovery.zen.ping.unicast.hosts: ["192.168.218.143:9300", "192.168.218.144:9300", "192.168.218.145:9300"]
#Without this setting, a cluster that suffers a network failure can split into two independent clusters (a "split brain"), which leads to data loss. With three master-eligible nodes this should be (3 / 2) + 1 = 2.
discovery.zen.minimum_master_nodes: 2
Create the directories for index data and log files:
mkdir /home/hadoop/elasticsearch/data
mkdir /home/hadoop/elasticsearch/logs
Create the user and grant it ownership of the installation (ES cannot be run as root):
chown -R hadoop:hadoop /home/hadoop/elasticsearch
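If the hadoop user does not exist yet, it can be created first (a minimal sketch; adjust the user and password handling to your environment):
useradd hadoop    # creates the hadoop user, its home directory and the hadoop group
passwd hadoop     # set a password for the new user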
Start Elasticsearch on a single node as a test:
elasticsearch -d
Use jps -l or ps -ef | grep elasticsearch to check whether ES started successfully.
If it did not, check the logs under /home/hadoop/elasticsearch/logs/.
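The main log file is named after the cluster, so something like the following shows the most recent entries (the exact file name is an assumption derived from cluster.name):
tail -n 50 /home/hadoop/elasticsearch/logs/wf-es6.4.2.log
In this case the log reported the following bootstrap check failures: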
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: memory locking requested for elasticsearch process but memory is not locked
[3]: max number of threads [3802] for user [hadoop] is too low, increase to at least [4096]
[4]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Solutions:
[1][3] vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
Parameter meanings:
- soft nofile: maximum number of open file descriptors (soft limit)
- hard nofile: maximum number of open file descriptors (hard limit)
- soft nproc: maximum number of processes per user (soft limit)
- hard nproc: maximum number of processes per user (hard limit)
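The new limits only apply to new login sessions; after logging back in as hadoop they can be verified with ulimit (an optional sanity check):
ulimit -n    # max open file descriptors, should now report 65536
ulimit -u    # max user processes, should now report 4096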
Sync /etc/security/limits.conf to the other nodes:
scp /etc/security/limits.conf root@bg2:/etc/security/limits.conf
scp /etc/security/limits.conf root@bg3:/etc/security/limits.conf
[2] vi elasticsearch.yml
bootstrap.memory_lock: false
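Setting bootstrap.memory_lock: false is the simplest workaround. If you would rather keep the memory lock enabled, an alternative (not part of the original setup, shown only as a sketch) is to let the hadoop user lock unlimited memory via /etc/security/limits.conf:
hadoop soft memlock unlimited
hadoop hard memlock unlimited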
[4] vi /etc/sysctl.conf and add:
vm.max_map_count=655360
Then run sysctl -p to make the change take effect.
Sync /etc/sysctl.conf to the other nodes in the ES cluster and apply it there (sysctl -p has to run on each node, hence the ssh):
scp /etc/sysctl.conf root@bg2:/etc/sysctl.conf && ssh root@bg2 sysctl -p
scp /etc/sysctl.conf root@bg3:/etc/sysctl.conf && ssh root@bg3 sysctl -p
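On each node the kernel setting can then be double-checked with:
sysctl vm.max_map_count    # should print vm.max_map_count = 655360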
Sync the ES installation to the other nodes, then adjust the environment variables and the elasticsearch.yml configuration file on each of them (see the sketch after the commands below):
scp -r /home/hadoop/elasticsearch hadoop@192.168.218.144:~
scp -r /home/hadoop/elasticsearch hadoop@192.168.218.145:~
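In this setup only node.name has to differ between nodes (network.host is 0.0.0.0, so it can stay the same); for example:
# on 192.168.218.144
node.name: node-2
# on 192.168.218.145
node.name: node-3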
Start a single node again; this time there are no errors:
bin/elasticsearch
jps -l
ps -ef | grep elasticsearch
Running bin/elasticsearch directly starts Elasticsearch in the foreground, where it can be stopped with Ctrl+C; bin/elasticsearch -d starts it in the background.
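Once Elasticsearch is running on all three machines, the cluster can be inspected from any node (the IP below is the first node; adjust as needed):
curl 'http://192.168.218.143:9200/_cat/nodes?v'     # lists the three nodes and marks the elected master
curl 'http://192.168.218.143:9200/_cat/health?v'    # cluster status should be green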
Test access to ES from a browser (e.g. http://192.168.218.143:9200).
If that fails, check the firewall status on the servers.
On CentOS 7, as root:
systemctl stop firewalld       # stop the firewall now
systemctl disable firewalld    # keep it disabled after reboot
To stop a background Elasticsearch instance:
kill -9 `ps -ef | grep elasticsearch | grep -v grep | awk '{print $2}' | head -n 1`
Install Kibana
Kibana is an open-source data analytics and visualization platform. It is part of the Elastic Stack and is designed to work with Elasticsearch. You can use Kibana to search, view, and interact with data stored in Elasticsearch indices, and to analyze and present that data in a wide variety of charts, tables, and maps. Kibana makes big data easy to understand: its simple, browser-based interface lets you quickly create and share dynamic dashboards that track changes in Elasticsearch data in real time.
Edit the configuration file kibana.yml:
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.218.143:9200"
Start Kibana in the background:
bin/kibana & (by default the configuration is loaded from $KIBANA_HOME/config/kibana.yml)
Open http://192.168.218.143:5601 in a browser.
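If the page does not load, a quick command-line check (just an optional sanity check) is:
curl -I http://192.168.218.143:5601    # any HTTP response means Kibana is listening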
Use the Kibana Dev Tools console to run the following requests against the cluster.
GET _search
{
  "query": {
    "match_all": {}
  }
}
PUT /book
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  }
}
PUT /book2
GET /book,book2/_settings
GET /_all/_settings
PUT /book/book_type/1
{
  "first_name": "Jane",
  "last_name": "Smith",
  "age": 32,
  "about": "I like to collect rock albums",
  "interests": ["music"]
}
POST /book/book_type/
{
  "first_name": "Douglas",
  "last_name": "Fir",
  "age": 24,
  "about": "I like to build cabinets",
  "interests": ["forestry"]
}
GET /book/book_type/1
GET /book/book_type/1?_source=first_name,interests
PUT /book/book_type/1
{
  "first_name": "mq",
  "last_name": "d",
  "age": 23,
  "about": "I like to collect rock albums",
  "interests": ["music"]
}
POST /book/book_type/1/_update
{
  "doc": {
    "about": "I want to do all those i like"
  }
}
GET /_mget
{
  "docs": [
    {
      "_index": "book",
      "_type": "book_type",
      "_id": 1
    },
    {
      "_index": "book",
      "_type": "book_type",
      "_id": 2,
      "_source": ["first_name"]
    }
  ]
}
GET /book/_mget
{
  "docs": [
    {
      "_type": "book_type",
      "_id": 1
    },
    {
      "_index": "book",
      "_type": "book_type",
      "_id": 2,
      "_source": ["first_name"]
    }
  ]
}