Elasticsearch 7.13.1 Single-Node and Cluster Deployment
Required files
jdk-8u191-linux-x64.tar.gz —> JDK
elasticsearch-7.13.1-linux-x86_64.tar.gz —> Elasticsearch package
elasticsearch-analysis-ik-7.13.1.zip —> IK Chinese-analysis plugin for Elasticsearch
kibana-7.13.1-linux-x86_64.tar.gz —> Kibana
Download jdk-8u191-linux-x64.tar.gz from the Oracle website: https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
Other download links:
IK Chinese-analysis plugin: https://github.com/medcl/elasticsearch-analysis-ik/releases
Elasticsearch package: https://www.elastic.co/cn/downloads/past-releases#elasticsearch
Kibana: https://www.elastic.co/cn/downloads/past-releases#kibana
Cluster Setup
IP | Hostname
---|---
192.168.56.101 | esnode1
192.168.56.102 | esnode2
192.168.56.103 | esnode3
Unless a step is explicitly marked "as root", all operations below are performed under the testuser account.
Install the JDK (run on all three machines, as root)
tar -xvf jdk-8u191-linux-x64.tar.gz -C /usr/local
cat >> /etc/profile << EOF
export JAVA_HOME=/usr/local/jdk1.8.0_191/
export PATH=\$JAVA_HOME/bin:\$PATH
export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar
EOF
source /etc/profile
##Verify the JDK installation
java -version
Configure Linux limits (run on all three machines; requires root)
Without these settings, Elasticsearch will fail its bootstrap checks and refuse to start.
####
cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
EOF
####
##On CentOS 7, /etc/security/limits.d/20-nproc.conf overrides limits.conf;
##raise its default "* soft nproc 4096" line to 65536
sed -i 's/^\*\s*soft\s*nproc\s*4096$/* soft nproc 65536/' /etc/security/limits.d/20-nproc.conf
####
cat >> /etc/sysctl.conf << EOF
vm.max_map_count=655360
EOF
sysctl -p
Then close your terminal session (Xshell, SecureCRT, etc.) and reconnect for the new limits to take effect. Check with ulimit -n: if it prints 65536, the change worked; if not, try rebooting the server.
####If it still doesn't work after a reboot, use this last-resort fix
vim /etc/systemd/system.conf
DefaultLimitNOFILE=65536
DefaultLimitNPROC=65536
Change these two settings, then reboot the server once.
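After reconnecting, the three values above can be checked in one pass. A minimal sketch (the thresholds are the values set in this guide; `check` is a hypothetical helper, not part of Elasticsearch):

```shell
# Quick sanity check for the limits configured above.
# check NAME ACTUAL EXPECTED -> prints "OK" when ACTUAL >= EXPECTED
check() {
  case $2 in
    unlimited) echo "OK   $1=$2" ;;
    *) if [ "$2" -ge "$3" ] 2>/dev/null; then
         echo "OK   $1=$2"
       else
         echo "FAIL $1=$2 (want >= $3)"
       fi ;;
  esac
}

check nofile        "$(ulimit -n)" 65536
check nproc         "$(ulimit -u)" 65536
check max_map_count "$(sysctl -n vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count)" 655360
```

Run it as testuser in a fresh session; all three lines should print OK before starting Elasticsearch.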
Install Elasticsearch (run on all three machines)
This guide installs into testuser's home directory, /home/testuser/
##Extract into /home/testuser/
tar -xvf /home/testuser/elasticsearch-7.13.1-linux-x86_64.tar.gz -C /home/testuser/
##Change ownership to testuser; Elasticsearch will be started as this user later
cd /home/testuser/
chown -R testuser:testuser /home/testuser/elasticsearch-7.13.1/
##Create the data directory
cd /home/testuser/elasticsearch-7.13.1/
mkdir data
Edit the configuration
/home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml
----Setting reference----->>>>>
cluster.name: testuser-es ---->>>cluster name; nodes sharing the same name form one cluster
node.name: node-1 ---->>>node name; must be unique per node in cluster mode
node.master: true ---->>>whether this node may be elected master (true/false)
node.data: true ---->>>whether this node stores data (true/false)
path.data: ---->>>where index data is stored
path.logs: ---->>>where log files are stored
bootstrap.memory_lock ---->>>lock the process's physical memory (true/false)
bootstrap.system_call_filter ---->>>seccomp system-call filtering (true/false)
network.host ---->>>listen address used to reach this Elasticsearch node
network.publish_host ---->>>may be set to an internal IP used for inter-node communication
http.port ---->>>HTTP port Elasticsearch exposes, default 9200
discovery.seed_hosts ---->>>new in 7.x; addresses of the master-eligible candidate nodes, which can be elected master once the service is up
cluster.initial_master_nodes ---->>>new in 7.x; required to elect a master when bootstrapping a brand-new cluster
http.cors.enabled ---->>>whether cross-origin requests are allowed (true); needed by the head plugin
http.cors.allow-origin ---->>>"*" allows all origins
1. Configuration for 192.168.56.101
cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-1
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.101
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101", "192.168.56.102", "192.168.56.103"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
EOF
2. Configuration for 192.168.56.102
cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-2
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.102
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101", "192.168.56.102", "192.168.56.103"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
EOF
3. Configuration for 192.168.56.103
cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-3
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.103
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101", "192.168.56.102", "192.168.56.103"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
EOF
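The three configs above differ only in node.name and network.host. If you prefer, a loop like the following sketch generates all three files from one template (OUT_DIR is a hypothetical staging directory; copy each generated file to the matching node's config/elasticsearch.yml yourself):

```shell
# Generate the three per-node configs from one template.
# Only node.name and network.host vary between the machines.
ES_HOME=${ES_HOME:-/home/testuser/elasticsearch-7.13.1}
OUT_DIR=${OUT_DIR:-.}   # files are written here as elasticsearch.yml.<ip>

i=0
for ip in 192.168.56.101 192.168.56.102 192.168.56.103; do
  i=$((i + 1))
  cat > "$OUT_DIR/elasticsearch.yml.$ip" << EOF
cluster.name: testuser-es
node.name: node-$i
node.master: true
node.data: true
path.data: $ES_HOME/data
path.logs: $ES_HOME/logs
network.host: $ip
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101", "192.168.56.102", "192.168.56.103"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
EOF
done
```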
Install the IK Chinese analyzer (run on all three machines; optional)
Extract elasticsearch-analysis-ik-7.13.1.zip into plugins/ik/ (either by hand or with the commands below)
mkdir /home/testuser/elasticsearch-7.13.1/plugins/ik
unzip -o -d /home/testuser/elasticsearch-7.13.1/plugins/ik elasticsearch-analysis-ik-7.13.1.zip
##If you see -bash: unzip: command not found, install unzip first
yum install unzip -y
Open the firewall ports (run on all three machines)
If the firewall is enabled, open ports 9200 and 9300; if it is disabled, skip this step.
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
firewall-cmd --reload
At this point the three-node Elasticsearch cluster is configured.
Start Elasticsearch in the background (run on all three machines, as testuser)
cd /home/testuser/elasticsearch-7.13.1/bin/
./elasticsearch -d
Check the cluster status
curl http://192.168.56.101:9200/_cat/nodes
curl http://192.168.56.102:9200/_cat/nodes
curl http://192.168.56.103:9200/_cat/nodes
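_cat/nodes lists the cluster members; _cluster/health additionally reports an overall "status" of green, yellow, or red. A small sketch for pulling out that field without jq (the sample JSON only mirrors the response shape; against a live cluster, pipe in `curl -s http://192.168.56.101:9200/_cluster/health` instead):

```shell
# health_status: read _cluster/health JSON on stdin, print the status field.
#   green  = all shards allocated
#   yellow = some replica shards unassigned
#   red    = some primary shards unassigned
health_status() {
  sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p'
}

# Sample response shape (stand-in for: curl -s http://192.168.56.101:9200/_cluster/health)
sample='{"cluster_name":"testuser-es","status":"green","number_of_nodes":3}'
echo "$sample" | health_status   # -> green
```

With replicas configured and only one node running, yellow is expected; it turns green once the other nodes join.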
Install Kibana (optional)
Kibana can be installed on any one machine (or on all three); this example uses 192.168.56.101
##Extract
cd /home/testuser
tar -zxvf kibana-7.13.1-linux-x86_64.tar.gz -C /home/testuser/
mv kibana-7.13.1-linux-x86_64 kibana
##Edit the configuration
vi /home/testuser/kibana/config/kibana.yml
server.host: "192.168.56.101"
elasticsearch.hosts: ["http://192.168.56.101:9200"]
##Start Kibana
/home/testuser/kibana/bin/kibana &
##Open port 5601
firewall-cmd --zone=public --add-port=5601/tcp --permanent
firewall-cmd --reload
##Open this URL in a browser to test
http://192.168.56.101:5601/app/dev_tools#/console
Single-Node Setup
A single-node deployment is just the cluster procedure collapsed onto one machine: run every step on 192.168.56.101 only, and shrink discovery.seed_hosts and cluster.initial_master_nodes to a single entry. Everything else follows the cluster instructions. The configuration:
cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-1
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.101
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101"]
cluster.initial_master_nodes: ["node-1"]
EOF
CRUD Tests
1. Create an index. Since 7.x the default is 1 primary shard and 1 replica (earlier versions defaulted to 5 shards); here we use 3 shards and 1 replica
PUT /indextest1
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
2. Get the settings of the index just created
GET /indextest1/_settings
3. Add a document with PUT; an explicit id must be given
PUT /indextest1/_doc/1
{
  "first_name": "Jane",
  "last_name": "Smith",
  "age": 32,
  "about": "i like to collect rock albums",
  "interests": ["music"]
}
4. Add a document with POST; with no id given, Elasticsearch generates one
POST /indextest1/_doc
{
  "first_name": "Douglas",
  "last_name": "Fir",
  "age": 23,
  "about": "i like to build cabinets",
  "interests": ["forestry"]
}
5. Get a document by id
GET /indextest1/_doc/xklYGHoB1iFjYcRaTFye
6. Get a document by id, returning only selected fields
GET /indextest1/_doc/1?_source=first_name,age
7. Update a document: PUT replaces the previous document entirely with the new one
PUT /indextest1/_doc/1
{
  "first_name": "Jane",
  "last_name": "Smith",
  "age": 36,
  "about": "i like to collect rock albums",
  "interests": ["music"]
}
8. Partial update: use the _update endpoint to change individual fields (a plain POST to /_doc/1 would replace the whole document)
POST /indextest1/_update/1
{
  "doc": {
    "age": 18,
    "last_name": "hua1"
  }
}
9. Delete a document
DELETE /indextest1/_doc/1
10. Delete the index
DELETE /indextest1
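To confirm that the IK plugin installed earlier actually loaded, the _analyze API can be called from the same Kibana console. ik_max_word (finest-grained splitting) and ik_smart (coarsest) are the two analyzers the plugin provides:

```
POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国国歌"
}
```

The response lists the tokens produced; if the plugin is missing, Elasticsearch instead returns an error saying the analyzer ik_max_word was not found.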