ES
Elasticsearch Configuration
- node01
#node01 config/elasticsearch.yml
#Cluster name
cluster.name: elastic-7.8.1
#Name of this node within the cluster
node.name: node-1
#Data directory; point it at the mount of the dedicated partition
path.data: /opt/app/elasticsearch/data
#Log directory
path.logs: /opt/app/elasticsearch/log
#IP address and ES service port of this host [change the IP for each server]
network.host: node01
#WebUI port, default 9200
http.port: 7920
#Whether this node is master-eligible
node.master: true
node.data: true
# The head plugin needs these two settings enabled
#* allows all origins
http.cors.allow-origin: "*"
#Whether cross-origin requests are allowed
http.cors.enabled: true
#Workaround for a known issue when installing ES on Linux
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
#New in ES 7.x: required to elect a master when bootstrapping a new cluster
cluster.initial_master_nodes:
- node-1
#New in ES 7.x: node discovery [on Linux use the machine IPs; adjust for your servers]
discovery.seed_hosts:
- node01:9300
- node02:9300
- node03:9300
- node02
#node02 config/elasticsearch.yml
#Cluster name
cluster.name: elastic-7.8.1
#Name of this node within the cluster
node.name: node-2
#Data directory; point it at the mount of the dedicated partition
path.data: /opt/app/elasticsearch/data
#Log directory
path.logs: /opt/app/elasticsearch/log
#IP address and ES service port of this host [change the IP for each server]
network.host: node02
#WebUI port, default 9200
http.port: 7920
#Whether this node is master-eligible
node.master: false
node.data: true
# The head plugin needs these two settings enabled
#* allows all origins
http.cors.allow-origin: "*"
#Whether cross-origin requests are allowed
http.cors.enabled: true
#Workaround for a known issue when installing ES on Linux
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
#New in ES 7.x: required to elect a master when bootstrapping a new cluster
cluster.initial_master_nodes:
- node-1
- node-2
- node-3
#New in ES 7.x: node discovery [on Linux use the machine IPs; adjust for your servers]
discovery.seed_hosts:
- node01:9300
- node02:9300
- node03:9300
- node03
#node03 config/elasticsearch.yml
#Cluster name
cluster.name: elastic-7.8.1
#Name of this node within the cluster
node.name: node-3
#Data directory; point it at the mount of the dedicated partition
path.data: /opt/app/elasticsearch/data
#Log directory
path.logs: /opt/app/elasticsearch/log
#IP address and ES service port of this host [change the IP for each server]
network.host: node03
#WebUI port, default 9200
http.port: 7920
#Whether this node is master-eligible
node.master: false
node.data: true
# The head plugin needs these two settings enabled
#* allows all origins
http.cors.allow-origin: "*"
#Whether cross-origin requests are allowed
http.cors.enabled: true
#Workaround for a known issue when installing ES on Linux
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
#New in ES 7.x: required to elect a master when bootstrapping a new cluster
cluster.initial_master_nodes:
- node-1
- node-2
- node-3
#New in ES 7.x: node discovery [on Linux use the machine IPs; adjust for your servers]
discovery.seed_hosts:
- node01:9300
- node02:9300
- node03:9300
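The three node files above differ only in node.name, network.host, and node.master, so they can be generated from a single template. A minimal Python sketch of that idea (the render helper is hypothetical, not part of the Elastic stack):

```python
# Render per-node elasticsearch.yml snippets from one template.
# Only node.name, network.host, and node.master differ across nodes.
TEMPLATE = """cluster.name: elastic-7.8.1
node.name: {node_name}
path.data: /opt/app/elasticsearch/data
path.logs: /opt/app/elasticsearch/log
network.host: {host}
http.port: 7920
node.master: {master}
node.data: true
http.cors.allow-origin: "*"
http.cors.enabled: true
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
cluster.initial_master_nodes:
- node-1
discovery.seed_hosts:
- node01:9300
- node02:9300
- node03:9300
"""

# (node.name, network.host, node.master) per node, as in the configs above.
NODES = [
    ("node-1", "node01", "true"),
    ("node-2", "node02", "false"),
    ("node-3", "node03", "false"),
]

def render(node_name: str, host: str, master: str) -> str:
    """Fill the per-node fields into the shared template."""
    return TEMPLATE.format(node_name=node_name, host=host, master=master)

configs = {host: render(name, host, master) for name, host, master in NODES}
```

In practice you would write each rendered string to `config/elasticsearch.yml` on the matching host.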
Index Operations on Linux [examples]
In production the ES cluster has login authentication enabled, so every request must carry a username and password:
Example:
curl -u gtes:gt-zhfxes66 -XGET node01:7920/_cat/indices?v
1. Check whether the ES node started correctly
curl http://192.168.6.16:9200
2. Check cluster health
curl http://node01:7920/_cat/health?v
3. List all indices
curl http://node01:7920/_cat/indices?v
4. Create a new index
curl -XPUT http://node01:7920/my_new_index?pretty
5. Insert a document into the new index
curl -XPUT -H "Content-Type: application/json" http://node01:7920/my_new_index/user/1?pretty -d '{"name":"张三","age":"23"}'
6. Delete a document by ID
curl -XDELETE http://node01:7920/my_new_index/user/1?pretty
7. View the ES cluster's master node
http://node01:7920/_cat/nodes?pretty
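The `-u user:password` flag makes curl send an HTTP Basic Authorization header. A small Python sketch of the same header construction, using the example credentials above:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header value that curl -u sends (HTTP Basic auth)."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Same credentials as the curl -u example above.
header = basic_auth_header("gtes", "gt-zhfxes66")
```

Any HTTP client can pass this value in an `Authorization` header to reach the secured cluster.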
Logstash
Collecting CSV files with Logstash
1. logstash-hs300_total_profit_rate.conf
input {
file {
path => ["/opt/app/logstash/start/data/zhfx_hs300_profit_curmonth.csv"]
start_position => "beginning"
}
}
filter {
csv {
separator => "|$|"
columns => ["tradedate","recent","hs300_total_profit_rate"]
}
mutate {
convert => {
"tradedate" => "integer"
"recent" => "integer"
"hs300_total_profit_rate" => "float"
}
}
mutate {
remove_field => ["message","host","@timestamp","path","@version","column4"]
}
}
output {
elasticsearch {
hosts => ["node01:7920","node02:7920","node03:7920"]
index => "hs300_total_profit_rate"
document_id => "%{tradedate}"
template => "/opt/app/logstash/start/template/hs300_total_profit_rate.json"
manage_template => true
template_name => "hs300_total_profit_rate"
template_overwrite => true
}
stdout { codec => rubydebug }
}
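The filter chain above splits each line on the `|$|` separator, names the three columns, and converts their types. A standalone Python sketch of the same per-line transformation (field names taken from the config):

```python
def parse_line(line: str) -> dict:
    """Mimic the csv + mutate filters: split on |$|, name the columns, convert types."""
    columns = ["tradedate", "recent", "hs300_total_profit_rate"]
    values = line.rstrip("\n").split("|$|")
    event = dict(zip(columns, values))
    # mutate/convert: integers for the first two fields, float for the rate
    event["tradedate"] = int(event["tradedate"])
    event["recent"] = int(event["recent"])
    event["hs300_total_profit_rate"] = float(event["hs300_total_profit_rate"])
    return event

# Hypothetical sample row in the file's format.
sample = parse_line("20210830|$|30|$|0.0123")
```

In the real pipeline, `document_id => "%{tradedate}"` then makes `tradedate` the ES document ID, so re-ingesting the file overwrites rather than duplicates documents.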
Mapping Design
To apply a template, add the following to the output block:
output {
elasticsearch {
host => "IP:Port" #ES server address
protocol => "http" #protocol to use; the default may be the node protocol, depending on the environment
index => "logstash-%{+YYYY.MM.dd}"
document_type => "test" #index type; older configs used index_type, but that option was removed in newer versions; use document_type instead
manage_template => true #note the default is true; it must not be set to false
template_overwrite => true #if true, a new template overwrites an existing template with the same name
template_name => "myLogstash" #this name is used to look up the mapping configuration; make it globally unique if possible
template => "/opt/logstash/config/templates/test.json" #path to the mapping configuration file
}
}
hs300_total_profit_rate.json
{
"template": "hs300_total_profit_rate*",
"order": 1,
"settings": {
"index.number_of_replicas": "1",
"index.number_of_shards": "3",
"refresh_interval": "20s",
"index.search.slowlog.threshold.query.warn": "10s",
"index.search.slowlog.threshold.query.info": "5s",
"index.search.slowlog.threshold.query.debug": "2s",
"index.search.slowlog.threshold.query.trace": "500ms"
},
"mappings": {
"properties": {
"hs300_total_profit_rate": {
"type": "float"
},
"recent": {
"type": "integer"
},
"tradedate": {
"type": "integer"
}
}
}
}
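Index templates are plain JSON, and a subtle mistake such as a duplicate key (the original draft of this template carried two "order" keys) is silently swallowed by most parsers, which keep only the last value. A quick validation sketch that fails loudly instead:

```python
import json

# Abbreviated copy of the template above, used here only as test input.
template_text = """{
  "template": "hs300_total_profit_rate*",
  "order": 1,
  "settings": {"index.number_of_shards": "3", "index.number_of_replicas": "1"},
  "mappings": {"properties": {
    "hs300_total_profit_rate": {"type": "float"},
    "recent": {"type": "integer"},
    "tradedate": {"type": "integer"}
  }}
}"""

def reject_duplicate_keys(pairs):
    """json.loads keeps only the last duplicate key; raise an error instead."""
    seen = {}
    for key, value in pairs:
        if key in seen:
            raise ValueError(f"duplicate key: {key}")
        seen[key] = value
    return seen

template = json.loads(template_text, object_pairs_hook=reject_duplicate_keys)
```

Running this check before pointing Logstash's `template` option at the file catches syntax problems early.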
Run the script
start_logstash-hs300_total_profit_rate.sh
echo "If this config is already running, kill the old logstash process before running this script"
curl -XDELETE http://node01:7920/hs300_total_profit_rate?pretty >/opt/app/logstash/start/logs/hs300_total_profit_rate.out 2>&1
rm -rf path/hs300_total_profit_rate/
nohup ../bin/logstash -f ./confs/logstash-hs300_total_profit_rate.conf --path.data=/opt/app/logstash/start/path/hs300_total_profit_rate >/opt/app/logstash/start/logs/hs300_total_profit_rate.out 2>&1 &
ES Authentication
Single node
1. Add the following to elasticsearch.yml
# X-Pack configuration
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
Restart ES afterwards.
2. Set passwords
cd /opt/app/elasticsearch
./bin/elasticsearch-setup-passwords interactive -u 'http://node01:7920'
Cluster
- Activate X-Pack on any one node
curl -H "Content-Type:application/json" -XPOST http://node01:7920/_xpack/license/start_trial?acknowledge=true
[elastic@node03 elasticsearch]$ curl -H "Content-Type:application/json" -XPOST http://node01:7920/_xpack/license/start_trial?acknowledge=true
{"acknowledged":true,"trial_was_started":true,"type":"trial"}[elastic@node03 elasticsearch]$
If you see the error below, specify the initial master nodes in every node's config file:
{
"error": {
"root_cause": [
{
"type": "master_not_discovered_exception",
"reason": null
}
],
"type": "master_not_discovered_exception",
"reason": null
},
"status": 503
}
- Enable X-Pack
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
1. Certificates
Run the following on any one node, then copy the generated certificates to the other nodes.
/opt/app/elasticsearch/bin/elasticsearch-certutil ca
=======Please enter the desired output file [elastic-stack-ca.p12]: press Enter
=======Enter password for elastic-stack-ca.p12 : 123456
#After elastic-stack-ca.p12 is generated, run elasticsearch-certutil again; note that the path to elastic-stack-ca.p12 must be a full path
./bin/elasticsearch-certutil cert --ca /opt/app/elasticsearch/elastic-stack-ca.p12
=======Enter password for CA (/opt/app/elasticsearch/elastic-stack-ca.p12) : 123456
=======Please enter the desired output file [elastic-certificates.p12]: press Enter
=======Enter password for elastic-certificates.p12 : 123456
2. Distribute
Run as the user that owns the ES installation
#Move the generated files into config
mv elastic-* config/
#The path below is a symlink; without one it would be elasticsearch-7.8.1
scp -r elastic-* elastic@node02:/opt/app/elasticsearch/config
scp -r elastic-* elastic@node03:/opt/app/elasticsearch/config
3. Configure [all nodes]
Add the X-Pack settings to elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
4. Add passwords
- Add a password for xpack.security.transport on each node; run both commands on every node [123456 in this example]
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
5. Restart the ES nodes, then check the cluster status
curl -u elastic:123456 -XGET node01:7920/_cat/health?v
6. Set passwords for the built-in accounts [all three nodes must be running]
Otherwise you will get: ERROR: Failed to set password for user [apm_system].
ES ships with several built-in accounts for managing the other stack components: apm_system, beats_system, elastic, kibana, logstash_system, remote_monitoring_user. Their passwords must be set before use.
./bin/elasticsearch-setup-passwords interactive
[elastic@node01 elasticsearch]$ ./bin/elasticsearch-setup-passwords interactive
future versions of Elasticsearch will require Java 11; your Java version from [/opt/app/jdk1.8.0_181/jre] does not meet this requirement
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
Logstash: add the user and password for accessing the secured ES cluster
user => "elastic" # note: the superuser is used here for the demo; for safety, prefer a dedicated account granted index-creation privileges
password => "123456" # the password set in the steps above
elasticsearch-head: user and password for accessing the secured ES cluster
#Configure the user and password elasticsearch-head uses to reach the ES cluster
#http.cors.allow-headers: Authorization,content-type
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
#http://192.168.88.111:9100/?auth_user=elastic&auth_password=1234567
http://wsy1:9100/?auth_user=elastic&auth_password=changeme
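The head UI takes the credentials as URL query parameters. A tiny sketch building that URL with Python's standard library (host and credentials are the example values above):

```python
from urllib.parse import urlencode

def head_url(host: str, user: str, password: str) -> str:
    """Build the elasticsearch-head URL with auth passed as query parameters."""
    query = urlencode({"auth_user": user, "auth_password": password})
    return f"http://{host}:9100/?{query}"

url = head_url("wsy1", "elastic", "changeme")
```

`urlencode` also takes care of escaping if the password contains characters that are not URL-safe.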
Kibana: accessing an Elasticsearch cluster with security enabled
elasticsearch.username: "elastic" # note: rather than the superuser elastic, this should be the built-in kibana account that Kibana uses to connect to ES
elasticsearch.password: "123456"
Note: in Kibana 7.x,
elasticsearch.url was replaced by elasticsearch.hosts