Deploying single-node ElasticSearch and syncing MongoDB data via monstache

References:
https://www.cnblogs.com/balloon72/p/13177872.html (elasticsearch and kibana installation)
https://www.cnblogs.com/fuguang/p/13745336.html (syncing data with monstache)

1. ElasticSearch and Kibana installation

Prepare the configuration files

mkdir -p /mydata/elasticsearch/config
mkdir -p /mydata/elasticsearch/data
mkdir -p /mydata/kibana/config
mkdir -p /mydata/monstache-conf
echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml
echo "http.cors.enabled: true" >> /mydata/elasticsearch/config/elasticsearch.yml
echo "http.cors.allow-origin: \"*\"" >> /mydata/elasticsearch/config/elasticsearch.yml
# Added 2022-5-7: the following two lines enable security authentication
echo "xpack.security.enabled: true" >> /mydata/elasticsearch/config/elasticsearch.yml
echo "xpack.security.transport.ssl.enabled: true" >> /mydata/elasticsearch/config/elasticsearch.yml

chmod 777 /mydata/elasticsearch/config
chmod 777 /mydata/kibana/config
chmod 777 /mydata/elasticsearch/data
chmod 777 /mydata/monstache-conf
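
A quick sanity check that the appended file matches the echo commands above:

cat /mydata/elasticsearch/config/elasticsearch.yml
# expected output:
# http.host: 0.0.0.0
# http.cors.enabled: true
# http.cors.allow-origin: "*"
# xpack.security.enabled: true
# xpack.security.transport.ssl.enabled: true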

Edit /mydata/kibana/config/kibana.yml:

elasticsearch.hosts: http://elasticsearch:9200
server.host: "0.0.0.0"
server.name: kibana
xpack.monitoring.ui.container.elasticsearch.enabled: true
# Added 2022-5-7: the following two lines enable security authentication
elasticsearch.username: "elastic"  # ES account
elasticsearch.password: "******"   # ES password
i18n.locale: zh-CN

Edit /mydata/monstache-conf/monstache.config.toml with the following content:

# connection settings

# connect to MongoDB using the following URL
# Modified 2022-5-12: the password in the following line
mongo-url = "mongodb://root:******@192.168.3.208:27017,192.168.3.208:27018/nfy-csia?slaveOk=true&write=1&readPreference=secondaryPreferred&connectTimeoutMS=300000&authSource=admin&authMechanism=SCRAM-SHA-1"
# connect to the Elasticsearch REST API at the following node URLs
elasticsearch-urls = ["http://192.168.3.208:9200"]
direct-read-namespaces = ["nfy-csia.capMessage","nfy-csia.vehicleMessage"]
change-stream-namespaces = ["nfy-csia.capMessage","nfy-csia.vehicleMessage"]

# use the following user name for Elasticsearch basic auth
elasticsearch-user = "elastic"
# use the following password for Elasticsearch basic auth
# Modified 2022-5-7: the password in the following line
elasticsearch-password = "******"
# use 4 go routines concurrently pushing documents to Elasticsearch
elasticsearch-max-conns = 4
# propagate dropped collections in MongoDB as index deletes in Elasticsearch
dropped-collections = true
# propagate dropped databases in MongoDB as index deletes in Elasticsearch
dropped-databases = true
# replay all events from the beginning of the MongoDB oplog; you may see version conflict errors
# in the log if you had synced previously. This just means that you are replaying old docs which are already
# in Elasticsearch with a newer version. Elasticsearch is preventing the old docs from overwriting new ones.
replay = false
# resume processing from a timestamp saved in a previous run
resume = true
index-as-update = true
# resume strategy: 0 uses the default timestamp-based resume, 1 uses change stream tokens;
# tokens work with MongoDB API 3.6+ while timestamps work only with MongoDB API 4.0+
resume-strategy = 0
# print detailed information including request traces
verbose = true
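
Before wiring this config into docker-compose, it can be tried with a one-off container once MongoDB and Elasticsearch are reachable at the URLs above (a sketch; the image tag, mount path, and -f flag are the same ones used by the compose service added later):

docker run --rm \
  -v /mydata/monstache-conf/monstache.config.toml:/app/monstache.config.toml \
  rwynn/monstache:rel6 -f /app/monstache.config.toml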

Prepare the container configuration
Content added to the docker-compose file (in practice appended to the existing docker-compose on server 192.168.3.249):

elasticsearch:
  image: elasticsearch:7.14.2
  restart: always
  container_name: elasticsearch
  deploy:
    resources:
      limits:
        cpus: "4"
        memory: 6G
      reservations:
        memory: 2G
  environment:
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms1024m -Xmx4096m"
  volumes:
    - /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    - /mydata/elasticsearch/data:/usr/share/elasticsearch/data
    - /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins
  ports:
    - 9200:9200
    - 9300:9300
  networks:
    - csia
kibana:
  image: kibana:7.14.2
  restart: always
  container_name: es-kibana
  deploy:
    resources:
      limits:
        cpus: "1"
        memory: 500M
      reservations:
        memory: 100M
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  volumes:
    - /mydata/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  networks:
    - csia
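
After appending these two services, they can be brought up and checked before moving on to the password setup below (a sketch, assuming the commands run in the directory containing the docker-compose file):

docker-compose up -d elasticsearch kibana
docker logs -f elasticsearch   # wait until startup completes without errors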

Password setup steps added 2022-5-7:
After ES has started, enter the container and set the passwords:

# enter the container
docker exec -it elasticsearch /bin/bash
# set the passwords interactively; several built-in users need passwords, as shown in the figure below
elasticsearch-setup-passwords interactive
# when finished, test access
curl 127.0.0.1:9200 -u elastic:******

![image.png](https://img-blog.csdnimg.cn/img_convert/292072c0d2495c84e1359b04d6d2c49c.png)
As shown above, passwords are set for several built-in users. The one that matters is the first user, elastic, which is set to ******; the remaining users can be given the same password and are not important here.

Verify: http://IP:5601/app/kibana
![image.png](https://img-blog.csdnimg.cn/img_convert/020136f10886d145ff2471d5f9dec80c.png)
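
A command-line check is also possible (a sketch; with security enabled, Kibana's status API is queried with the elastic credentials set above):

curl -s -u elastic:****** http://127.0.0.1:5601/api/status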

2. MongoDB sync configuration and monstache installation:

Append one line to the existing MongoDB container definition and restart it:

command: mongod --replSet repset

Add a MongoDB replica-set member container to docker-compose:

mongo-replSet:
  image: mongo:4.1.13
  restart: always
  deploy:
    resources:
      limits:
        cpus: "2"
        memory: 4G
      reservations:
        memory: 200M
  logging:
    driver: "json-file"
    options:
      max-size: "500m"
  privileged: true
  ports:
    - 27018:27017
  networks:
    - csia
  command: mongod --replSet repset
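
Before initiating the replica set, it helps to confirm that both mongod containers are up (a quick sketch; the container name is a placeholder for whatever name your existing mongo service uses):

docker ps | grep mongo        # both containers should show Up
docker exec -it <mongo-container-name> mongo --eval 'db.runCommand({ ping: 1 })'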


Once both mongo containers are running, enter one of them and run the command that links them into a replica set:

docker exec -it <mongo-container-name> bash
mongo
rs.initiate({_id:"repset",members:[{_id:0,host:"192.168.3.249:27017"},{_id:1,host:"192.168.3.249:27018"}]})

It finishes when the command returns ok.
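
To confirm the replica set is healthy, run rs.status() in the same mongo shell:

rs.status()
// check members[].stateStr: one member should report PRIMARY and the other SECONDARY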
Add the monstache sync-tool container to docker-compose:

monstache:
  image: rwynn/monstache:rel6
  restart: always
  container_name: monstache
  volumes:
    - /mydata/monstache-conf/monstache.config.toml:/app/monstache.config.toml
  deploy:
    resources:
      limits:
        cpus: "1"
        memory: 500M
      reservations:
        memory: 100M
  command: -f /app/monstache.config.toml
  depends_on:
    - mongo

When finished, start docker-compose.
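
For example (a sketch; run in the directory containing the compose file, then confirm monstache connects to both MongoDB and Elasticsearch without errors):

docker-compose up -d
docker logs -f monstache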

Check the indices and document counts in ES: curl -s -XGET --user elastic:<password> 'http://127.0.0.1:9200/_cat/indices/?v'
Under normal conditions, as shown in the figure, the MongoDB-sourced indices are listed and their document counts keep increasing.
![image.png](https://img-blog.csdnimg.cn/img_convert/87c49a96f43d09bc715e5e9943f97d32.png)
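
A quick way to confirm documents are searchable (a sketch; the index name follows the MongoDB namespace in lowercase, matching the mapping calls in the appendix below):

curl -s -u elastic:****** 'http://127.0.0.1:9200/nfy-csia.capmessage/_count?pretty'
curl -s -u elastic:****** 'http://127.0.0.1:9200/nfy-csia.capmessage/_search?size=1&pretty'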

Update 2022-5-7: the feature below proved inefficient in testing and is no longer used.
Appendix: because the tactics feature relies on aggregation queries, after initializing ES the cameraId and archivesInfo.archivesId fields need the following configuration:
Kibana -> Dev Tools ->
Run the following:
1.
PUT nfy-csia.capmessage/_mapping?pretty
{
  "properties": {
    "cameraId": {
      "type": "text",
      "fielddata": true
    }
  }
}
2.
PUT nfy-csia.capmessage/_mapping?pretty
{
  "properties": {
    "archivesInfo.archivesId": {
      "type": "text",
      "fielddata": true
    }
  }
}
![image.png](https://img-blog.csdnimg.cn/img_convert/9e724c0ca6c904d924d9d9a9cd80e9d1.png)
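
For reference, the kind of aggregation these fielddata mappings were meant to enable looks like this in Kibana Dev Tools (an illustrative sketch only, since the approach was abandoned as noted above; the field name matches the mapping call):

GET nfy-csia.capmessage/_search
{
  "size": 0,
  "aggs": {
    "by_camera": {
      "terms": { "field": "cameraId" }
    }
  }
}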
