Installing elasticsearch:7.14.2 with Docker

Official ES image on Docker Hub: https://hub.docker.com/_/elasticsearch?tab=tags&page=1

docker pull elasticsearch:7.14.2

sudo mkdir /mnt/sda/mount/docker/elasticsearch
cd /mnt/sda/mount/docker/elasticsearch
sudo mkdir data
sudo mkdir plugins
sudo mkdir config
sudo chmod 777 data plugins config
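
A less permissive alternative to chmod 777, assuming the standard official image (which runs Elasticsearch as uid 1000, gid 0), is to hand the directories to that user instead:

sudo chown -R 1000:0 data plugins config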

Write the es.yml configuration under config/; it will be bind-mounted when the container starts.

# Cluster name
cluster.name: elasticsearch-cluster
# Node name
node.name: es-node1
# Bind address; 0.0.0.0 listens on all interfaces of this node
network.host: 0.0.0.0
# Address other nodes use to reach this node; if unset it is auto-detected, but it must be a real, routable IP (this machine's IP)
network.publish_host: 192.168.22.130
# HTTP port for external clients, default 9200
http.port: 9200
# TCP port for inter-node transport, default 9300
transport.tcp.port: 9300
# Whether to allow CORS, default false
http.cors.enabled: true
# With CORS enabled, the default "*" allows all origins; to allow only certain sites, a regex can be used instead, e.g. localhost only: /https?:\/\/localhost(:[0-9]+)?/
http.cors.allow-origin: "*"
# Whether this node may act as a master node
node.master: true
# Whether this node holds data
node.data: true
# ip:port of all master/data nodes
discovery.seed_hosts: ["192.168.22.130:9300"]
# How many master-eligible nodes must be reachable during master election, to prevent split-brain
discovery.zen.minimum_master_nodes: 1
Start the container, mounting the data, plugins, and config paths created above:

docker run -d --name es7.14.2 -p 9200:9200 -p 9300:9300  -v /mnt/sda/mount/docker/elasticsearch/data:/usr/share/elasticsearch/data  -v /mnt/sda/mount/docker/elasticsearch/plugins:/usr/share/elasticsearch/plugins  -v /mnt/sda/mount/docker/elasticsearch/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml  elasticsearch:7.14.2



Alternatively, a single-node run with a small heap (this example is for 7.8.1, referencing the image by ID):

docker run -d --name es7.8.1 -p 9200:9200 -p 9300:9300 -v /home/es/data:/usr/share/elasticsearch/data -v /home/es/plugins:/usr/share/elasticsearch/plugins -v /home/es/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -e "discovery.type=single-node" a529963ec236
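
Once the container is up, a quick check that the node responds (adjust host/port to your run command):

curl http://localhost:9200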

Startup error: bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

The cause is that the host's vm.max_map_count limit is too low for Elasticsearch's memory-mapped files (it is not a JVM heap problem). Raise the system limits as follows.

/etc/security/limits.conf

Add the following entries:

* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
Then set the kernel parameter in /etc/sysctl.conf (vim /etc/sysctl.conf):

vm.max_map_count=655360
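
Apply the sysctl change without rebooting:

sysctl -p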

Enable authentication: exec into the container and edit the config file to turn on security checks.
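
For example, using the container name from the run command above:

docker exec -it es7.14.2 /bin/bash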

config/elasticsearch.yml

# Headers allowed on CORS requests, default X-Requested-With,Content-Type,Content-Length
http.cors.allow-headers: Authorization
# Enable the X-Pack security (authentication) mechanism
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

Restart the container.
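
For example:

docker restart es7.14.2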

Then exec into the container again to set the built-in account passwords.
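
In 7.x the passwords are set with the bundled tool, run from /usr/share/elasticsearch inside the container:

bin/elasticsearch-setup-passwords interactive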

Error: Cause: Cluster state has not been recovered yet, cannot write to the [null] index

The main cause of this error is that no initial master node has been configured for Elasticsearch. Set the master node name for the single node (or the cluster) in elasticsearch.yml, as shown below:

# Initial master node name(s); the value must match node.name. If unset, enabling authentication can fail with errors like "Cluster state has not been recovered yet, cannot write to the [null] index"
cluster.initial_master_nodes: ["es-node1"]
With this setting in place, rerun the password setup and it completes successfully.

The security-related settings in elasticsearch.yml end up as follows (transport TLS requires the elastic-certificates.p12 keystore to exist under the config directory):

xpack.security.transport.ssl.enabled: true
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
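
If elastic-certificates.p12 does not exist yet, it can be generated inside the container roughly like this (accepting the defaults; the resulting file then needs to be copied into the config directory):

bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12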

To install the IK Chinese analyzer and the pinyin analyzer, exec into the container and run:

# Install the IK Chinese analyzer
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.14.2/elasticsearch-analysis-ik-7.14.2.zip
# Install the pinyin analyzer
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-pinyin/releases/download/v7.14.2/elasticsearch-analysis-pinyin-7.14.2.zip

Restart the container.
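
After the restart, a quick _analyze call confirms the IK analyzer is registered (sample text is arbitrary; adjust host/port to your environment):

POST http://192.168.1.30:9200/_analyze
{
    "analyzer": "ik_max_word",
    "text": "中华人民共和国"
}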

Create an index named doc_index.

The custom analyzer uses the IK tokenizer (ik_max_word) in the analysis settings.

number_of_replicas sets the replica count; each replica stores a full extra copy of the data, so disk usage grows by one copy per replica.

number_of_shards sets the number of primary shards.

Request: PUT http://192.168.1.30:9200/doc_index

{
    "settings": {
        "index": {
            "analysis": {
                "analyzer": {
                    "my_analyzer": {
                        "char_filter": [
                            "html_strip"
                        ],
                        "tokenizer": "ik_max_word"
                    }
                }
            },
            "number_of_replicas": 1,
            "number_of_shards": 2,
            "refresh_interval": "20s",
            "search": {
                "slowlog": {
                    "level": "info",
                    "threshold": {
                        "fetch": {
                            "info": "1000ms",
                            "warn": "1000ms"
                        },
                        "query": {
                            "info": "1000ms",
                            "warn": "1000ms"
                        }
                    }
                }
            }
        }
    }
}
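
Once the index exists, the custom analyzer can be tried directly against it (hypothetical sample text; html_strip removes the tags before tokenizing):

POST http://192.168.1.30:9200/doc_index/_analyze
{
    "analyzer": "my_analyzer",
    "text": "<p>测试分词</p>"
}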

Create the mapping for the index:

POST http://192.168.1.30:9200/doc_index/_mapping

{
    "properties": {
       
        "doc_id": {
            "type": "long"
        },
        "doc_name": {
            "type": "text"
        }
    }
}
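
With the mapping in place, a document can be indexed (hypothetical values, matching the two fields mapped above):

PUT http://192.168.1.30:9200/doc_index/_doc/1
{
    "doc_id": 1,
    "doc_name": "测试文档"
}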
A match query with highlighting (note: para_content, para_content.pinyin, and address are fields from a different index's mapping than the doc_index example above):

{
  "query":{
    "match":{
      "para_content":"看看是谁"
       
    }
  },
    "highlight": {
      "boundary_chars":".,!? \t\n,。!?",
      "pre_tags" : ["<font color='red'>"],
      "post_tags" : ["</font>"],
      "fields": {
        "para_content" : {
          "number_of_fragments" : 0
        },
        "para_content.pinyin" : {
          "number_of_fragments" : 0
        },
        "address" : {
          "number_of_fragments" : 0
        }
      }
    }
}

References:

一、Docker部署ElasticSearch7.8.1并挂载+配置X-Pack设置帐号密码+Kibana7.8.1 - 掘金

【ES从入门到实战】完整合集版,带思维导图 - 掘金
