Deploying Single-Node Elasticsearch on CentOS

Elasticsearch Installation and Setup

Deploying Elasticsearch on CentOS

# Download
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.3-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.3-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.16.3-linux-x86_64.tar.gz.sha512 
tar -xzf elasticsearch-7.16.3-linux-x86_64.tar.gz
cd elasticsearch-7.16.3/ 
# Configure: vi config/elasticsearch.yml
network.host: 0.0.0.0 # allow access from other machines
node.name: node-1
xpack.security.enabled: true  # https://www.elastic.co/guide/en/elasticsearch/reference/7.16/security-minimal-setup.html
discovery.type: single-node # run as a single node
# Install the IK analysis plugin
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.16.3/elasticsearch-analysis-ik-7.16.3.zip # may require admin privileges
# IK plugin config file: config/analysis-ik/config/IKAnalyzer.cfg.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
	<comment>IK Analyzer extension configuration</comment>
	<!-- configure your own extension dictionaries here -->
	<entry key="ext_dict">custom/mydict.dic;custom/single_word_low_freq.dic</entry>
	<!-- configure your own extension stopword dictionaries here -->
	<entry key="ext_stopwords">custom/ext_stopword.dic</entry>
	<!-- configure remote extension dictionaries here -->
	<entry key="remote_ext_dict">http://xxx.com/xxx.dic</entry>
	<!-- configure remote extension stopword dictionaries here -->
	<entry key="remote_ext_stopwords">http://xxx.com/xxx.dic</entry>
</properties>
# Hot-update the IK dictionary via remote_ext_dict
The HTTP response must return two headers, Last-Modified and ETag, both strings; whenever either one changes, the plugin fetches the word list again and updates its dictionary.
The response body lists one word per line, separated by \n.
Meeting these two requirements gives you hot dictionary updates without restarting the ES instance.
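The contract above can be sketched as a minimal dictionary endpoint using only the Python standard library. This is a hypothetical illustration (the word list and port are made up), not part of the IK plugin itself:

```python
import hashlib
from email.utils import formatdate
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["区块链", "云原生"]  # hypothetical new words to push to IK
MTIME = 0.0                    # bump whenever WORDS changes

def dict_response(words, mtime):
    """Build headers and body satisfying the IK remote_ext_dict contract."""
    body = "\n".join(words) + "\n"  # one word per line, \n-separated
    headers = {
        # the plugin re-fetches when either of these two values changes
        "Last-Modified": formatdate(mtime, usegmt=True),
        "ETag": hashlib.md5(body.encode("utf-8")).hexdigest(),
        "Content-Type": "text/plain; charset=utf-8",
    }
    return headers, body

class DictHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        headers, body = dict_response(WORDS, MTIME)
        self.send_response(200)
        for k, v in headers.items():
            self.send_header(k, v)
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

    def do_HEAD(self):  # the plugin may probe headers without fetching the body
        headers, _ = dict_response(WORDS, MTIME)
        self.send_response(200)
        for k, v in headers.items():
            self.send_header(k, v)
        self.end_headers()
```

To serve it, run `HTTPServer(("", 8000), DictHandler).serve_forever()` and point `remote_ext_dict` at that host and port.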

# Run
./bin/elasticsearch  # must not run as root; create a dedicated es user first
# Auto-generate passwords for the built-in users
./bin/elasticsearch-setup-passwords auto

# Test elasticsearch
curl -u elastic:Qy6EOzspEHXHa18EhRg9 127.0.0.1:9200
# Plugin test: https://github.com/medcl/elasticsearch-analysis-ik
curl -u elastic:Qy6EOzspEHXHa18EhRg9 -XPOST http://localhost:9200/index/_mapping -H 'Content-Type:application/json' -d'
{
        "properties": {
            "content": {
                "type": "text",
                "analyzer": "ik_max_word",
                "search_analyzer": "ik_smart"
            }
        }
}'

curl -u elastic:Qy6EOzspEHXHa18EhRg9 -XPOST http://localhost:9200/index/_create/1 -H 'Content-Type:application/json' -d'
{"content":"美国留给伊拉克的是个烂摊子吗"}'

curl -u elastic:Qy6EOzspEHXHa18EhRg9 -XPOST http://localhost:9200/index/_search  -H 'Content-Type:application/json' -d'
{
    "query" : { "match" : { "content" : "美国" }},
    "highlight" : {
        "pre_tags" : ["<tag1>", "<tag2>"],
        "post_tags" : ["</tag1>", "</tag2>"],
        "fields" : {
            "content" : {}
        }
    }
}'
# Install Kibana, an open-source analytics and visualization platform for Elasticsearch
curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-7.16.3-linux-x86_64.tar.gz
tar xzvf kibana-7.16.3-linux-x86_64.tar.gz
cd kibana-7.16.3-linux-x86_64/
# Configure: vi config/kibana.yml
server.host: "0.0.0.0" 
server.publicBaseUrl: "http://<your-ip>:5601"
# Run
./bin/kibana
# Connect
http://192.168.0.111:5601  # log in as user elastic with the generated password

Stopping

ps -ef | grep elastic
kill <pid>

Installing ES with Docker

Environment: CentOS 8

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/docker.html

# Pull the image
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.16.3
# Run a single node (development or test environments only)
# -p 127.0.0.1:9200:9200 restricts access to localhost
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.16.3
# docker-compose.yml

## Multi-node cluster
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: bridge

Production configuration requirements

Set in /etc/sysctl.conf:

vm.max_map_count=262144

Or apply it temporarily:

sysctl -w vm.max_map_count=262144

Usage

An index can be thought of as an optimized collection of documents and each document is a collection of fields, which are the key-value pairs that contain your data.

Elasticsearch indexes all data in every field and each indexed field has a dedicated, optimized data structure.

For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees. The ability to use the per-field data structures to assemble and return search results is what makes Elasticsearch so fast.

When dynamic mapping is enabled, Elasticsearch automatically detects and adds new fields to the index. This default behavior makes it easy to index and explore your data—just start indexing documents and Elasticsearch will detect and map booleans, floating point and integer values, dates, and strings to the appropriate Elasticsearch data types.
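As a rough illustration, the per-field type detection behind dynamic mapping can be sketched like this (a simplified model; the real rules, such as date detection formats, are configurable and more involved):

```python
from datetime import datetime

def detect_type(value):
    """Roughly mimic Elasticsearch dynamic type detection for a JSON value."""
    if isinstance(value, bool):   # must check bool before int in Python
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "float"
    if isinstance(value, str):
        try:
            # stand-in for ES's default date detection formats
            datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")
            return "date"
        except ValueError:
            return "text"  # mapped as text (plus a .keyword sub-field by default)
    return "object"

doc = {"active": True, "age": 42, "score": 3.5,
       "joined": "2022-01-15T08:30:00", "bio": "hello world"}
mapping = {field: detect_type(v) for field, v in doc.items()}
```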

It’s often useful to index the same field in different ways for different purposes. For example, you might want to index a string field as both a text field for full-text search and as a keyword field for sorting or aggregating your data. Or, you might choose to use more than one language analyzer to process the contents of a string field that contains user input.

The analysis chain that is applied to a full-text field during indexing is also used at search time. When you query a full-text field, the query text undergoes the same analysis before the terms are looked up in the index.
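That index-time/search-time symmetry can be modeled with a toy analyzer and inverted index in plain Python (deliberately minimal, nothing Elasticsearch-specific):

```python
from collections import defaultdict

def analyze(text):
    """Toy analysis chain: lowercase + whitespace tokenize (stand-in for a real analyzer)."""
    return text.lower().split()

# index time: each analyzed term maps to the ids of documents containing it
docs = {1: "Quick Brown Fox", 2: "quick tests", 3: "lazy dog"}
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in analyze(text):
        inverted[term].add(doc_id)

def search(query):
    """Search time: the query text goes through the SAME analyzer before lookup."""
    hits = set()
    for term in analyze(query):
        hits |= inverted.get(term, set())
    return sorted(hits)

search("QUICK")  # matches docs 1 and 2 despite the case difference
```

Because `analyze` runs on both sides, "QUICK" and "Quick" land on the same term in the index; a mismatch between index-time and search-time analyzers is a classic source of missing hits.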

From your applications, you can use the Elasticsearch client for your language of choice: Java, JavaScript, Go, .NET, PHP, Perl, Python or Ruby.

The Elasticsearch REST APIs support structured queries, full text queries, and complex queries that combine the two. Structured queries are similar to the types of queries you can construct in SQL. For example, you could search the gender and age fields in your employee index and sort the matches by the hire_date field. Full-text queries find all documents that match the query string and return them sorted by relevance—how good a match they are for your search terms.

In addition to searching for individual terms, you can perform phrase searches, similarity searches, and prefix searches, and get autocomplete suggestions.

Testing in Kibana Dev Tools

# Index a single document
POST logs-my_app-default/_doc
{
  "@timestamp": "2099-05-06T16:21:15.000Z",
  "event": {
    "original": "192.0.2.42 - - [06/May/2099:16:21:15 +0000] \"GET /images/bg.jpg HTTP/1.0\" 200 24736"
  }
}
# Bulk indexing

PUT logs-my_app-default/_bulk
{ "create": { } }
{ "@timestamp": "2099-05-07T16:24:32.000Z", "event": { "original": "192.0.2.242 - - [07/May/2020:16:24:32 -0500] \"GET /images/hm_nbg.jpg HTTP/1.0\" 304 0" } }
{ "create": { } }
{ "@timestamp": "2099-05-08T16:25:42.000Z", "event": { "original": "192.0.2.255 - - [08/May/2099:16:25:42 +0000] \"GET /favicon.ico HTTP/1.0\" 200 3638" } }
# Search all documents, sorted
GET logs-my_app-default/_search
{
  "query": {
    "match_all": { }
  },
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}
# Range query on a field
GET logs-my_app-default/_search
{
  "query": {
    "range": { # range query
      "@timestamp": {
        "gte": "2099-05-05",
        "lt": "2099-05-08"
      }
    }
  },
  "fields": [
    "@timestamp" # only return this field
  ],
  "_source": false,
  "sort": [ # sort order
    {
      "@timestamp": "desc"
    }
  ]
}
# Runtime fields
GET logs-my_app-default/_search
{
  "runtime_mappings": {
    "source.ip": { # runtime field parsed out of unstructured content
      "type": "ip",
      "script": """
        String sourceip=grok('%{IPORHOST:sourceip} .*').extract(doc[ "event.original" ].value)?.sourceip;
        if (sourceip != null) emit(sourceip);
      """
    }
  },
  "query": {
    "bool": { # compound query
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2099-05-05",
              "lt": "2099-05-08"
            }
          }
        },
        {
          "range": {
            "source.ip": {
              "gte": "192.0.2.0",
              "lte": "192.0.2.240"
            }
          }
        }
      ]
    }
  },
  "fields": [
    "@timestamp",
    "source.ip"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}
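The grok expression in the runtime field above boils down to pulling the leading IP out of each raw log line. The same extraction sketched with a plain regex (a hypothetical helper, not the Painless/grok runtime, and simplified to dotted-quad IPs only):

```python
import re

# rough stand-in for grok's %{IPORHOST:sourceip} prefix match
SOURCE_IP = re.compile(r"^(?P<sourceip>\d{1,3}(?:\.\d{1,3}){3})\s")

def extract_source_ip(log_line):
    """Return the client IP at the start of an Apache-style log line, or None."""
    m = SOURCE_IP.match(log_line)
    return m.group("sourceip") if m else None

line = '192.0.2.42 - - [06/May/2099:16:21:15 +0000] "GET /images/bg.jpg HTTP/1.0" 200 24736'
extract_source_ip(line)  # "192.0.2.42"
```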

# Aggregations
GET logs-my_app-default/_search
{
  "runtime_mappings": {
    "http.response.body.bytes": {
      "type": "long",
      "script": """
        String bytes=grok('%{COMMONAPACHELOG}').extract(doc[ "event.original" ].value)?.bytes;
        if (bytes != null) emit(Integer.parseInt(bytes));
      """
    }
  },
  "aggs": {
    "average_response_size":{
      "avg": {
        "field": "http.response.body.bytes"
      }
    }
  },
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2099-05-05",
              "lt": "2099-05-08"
            }
          }
        }
      ]
    }
  },
  "fields": [
    "@timestamp",
    "http.response.body.bytes"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}
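The avg aggregation above averages a bytes field that is itself parsed from the raw log line at query time. The equivalent computation over raw lines, sketched in Python (COMMONAPACHELOG reduced to a trailing-size regex for illustration):

```python
import re

# the response size is the last field of a Common Log Format line ("-" if absent)
SIZE = re.compile(r"\s(\d+|-)$")

def response_bytes(log_line):
    """Parse the response size from a Common Log Format line, or None."""
    m = SIZE.search(log_line)
    return int(m.group(1)) if m and m.group(1) != "-" else None

lines = [
    '192.0.2.242 - - [07/May/2020:16:24:32 -0500] "GET /images/hm_nbg.jpg HTTP/1.0" 304 0',
    '192.0.2.255 - - [08/May/2099:16:25:42 +0000] "GET /favicon.ico HTTP/1.0" 200 3638',
]
sizes = [b for b in map(response_bytes, lines) if b is not None]
average = sum(sizes) / len(sizes)  # what the avg aggregation would report
```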

# Delete the data stream
DELETE _data_stream/logs-my_app-default

Java client

https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/introduction.html

An introductory tutorial on the Elasticsearch full-text search engine

http://www.ruanyifeng.com/blog/2017/08/elasticsearch.html

