Learning OpenSearch

OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications, licensed under Apache 2.0. Powered by Apache Lucene and driven by the OpenSearch Project community, OpenSearch provides a vendor-neutral toolset you can use to build secure, high-performance, cost-effective applications. Use OpenSearch as an end-to-end solution, or connect it with your preferred open-source tools or partner projects. Official site: OpenSearch

On the relationship between OpenSearch and Elasticsearch (ES): OpenSearch was created by developers who left Elastic after the company announced a change to its commercial license, which raised concerns about the project's open-source licensing. They forked Elasticsearch into OpenSearch, committing to keep it a genuinely open-source project and to continue developing and supporting it. Because OpenSearch started from the Elasticsearch codebase, both are search engines with similar query languages and index management. OpenSearch has since diverged in places: it puts more emphasis on portability and interoperability, so it offers some features and APIs that differ from Elasticsearch.

1. Installing an OpenSearch cluster and the OpenSearch Dashboards admin panel on a VM

Prepare a VM with docker-compose installed; with the docker-compose.yml below, the whole stack can be brought up in one step.

Reference: the official documentation, Docker - OpenSearch documentation

The docker-compose.yml is shown below. Change into the directory containing the file, run docker-compose up -d, and wait for the containers to come up.

version: '3'
services:
  opensearch-node1: # This is also the hostname of the container within the Docker network (i.e. https://opensearch-node1/)
    image: opensearchproject/opensearch:latest # Specifying the latest available image - modify if you want a specific version
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch-node1 # Name the node that will run in this container
      - discovery.seed_hosts=opensearch-node1,opensearch-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data # Creates volume called opensearch-data1 and mounts it to the container
    ports:
      - 9200:9200 # REST API
      - 9600:9600 # Performance Analyzer
    networks:
      - opensearch-net # All of the containers will join the same Docker bridge network
  opensearch-node2:
    image: opensearchproject/opensearch:latest # This should be the same image used for opensearch-node1 to avoid issues
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:latest # Make sure the version of opensearch-dashboards matches the version of opensearch installed on other nodes
    container_name: opensearch-dashboards
    ports:
      - 5601:5601 # Map host port 5601 to container port 5601
    expose:
      - "5601" # Expose port 5601 for web access to OpenSearch Dashboards
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # Define the OpenSearch nodes that OpenSearch Dashboards will query
    networks:
      - opensearch-net

volumes:
  opensearch-data1:
  opensearch-data2:

networks:
  opensearch-net:

After installation, run docker-compose ps to check the containers; they should all show as running. Use docker-compose logs -f <container name> to view a container's logs.

Open <VM IP>:5601 in a browser to reach the Dashboards. The default username is admin and the default password is admin.

Click Add sample data to load the sample datasets that ship with OpenSearch; that isn't covered further here.

2. Connecting a Java program to the OpenSearch cluster

Reference: the official documentation, Java client - OpenSearch documentation

The example from the official documentation is shown below. Note that package and class names in the opensearch-java client have changed across versions, so follow the documentation that matches the client version you actually depend on.

Dependencies:

<dependency>
  <groupId>org.opensearch.client</groupId>
  <artifactId>opensearch-rest-client</artifactId>
  <version>2.6.0</version>
</dependency>

<dependency>
  <groupId>org.opensearch.client</groupId>
  <artifactId>opensearch-java</artifactId>
  <version>2.3.0</version>
</dependency>

Code:

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.opensearch.client.RestClient;
import org.opensearch.client.RestClientBuilder;
import org.opensearch.client.base.RestClientTransport;
import org.opensearch.client.base.Transport;
import org.opensearch.client.json.jackson.JacksonJsonpMapper;
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch._global.IndexRequest;
import org.opensearch.client.opensearch._global.IndexResponse;
import org.opensearch.client.opensearch._global.SearchResponse;
import org.opensearch.client.opensearch.indices.*;
import org.opensearch.client.opensearch.indices.put_settings.IndexSettingsBody;

import java.io.IOException;

public class OpenSearchClientExample {
  public static void main(String[] args) {
    RestClient restClient = null;
    try{
    System.setProperty("javax.net.ssl.trustStore", "/full/path/to/keystore");
    System.setProperty("javax.net.ssl.trustStorePassword", "password-to-keystore");

    //Only for demo purposes. Don't specify your credentials in code.
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    credentialsProvider.setCredentials(AuthScope.ANY,
      new UsernamePasswordCredentials("admin", "admin"));

    //Initialize the client with SSL and TLS enabled
    restClient = RestClient.builder(new HttpHost("localhost", 9200, "https")).
      setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
        return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
        }
      }).build();
    Transport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
    OpenSearchClient client = new OpenSearchClient(transport);

    //Create the index
    String index = "sample-index";
    CreateRequest createIndexRequest = new CreateRequest.Builder().index(index).build();
    client.indices().create(createIndexRequest);

    //Add some settings to the index
    IndexSettings indexSettings = new IndexSettings.Builder().autoExpandReplicas("0-all").build();
    IndexSettingsBody settingsBody = new IndexSettingsBody.Builder().settings(indexSettings).build();
    PutSettingsRequest putSettingsRequest = new PutSettingsRequest.Builder().index(index).value(settingsBody).build();
    client.indices().putSettings(putSettingsRequest);

    //Index some data
    IndexData indexData = new IndexData("first_name", "Bruce");
    IndexRequest<IndexData> indexRequest = new IndexRequest.Builder<IndexData>().index(index).id("1").document(indexData).build();
    client.index(indexRequest);

    //Search for the document
    SearchResponse<IndexData> searchResponse = client.search(s -> s.index(index), IndexData.class);
    for (int i = 0; i< searchResponse.hits().hits().size(); i++) {
      System.out.println(searchResponse.hits().hits().get(i).source());
    }

    //Delete the document
    client.delete(b -> b.index(index).id("1"));

    // Delete the index
    DeleteRequest deleteRequest = new DeleteRequest.Builder().index(index).build();
    DeleteResponse deleteResponse = client.indices().delete(deleteRequest);

    } catch (IOException e){
      System.out.println(e.toString());
    } finally {
      try {
        if (restClient != null) {
          restClient.close();
        }
      } catch (IOException e) {
        System.out.println(e.toString());
      }
    }
  }
}
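
The example references an IndexData class that the snippet above doesn't include. It is just a plain POJO that the client serializes and deserializes with Jackson; a minimal sketch (the field names here are assumptions, not part of the original post) could look like this:

public class IndexData {
  private String firstName;
  private String lastName;

  public IndexData() {
    // no-arg constructor so Jackson can deserialize search hits back into IndexData
  }

  public IndexData(String firstName, String lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }

  public String getFirstName() { return firstName; }
  public void setFirstName(String firstName) { this.firstName = firstName; }
  public String getLastName() { return lastName; }
  public void setLastName(String lastName) { this.lastName = lastName; }

  @Override
  public String toString() {
    return "IndexData{firstName='" + firstName + "', lastName='" + lastName + "'}";
  }
}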

Change localhost to the IP of your OpenSearch VM. The code creates an index, adds one document, then deletes the document and the index. You can comment out the delete calls at the end and use the Dashboards Dev Tools console (introduced below) to run a query and confirm the data was actually written to OpenSearch.

The certificate-related configuration will throw an error here; comment it out:

//        System.setProperty("javax.net.ssl.trustStore", "");
//        System.setProperty("javax.net.ssl.trustStorePassword", "");

Browse directly to <VM IP>:9200. The browser warns that the connection is not secure; click through anyway and enter the admin username and password from above.

Click the "certificate is not valid" indicator in the address bar.

Export the certificate to any convenient path and name it xxx.cer.

Open a Windows command prompt as administrator and locate your Java installation path (if you don't know it, check Advanced System Settings in the system settings).

cd into the JRE's security directory; mine is C:\Program Files\Java\jdk1.8.0_241\jre\lib\security. From there, run:

keytool -import -alias abc -keystore cacerts -file D://abc.cer

Here D://abc.cer is the path of the xxx.cer you just exported.

When keytool asks whether to trust this certificate, enter Y and the import completes.

Run the code again and it fails with:

Caused by: javax.net.ssl.SSLPeerUnverifiedException: Host name '192.168.177.129' does not match the certificate subject provided by the peer (CN=node-0.example.com, OU=node, O=node, L=test, DC=de)
    at org.apache.http.nio.conn.ssl.SSLIOSessionStrategy.verifySession(SSLIOSessionStrategy.java:217)

Fix this by adding an entry to the Windows hosts file:

<VM IP> node-0.example.com (a tool such as the Huorong Security toolbox can edit the hosts file for you)

Then change the highlighted host in the code (the "localhost" argument earlier) to node-0.example.com and run again; the error is gone.
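
As a development-only alternative to importing the certificate into the JDK's cacerts and editing the hosts file, the exported .cer file can be loaded into an in-memory trust store and hostname verification relaxed in the client configuration. This is a rough sketch under those assumptions (the class name, alias, host, and certificate path are placeholders), not something to use in production:

import org.apache.http.HttpHost;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.ssl.SSLContextBuilder;
import org.opensearch.client.RestClient;

import javax.net.ssl.SSLContext;
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class DevSslRestClient {

  // Dev-only helper: trusts just the exported OpenSearch demo certificate and skips hostname checks.
  public static RestClient build(String host, String certPath) throws Exception {
    // Load the exported .cer file into an in-memory trust store
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    X509Certificate cert;
    try (FileInputStream in = new FileInputStream(certPath)) {
      cert = (X509Certificate) cf.generateCertificate(in);
    }
    KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
    trustStore.load(null, null); // start from an empty trust store
    trustStore.setCertificateEntry("opensearch-demo", cert);

    SSLContext sslContext = SSLContextBuilder.create()
        .loadTrustMaterial(trustStore, null) // trust only the certificate loaded above
        .build();

    // NoopHostnameVerifier skips the CN check that caused the SSLPeerUnverifiedException.
    // Acceptable for a local lab, never for production.
    return RestClient.builder(new HttpHost(host, 9200, "https"))
        .setHttpClientConfigCallback(b -> b
            .setSSLContext(sslContext)
            .setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE))
        .build();
  }
}

The basic-auth credentials provider is still set in the same callback, just as in the official example above.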

Dev Tools, at the bottom of the Dashboards sidebar, can be used to inspect the data that was added.

The query results appear in the right-hand pane.

3. CRUD operations (Dev Tools and Java code)

You can think of an index as a database and the mapping as its fields (roughly, the table schema).

Create a test index. Existing fields in a mapping cannot be changed afterwards, but new fields can be added.

PUT test_index
{
		"mappings": {
			"properties": {
				"group_create_time": {
					"type": "date",
					"format": "yyyy-MM-dd HH:mm:ss"
				},
				"log_group_name": {
					"type": "keyword"
				},
				"log_stream": {
				  "type":"nested",
					"properties": {
						"create_time": {
							"type": "date",
							"format": "yyyy-MM-dd HH:mm:ss"
						},
						"deploy_type": {
							"type": "keyword"
						},
						"log_path": {
							"type": "text"
						},
						"log_stream_name": {
							"type": "keyword"
						},
						"server_ip": {
							"type": "keyword"
						},
						"status":{
						  "type":"keyword"
						}
					}
				}
			}
		}
}

Java implementation:

    public String testIndex(String indexName, Map<String, Object> mapping) {
        CreateIndexRequest request = new CreateIndexRequest(indexName);
        request.settings(Settings.builder()
                .put("index.number_of_shards", 4)
                .put("index.number_of_replicas", 3));
        request.mapping(mapping);
        try {
            CreateIndexResponse createIndexResponse = client.indices().create(request, RequestOptions.DEFAULT); // client is a RestHighLevelClient, pre-configured with the OpenSearch connection settings and injected beforehand
            return Boolean.toString(createIndexResponse.isAcknowledged());
        } catch (IOException e) {
            e.printStackTrace();
        }
        return "error happened!";
    }
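
The method above relies on an injected RestHighLevelClient (from the separate opensearch-rest-high-level-client artifact), which the post doesn't show. The sketch below illustrates one way such a client could be wired up and how a mapping equivalent to the PUT test_index request could be built; the class name, host, and credentials are assumptions for illustration:

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.opensearch.client.RestClient;
import org.opensearch.client.RestHighLevelClient;

import java.util.HashMap;
import java.util.Map;

public class HighLevelClientConfig {

  // Builds the RestHighLevelClient used by the service methods in this section.
  // The same TLS setup as in section 2 (trusted certificate, matching hostname) still applies.
  public static RestHighLevelClient buildClient() {
    BasicCredentialsProvider credentials = new BasicCredentialsProvider();
    credentials.setCredentials(AuthScope.ANY,
        new UsernamePasswordCredentials("admin", "admin")); // demo credentials only
    return new RestHighLevelClient(
        RestClient.builder(new HttpHost("node-0.example.com", 9200, "https"))
            .setHttpClientConfigCallback(b -> b.setDefaultCredentialsProvider(credentials)));
  }

  // A simple field definition such as {"type": "keyword"}.
  private static Map<String, Object> field(String type) {
    Map<String, Object> f = new HashMap<>();
    f.put("type", type);
    return f;
  }

  // A date field using the same format as the PUT test_index mapping.
  private static Map<String, Object> dateField() {
    Map<String, Object> f = field("date");
    f.put("format", "yyyy-MM-dd HH:mm:ss");
    return f;
  }

  // Builds a Map equivalent to the "mappings" body of PUT test_index above.
  public static HashMap<String, Object> testIndexMapping() {
    Map<String, Object> streamProps = new HashMap<>();
    streamProps.put("create_time", dateField());
    streamProps.put("deploy_type", field("keyword"));
    streamProps.put("log_path", field("text"));
    streamProps.put("log_stream_name", field("keyword"));
    streamProps.put("server_ip", field("keyword"));
    streamProps.put("status", field("keyword"));

    Map<String, Object> logStream = new HashMap<>();
    logStream.put("type", "nested");
    logStream.put("properties", streamProps);

    Map<String, Object> properties = new HashMap<>();
    properties.put("group_create_time", dateField());
    properties.put("log_group_name", field("keyword"));
    properties.put("log_stream", logStream);

    HashMap<String, Object> mapping = new HashMap<>();
    mapping.put("properties", properties);
    return mapping;
  }
}

With this in place, calling testIndex("test_index", HighLevelClientConfig.testIndexMapping()) should create the same index as the Dev Tools request.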

Insert test data:

POST test_index/_doc
{
  "log_group_name" : "heihei",
  "group_create_time" : "2022-08-15 16:25:30",
  "log_stream" : [
    {
      "server_ip" : "192.168.177.128",
      "log_stream_name" : "111",
      "create_time" : "2022-08-15 16:25:30",
      "log_path" : "/path/to/log",
      "deploy_type" : "vm",
      "status" : "stoped"
    },
    {
      "server_ip" : "192.168.177.128",
      "log_stream_name" : "22",
      "create_time" : "2022-08-15 16:25:30",
      "log_path" : "/path/to/log",
      "deploy_type" : "vm",
      "status" : "stoped"
    }
  ]
}
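
Roughly the same insert can be done from Java with the RestHighLevelClient. This is a sketch, assuming the same client as above; the document body is built as a Map so no raw JSON handling is needed:

import org.opensearch.action.index.IndexRequest;
import org.opensearch.action.index.IndexResponse;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InsertExample {

  // Indexes one log group document with two nested log_stream entries,
  // mirroring the POST test_index/_doc request above.
  public static String insertSample(RestHighLevelClient client) throws IOException {
    Map<String, Object> stream1 = new HashMap<>();
    stream1.put("server_ip", "192.168.177.128");
    stream1.put("log_stream_name", "111");
    stream1.put("create_time", "2022-08-15 16:25:30");
    stream1.put("log_path", "/path/to/log");
    stream1.put("deploy_type", "vm");
    stream1.put("status", "stoped");

    Map<String, Object> stream2 = new HashMap<>(stream1);
    stream2.put("log_stream_name", "22");

    List<Map<String, Object>> logStream = new ArrayList<>();
    logStream.add(stream1);
    logStream.add(stream2);

    Map<String, Object> doc = new HashMap<>();
    doc.put("log_group_name", "heihei");
    doc.put("group_create_time", "2022-08-15 16:25:30");
    doc.put("log_stream", logStream);

    IndexRequest request = new IndexRequest("test_index").source(doc);
    IndexResponse response = client.index(request, RequestOptions.DEFAULT);
    return response.getId(); // the auto-generated _id, useful for the update examples below
  }
}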

Check the inserted data:

# View data; size sets the number of hits to return (default is 10)
GET test_index/_search
{
  "size": 20
}

# View the mapping
GET test_index/_mapping

The search returns a result like this:

{
  "took" : 864,     // 耗时单位为毫秒
  "timed_out" : false,     // 是否超时
  "_shards" : {     // 分片信息的统计。total表示总共参与搜索的分片数
    "total" : 1,    // 共参与搜索的分片数
    "successful" : 1,  // 成功搜索的分片数
    "skipped" : 0, // 跳过搜索的分片数(比如搜索操作只涉及了一个分片,那么就没有被跳过的分片
    "failed" : 0   // 失败的分片数
  },
  "hits" : { // 关于搜索命中结果的信息
    "total" : { // 命中结果信息
      "value" : 3,  // 命中结果的总数
      "relation" : "eq" // 表示比较符号(这里是“eq”表示等于)。
    },
    "max_score" : 1.0,     // 最高得分,得分是评估文档与查询匹配程度的指标
    "hits" : [   // 具体命中的文档信息
      {
        "_index" : "test_index", // 文档所在的索引
        "_type" : "_doc",         // 文档所属的类型
        "_id" : "SbhIj4cBfnlzLPVVITje", // 文档的唯一标识
        "_score" : 1.0,          // 文档的得分
        "_source" : {             // 文档的原始内容,即被索引的数据。
          "log_group_name" : "88888",
          "group_create_time" : "2022-08-15 16:25:30",
          "log_stream" : [
            {
              "server_ip" : "192.168.177.128",
              "log_stream_name" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
              "create_time" : "2022-08-15 16:25:30",
              "log_path" : "/path/to/log",
              "deploy_type" : "vm"
            },
            {
              "server_ip" : "192.168.177.128",
              "log_stream_name" : "111",
              "create_time" : "2022-08-15 16:25:30",
              "log_path" : "/path/to/log",
              "deploy_type" : "vm"
            },
            {
              "server_ip" : "192.168.177.128",
              "log_stream_name" : "222",
              "create_time" : "2022-08-15 16:25:30",
              "log_path" : "/path/to/log",
              "deploy_type" : "vm"
            }
          ]
        }
      },
      {
        "_index" : "test_index",
        "_type" : "_doc",
        "_id" : "TbhIj4cBfnlzLPVVwTgl",
        "_score" : 1.0,
        "_source" : {
          "log_group_name" : "hahaha",
          "group_create_time" : "2022-08-15 16:25:30",
          "log_stream" : [
            {
              "server_ip" : "192.168.177.128",
              "log_stream_name" : "111",
              "create_time" : "2022-08-15 16:25:30",
              "log_path" : "/path/to/log",
              "deploy_type" : "vm",
              "status" : "stop"
            },
            {
              "server_ip" : "192.168.177.128",
              "log_stream_name" : "22",
              "create_time" : "2022-08-15 16:25:30",
              "log_path" : "/path/to/log",
              "deploy_type" : "vm",
              "status" : "stoped"
            }
          ]
        }
      },
      {
        "_index" : "test_index",
        "_type" : "_doc",
        "_id" : "w7hhj4cBfnlzLPVVqjin",
        "_score" : 1.0,
        "_source" : {
          "log_group_name" : "heihei",
          "group_create_time" : "2022-08-15 16:25:30",
          "log_stream" : [
            {
              "server_ip" : "192.168.177.128",
              "log_stream_name" : "111",
              "create_time" : "2022-08-15 16:25:30",
              "log_path" : "/path/to/log",
              "deploy_type" : "vm",
              "status" : "stoped"
            },
            {
              "server_ip" : "192.168.177.128",
              "log_stream_name" : "22",
              "create_time" : "2022-08-15 16:25:30",
              "log_path" : "/path/to/log",
              "deploy_type" : "vm",
              "status" : "stoped"
            }
          ]
        }
      }
    ]
  }
}
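
A sketch of the equivalent search from Java, again assuming the same RestHighLevelClient: a match_all query with size 20, printing each hit's _id and _source:

import org.opensearch.action.search.SearchRequest;
import org.opensearch.action.search.SearchResponse;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;
import org.opensearch.index.query.QueryBuilders;
import org.opensearch.search.SearchHit;
import org.opensearch.search.builder.SearchSourceBuilder;

import java.io.IOException;

public class SearchExample {

  // Runs the same query as GET test_index/_search with "size": 20.
  public static void printAll(RestHighLevelClient client) throws IOException {
    SearchSourceBuilder source = new SearchSourceBuilder()
        .query(QueryBuilders.matchAllQuery())
        .size(20);
    SearchRequest request = new SearchRequest("test_index").source(source);

    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    System.out.println("total hits: " + response.getHits().getTotalHits().value);
    for (SearchHit hit : response.getHits().getHits()) {
      System.out.println(hit.getId() + " -> " + hit.getSourceAsString());
    }
  }
}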

Add an entry to log_stream. The script "ctx._source.log_stream.add(params)" appends the params object as a new element of the log_stream array under _source.

POST test_index/_update/ITZfjocBwd7nfRg9WjsH
{
  "script": {
    "source": "ctx._source.log_stream.add(params)",
    "params": {
      "create_time": "2022-08-15 16:25:30",
      "deploy_type": "vm",
      "log_path": "/path/to/log",
      "log_stream_name": "8888",
      "server_ip": "192.168.177.128"
    }
  }
}

Java implementation:

public String addLogStream(LogStreamDTO logStream) {
        try {
            HashMap<String, String> sourceMap = new HashMap<>(16);
            sourceMap.put("create_time", logStream.getCreateTime());
            sourceMap.put("deploy_type", logStream.getType());
            sourceMap.put("log_path", logStream.getPath());
            sourceMap.put("log_stream_name", logStream.getName());
            sourceMap.put("server_ip", logStream.getIp());
            sourceMap.put("status", logStream.getStatus());
            UpdateRequest updateRequest = new UpdateRequest("test_index", "_doc", logStream.getLogGroupId());
            Map<String, Object> parameters = new HashMap<>();
            parameters.put("item", sourceMap);
            Script script = new Script(ScriptType.INLINE, "painless", "ctx._source.log_stream.add(params.item)", parameters);
            updateRequest.script(script);
            UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);
            log.debug("updateResponse: " + updateResponse);
            int failed = updateResponse.getShardInfo().getFailed();
            if (failed==0){
                return "success";
            }
            return "error";
        } catch (IOException e) {
            e.printStackTrace();
            return "error";
        }
    }

Remove the entries whose log_stream_name is "test" from the log_stream array (replace <doc_id> with the document's _id). Deleted entries cannot be recovered.

POST test_index/_update/<doc_id>
{
  "script": {
    "source": "ctx._source.log_stream.removeIf(item -> item.log_stream_name == params.log_stream_name)",
    "params": {
      "log_stream_name": "test"
    }
  }
}

Java implementation:

  public String deleteLogStream(String id, String logStreamName) {
        UpdateRequest updateRequest = new UpdateRequest("test_index", id)
                .script(new Script(
                        ScriptType.INLINE, "painless",
                        "ctx._source.log_stream.removeIf(item -> item.log_stream_name == params.log_stream_name)",
                        Collections.singletonMap("log_stream_name", logStreamName)
                ));
        try {
            UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);
            log.debug("updateResponse: " + updateResponse);
            if (updateResponse.getShardInfo().getFailed() == 0) {
                return "success";
            }
            return "error";
        } catch (Exception e) {
            e.printStackTrace();
            return "error";
        }
    }

Delete data:

# Delete the document with the given id (here ejR3focBYtMnlZCoQd1V is its _id)
DELETE log_group_manager/_doc/ejR3focBYtMnlZCoQd1V
# Drop the index entirely
DELETE test_index 
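
And a sketch of the same deletes from Java with the RestHighLevelClient; the index name and document id follow the Dev Tools commands above:

import org.opensearch.action.admin.indices.delete.DeleteIndexRequest;
import org.opensearch.action.delete.DeleteRequest;
import org.opensearch.action.delete.DeleteResponse;
import org.opensearch.action.support.master.AcknowledgedResponse;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;

import java.io.IOException;

public class DeleteExample {

  // Deletes a single document by _id, like DELETE <index>/_doc/<id> in Dev Tools.
  public static String deleteDoc(RestHighLevelClient client, String index, String id) throws IOException {
    DeleteResponse response = client.delete(new DeleteRequest(index, id), RequestOptions.DEFAULT);
    return response.getResult().name(); // e.g. DELETED or NOT_FOUND
  }

  // Drops the whole index, like DELETE test_index in Dev Tools. This removes all of its data.
  public static boolean deleteIndex(RestHighLevelClient client, String index) throws IOException {
    AcknowledgedResponse response =
        client.indices().delete(new DeleteIndexRequest(index), RequestOptions.DEFAULT);
    return response.isAcknowledged();
  }
}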

To be continued...

Reference: Configuring TLS certificates - OpenSearch documentation
