09. Topics That Need More Practice

1. Cluster configuration

  1. There are three nodes; node1 is a dedicated master
  2. node2 and node3 are data + ingest nodes
  3. IPs: node1 192.168.1.123, node2 192.168.1.124, node3 192.168.1.125
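A sketch of the per-node settings, assuming the legacy 7.x boolean role flags (newer versions use `node.roles` instead); adjust names and addresses to your environment:

```
# node1 (elasticsearch.yml) - dedicated master
cluster.name: my-cluster
node.name: node1
network.host: 192.168.1.123
node.master: true
node.data: false
node.ingest: false
discovery.seed_hosts: ["192.168.1.123", "192.168.1.124", "192.168.1.125"]
cluster.initial_master_nodes: ["node1"]

# node2 (node3 analogous) - data + ingest
node.name: node2
network.host: 192.168.1.124
node.master: false
node.data: true
node.ingest: true
```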

2. Securing a single-node cluster

Enable security on the single node. The initial password for the elastic user should be elastic-password, and likewise for the other built-in users.
This task can be combined with the user-creation task below.
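A rough sequence (a sketch; on a single node, transport TLS is not strictly required to turn security on):

```
# elasticsearch.yml
xpack.security.enabled: true

# after restarting the node, set the built-in users' passwords;
# "interactive" prompts for each one (elastic, kibana, ...)
bin/elasticsearch-setup-passwords interactive
```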

3. Create a role and a user via the API, then log in to Kibana to test

Create a role with all privileges on every index matching role*, the monitor cluster privilege, and the ability to log in to and use Kibana. Create a user with this role.
Log back in to Kibana with that user to test.
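One possible shape; the role name, user name, and password below are placeholders, and Kibana access is granted via the built-in `kibana_admin` role (`kibana_user` on older 7.x releases):

```
POST _security/role/role_monitor
{
  "cluster": ["monitor"],
  "indices": [
    { "names": ["role*"], "privileges": ["all"] }
  ]
}

POST _security/user/role_tester
{
  "password": "role-tester-password",
  "roles": ["role_monitor", "kibana_admin"]
}
```

Log out of Kibana and back in as role_tester to verify.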

4. Script exercises

The document data is:

PUT hamlet02/_bulk
{"index":{"_id":0}}
{"line_number":"1","speaker":"BERNARDO","text_entry":"Whos there?"}
{"index":{"_id":1}}
{"line_number":"2","speaker":"FRANCISCO","text_entry":"Nay, answer me:stand ","reindexBatch":1}

Write a stored script named control_reindex_batch and use it to update hamlet02: if the original document already has a reindexBatch field, add the increment parameter passed in with the update to its value; if it does not, initialize reindexBatch to the init parameter passed in. At the same time, rename the speaker field to leader.
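One way to sketch it; the param values in the `_update_by_query` call are placeholders:

```
PUT _scripts/control_reindex_batch
{
  "script": {
    "lang": "painless",
    "source": """
      if (ctx._source.containsKey('reindexBatch')) {
        ctx._source.reindexBatch += params.increment;
      } else {
        ctx._source.reindexBatch = params.init;
      }
      // rename speaker to leader
      ctx._source.leader = ctx._source.remove('speaker');
    """
  }
}

POST hamlet02/_update_by_query
{
  "script": {
    "id": "control_reindex_batch",
    "params": { "increment": 1, "init": 1 }
  }
}
```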

5. How to use search templates

For the data in kibana_sample_data_logs,
write a search template named with_response_and_tag with three parameters:
with_min_response, with_max_response, and with_tag.
The query must satisfy with_min_response <= response <= with_max_response,
and tags must contain the with_tag value.

Now a more complex variant with conditional logic:
(i) if the with_max_response parameter is not set, do not set an upper bound on the response value

(ii) if the with_tag parameter is not set, do not apply that filter at all

Answer:

PUT _scripts/with_response_and_tag05
{
  "script": {
    "lang": "mustache",
    "source":"""{
      "query": {
        "bool": {
          "must": [
            {
              "range": {
                "response": {
                  "gte": {{with_min_response}}, 
                  "lte": {{with_max_response}}
                }
              }
            },
            {
              "match": {
                "tags": "{{with_tag}}"
              }
            }
          ]
        }
      }
    }
    """
  }
}

POST _scripts/with_response_and_tag03
{
  "script": {
    "lang": "mustache",
    "source": """
    {
      "query": {
        "bool": {
          "must": [
            {
              "range": {
                "response": {
                  "gte": {{with_min_response}}{{#with_max_response}},
                  "lte": {{with_max_response}}{{/with_max_response}}
                }
              }
            }{{#with_tag}},
            {
              "match": {
                "tags": "{{with_tag}}"
              }
            }{{/with_tag}}
          ]
        }
      }
    }
"""
  }
}

There is also the JSON-escaped variant, where the template source is written as a single escaped JSON string instead of a triple-quoted block.
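Invoking the stored template looks like this (the param values are placeholders; "success" is one of the tags that appears in the sample data set):

```
GET kibana_sample_data_logs/_search/template
{
  "id": "with_response_and_tag03",
  "params": {
    "with_min_response": 200,
    "with_max_response": 400,
    "with_tag": "success"
  }
}
```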

6. Pipeline usage

The document data looks like this:

POST hamlet-2/_doc/4
{
 "text_entry": "With turbulent and dangerous lunacy?",
 "line_number": "3.1.4",
 "number_act": "3",
 "speaker": "KING CLAUDIUS",
 "date":"09/12/2020"
}

Write a pipeline named split_act_scene_line that splits line_number on the dot (".") into three fields: number_act, number_scene, and number_line.

An answer is attached below, but try writing it yourself first.

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "description": "string split by dot",
    "processors": [
      {
        "split": {
          "field": "line_number",
          "separator": "\\.",
          "target_field": "temp_array"
        }
      },
      {
        "script": {
          "lang": "painless",
          "source": """
            ctx.number_act = ctx.temp_array[0];
            ctx.number_scene = ctx.temp_array[1];
            ctx.number_line = ctx.temp_array[2];
          """
        }
      },
      {
        "remove": {
          "field": "temp_array"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "line_number": "1.1.3",
        "text_entry": "Long live the king!",
        "reindexBatch": 2,
        "speaker": "BERNARDO"
      }
    }
  ]
}
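Once the simulation looks right, store the pipeline under the required name and run it over the index, for example:

```
PUT _ingest/pipeline/split_act_scene_line
{
  "description": "split line_number on dots",
  "processors": [
    { "split": { "field": "line_number", "separator": "\\.", "target_field": "temp_array" } },
    { "script": { "lang": "painless", "source": "ctx.number_act = ctx.temp_array[0]; ctx.number_scene = ctx.temp_array[1]; ctx.number_line = ctx.temp_array[2];" } },
    { "remove": { "field": "temp_array" } }
  ]
}

POST hamlet-2/_update_by_query?pipeline=split_act_scene_line
```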

7. Use cases for index-level shard allocation filtering

Deploy a three-node ES cluster with a node attribute called warm_hot:
node01 is hot; node02 and node03 are warm nodes.
Create two indices, task701 and task702, each with 2 shards. All shards of one index must be stored on the hot node, and all shards of the other on the warm nodes.
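A sketch: tag the nodes in elasticsearch.yml, then pin each index with allocation filtering (replicas are set to 0 here, since the single hot node cannot hold a replica of its own primary):

```
# elasticsearch.yml
node.attr.warm_hot: hot    # on node01
node.attr.warm_hot: warm   # on node02 and node03

PUT task701
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 0,
    "index.routing.allocation.require.warm_hot": "hot"
  }
}

PUT task702
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 0,
    "index.routing.allocation.require.warm_hot": "warm"
  }
}
```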

Second task:
The three nodes have a node attribute called area:
node01 and node02 are rack01; node03 is rack02.

Create an index task703 with 2 shards and 1 replica,
such that every shard of task703 is backed up across rack01 and rack02.
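This second task is shard allocation awareness rather than per-index filtering; a sketch:

```
# elasticsearch.yml (value per node: rack01 on node01/node02, rack02 on node03)
node.attr.area: rack01
cluster.routing.allocation.awareness.attributes: area

PUT task703
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}
```

With awareness enabled on the area attribute, each shard's primary and replica are allocated to different area values, so every shard has a copy in each rack.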

8. A few compound queries in search, plus aliases

filter fuzzy compound term
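For instance, against the hamlet index used in task 9 below (the alias name and the misspelled fuzzy term are placeholders; speaker.keyword assumes default dynamic mapping):

```
POST _aliases
{
  "actions": [
    { "add": { "index": "hamlet", "alias": "hamlet_alias" } }
  ]
}

GET hamlet_alias/_search
{
  "query": {
    "bool": {
      "must": [
        { "fuzzy": { "text_entry": { "value": "stend" } } }
      ],
      "filter": [
        { "term": { "speaker.keyword": "FRANCISCO" } }
      ]
    }
  }
}
```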

9. cross-cluster search

Write the following data into cluster1:

PUT hamlet/_bulk
{"index":{"_id":0}}
{"line_number":"1","speaker":"BERNARDO","text_entry":"Whos there?"}
{"index":{"_id":1}}
{"line_number":"2","speaker":"FRANCISCO","text_entry":"Nay answer me: stand, and unfold yourself."}
{"index":{"_id":2}}
{"line_number":"3","speaker":"BERNARDO","text_entry":"Long live theking!"}
{"index":{"_id":3}}
{"line_number":"4","speaker":"FRANCISCO","text_entry":"Bernardo?"}
{"index":{"_id":4}}
{"line_number":"5","speaker":"BERNARDO","text_entry":"He."}

Write the following data into cluster2:

PUT hamlet02/_bulk
{"index":{"_id":0}}
{"line_number":"1","speaker":"BERNARDO","text_entry":"Whos there?"}
{"index":{"_id":1}}
{"line_number":"2","speaker":"FRANCISCO","text_entry":"Nay answer me: stand, and unfold yourself."}
{"index":{"_id":2}}
{"line_number":"3","speaker":"BERNARDO","text_entry":"Long live theking!"}
{"index":{"_id":3}}
{"line_number":"4","speaker":"FRANCISCO","text_entry":"Bernardo?"}
{"index":{"_id":4}}
{"line_number":"5","speaker":"BERNARDO","text_entry":"He."}

From cluster1, search both clusters at the same time for docs whose speaker is FRANCISCO.
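A sketch run from cluster1 (the seed address for cluster2 is a placeholder):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.remote.cluster2.seeds": ["192.168.1.124:9300"]
  }
}

GET hamlet,cluster2:hamlet02/_search
{
  "query": {
    "match": { "speaker": "FRANCISCO" }
  }
}
```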

10. snapshot restore

On a single node, write the following data:

PUT hamlet02/_bulk
{"index":{"_id":0}}
{"line_number":"1","speaker":"BERNARDO","text_entry":"Whos there?"}
{"index":{"_id":1}}
{"line_number":"2","speaker":"FRANCISCO","text_entry":"Nay answer me: stand, and unfold yourself."}
{"index":{"_id":2}}
{"line_number":"3","speaker":"BERNARDO","text_entry":"Long live theking!"}
{"index":{"_id":3}}
{"line_number":"4","speaker":"FRANCISCO","text_entry":"Bernardo?"}
{"index":{"_id":4}}
{"line_number":"5","speaker":"BERNARDO","text_entry":"He."}

The shared file system repository may be placed under either of two directories: home/repo or home/my_repo.
Create a repository named back_repo under home/my_repo, then back up hamlet02 with the snapshot name hamlet_backup.

Delete hamlet02 and verify that it can be restored correctly (careful here: you can do a reindex first as a safety net...).
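A sketch of the whole round trip (path.repo must be in elasticsearch.yml before the node starts):

```
# elasticsearch.yml
path.repo: ["home/repo", "home/my_repo"]

PUT _snapshot/back_repo
{
  "type": "fs",
  "settings": { "location": "home/my_repo" }
}

PUT _snapshot/back_repo/hamlet_backup?wait_for_completion=true
{
  "indices": "hamlet02"
}

DELETE hamlet02

POST _snapshot/back_repo/hamlet_backup/_restore
{
  "indices": "hamlet02"
}
```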

11. dynamic mapping and templates

Create a template with the following requirements:

  1. string fields whose names start with number_ are mapped as integer
  2. all other string fields are mapped as keyword
  3. the text_entry field is analyzed with the standard, english, and dutch analyzers
  4. the date format is "MM/dd/yyyy"
  5. all indices whose names start with hamlet use this template
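A sketch using a composable index template (7.8+; older versions use the legacy `_template` API instead). The dynamic templates are ordered so the number_* rule matches before the catch-all keyword rule:

```
PUT _index_template/hamlet_template
{
  "index_patterns": ["hamlet*"],
  "template": {
    "mappings": {
      "dynamic_templates": [
        {
          "numbers": {
            "match_mapping_type": "string",
            "match": "number_*",
            "mapping": { "type": "integer" }
          }
        },
        {
          "strings_as_keyword": {
            "match_mapping_type": "string",
            "mapping": { "type": "keyword" }
          }
        }
      ],
      "properties": {
        "text_entry": {
          "type": "text",
          "analyzer": "standard",
          "fields": {
            "english": { "type": "text", "analyzer": "english" },
            "dutch": { "type": "text", "analyzer": "dutch" }
          }
        },
        "date": {
          "type": "date",
          "format": "MM/dd/yyyy"
        }
      }
    }
  }
}
```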

Test write:

POST hamlet-2/_doc/4
{
 "text_entry": "With turbulent and dangerous lunacy?",
 "line_number": "3.1.4",
 "number_act": "3",
 "speaker": "KING CLAUDIUS",
 "date":"09/12/2020"
}

12. join mapping
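A minimal join-mapping sketch to practice against (the index, field, and relation names are all hypothetical):

```
PUT task12
{
  "mappings": {
    "properties": {
      "my_join_field": {
        "type": "join",
        "relations": { "question": "answer" }
      }
    }
  }
}

PUT task12/_doc/1
{
  "text": "Whos there?",
  "my_join_field": "question"
}

# children must be routed to the parent's shard
PUT task12/_doc/2?routing=1
{
  "text": "Nay, answer me",
  "my_join_field": { "name": "answer", "parent": "1" }
}
```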
