Distributed search is broken down into two phases: query and fetch.
query phase
During the initial query phase, the query is broadcast to a shard copy (a primary or replica shard) of every shard in the index. Each shard executes the search locally and builds a priority queue of matching documents.
Figure 14. Query phase of distributed search
The query phase consists of the following three steps:
- The client sends a `search` request to `Node 3`, which creates an empty priority queue of size `from + size`.
- `Node 3` forwards the search request to a primary or replica copy of every shard in the index. Each shard executes the query locally and adds the results into a local sorted priority queue of size `from + size`.
- Each shard returns the doc IDs and sort values of all the docs in its priority queue to the coordinating node, `Node 3`, which merges these values into its own priority queue to produce a globally sorted list of results.
When a search request is sent to a node, that node becomes the coordinating node. It is the job of this node to broadcast the search request to all involved shards, and to gather their responses into a globally sorted result set that it can return to the client.
The first step is to broadcast the request to a shard copy of every shard in the index. Just like `document GET` requests, search requests can be handled by a primary shard or by any of its replicas. This is how more replicas (when combined with more hardware) can increase search throughput. A coordinating node will round-robin through all shard copies on subsequent requests in order to spread the load.
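The steps above can be tried with an ordinary paginated request; the index name `my_index` and the query text are made-up placeholders:

```json
GET /my_index/_search
{
  "from": 90,
  "size": 10,
  "query": { "match": { "title": "distributed" } }
}
```

Each shard builds a priority queue of `from + size` = 100 entries, and the coordinating node merges the shard queues before keeping only the final 10 hits.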
fetch phase
The fetch phase consists of the following steps:
- The coordinating node identifies which documents need to be fetched and issues a multi `GET` request to the relevant shards.
- Each shard loads the documents and enriches them, if required, and then returns the documents to the coordinating node.
- Once all documents have been fetched, the coordinating node returns the results to the client.
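Internally, this fetch step behaves much like a client-side multi `GET`. For illustration only (index, type, and IDs here are placeholders), the equivalent user-facing request looks like:

```json
GET /_mget
{
  "docs": [
    { "_index": "my_index", "_type": "my_type", "_id": "1" },
    { "_index": "my_index", "_type": "my_type", "_id": "2" }
  ]
}
```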
Deep Pagination
The query-then-fetch process supports pagination with the `from` and `size` parameters, but within limits. Remember that each shard must build a priority queue of length `from + size`, all of which need to be passed back to the coordinating node. And the coordinating node needs to sort through `number_of_shards * (from + size)` documents in order to find the correct `size` documents.
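As a concrete (hypothetical) illustration: on an index with 5 shards, requesting page 1,001 at 10 results per page means `from = 10000` and `size = 10`:

```json
GET /my_index/_search?from=10000&size=10
```

Each shard must build a priority queue of 10,010 entries, and the coordinating node has to sort through 5 * 10,010 = 50,050 documents just to return 10 of them.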
Depending on the size of your documents, the number of shards, and the hardware you are using, paging 10,000 to 50,000 results (1,000 to 5,000 pages) deep should be perfectly doable. But with big-enough `from` values, the sorting process can become very heavy indeed, using vast amounts of CPU, memory, and bandwidth. For this reason, we strongly advise against deep paging.
In practice, “deep pagers” are seldom human anyway. A human will stop paging after two or three pages and will change the search criteria. The culprits are usually bots or web spiders that tirelessly keep fetching page after page until your servers crumble at the knees.
If you do need to fetch large numbers of docs from your cluster, you can do so efficiently by disabling sorting with the `scan` search type, which we discuss later in this chapter.
search options
The `preference` parameter allows you to control which shards or nodes are used to handle the search request. It accepts values such as `_primary`, `_primary_first`, `_local`, `_only_node:xyz`, `_prefer_node:xyz`, and `_shards:2,3`.
Bouncing Results
Imagine that you are sorting your results by a `timestamp` field, and two documents have the same timestamp. Because search requests are round-robined between all available shard copies, these two documents may be returned in one order when the request is served by the primary, and in another order when served by the replica.

This is known as the bouncing results problem: every time the user refreshes the page, the results appear in a different order. The problem can be avoided by always using the same shards for the same user, which can be done by setting the `preference` parameter to an arbitrary string like the user’s session ID.
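For example, pinning a user’s searches to the same shard copies might look like this; the session ID `xyzabc123` is an arbitrary made-up value:

```json
GET /_search?preference=xyzabc123
{
  "query": { "match": { "title": "elasticsearch" } }
}
```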
search_type
While `query_then_fetch` is the default search type, other search types can be specified for particular purposes, for example:

GET /_search?search_type=count

- The `count` search type has only a `query` phase. It can be used when you don’t need search results, just a document count or aggregations on documents matching the query.
- The `query_and_fetch` search type combines the query and fetch phases into a single step. This is an internal optimization that is used when a search request targets a single shard only, such as when a `routing` value has been specified. While you can choose to use this search type manually, it is almost never useful to do so.
- The `dfs_query_then_fetch` and `dfs_query_and_fetch` search types have a prequery phase that fetches the term frequencies from all involved shards in order to calculate global term frequencies. We discuss this further in Relevance Is Broken!.
- The `scan` search type is used in conjunction with the `scroll` API to retrieve large numbers of results efficiently. It does this by disabling sorting. We discuss scan-and-scroll in the next section.
scan and scroll
Scan and scroll is used to retrieve large numbers of documents while avoiding the inefficiency of deep paging. The problem with `from`/`size` paging is the sorting cost once the page number grows large; by disabling sorting, that cost goes away.
- scroll: similar to a cursor in a relational database, it lets you keep scrolling forward through the results. It is only a view of the data: when the scan query is initialized, the scroll sees a snapshot of the data at that point in time, and later updates will not appear in it.
- scan: disables sorting and simply pulls the matching documents out.
- Open a cursor and give it a validity window, such as 1m; the cursor expires one minute later.
- The response returns a `_scroll_id`, which is used to scroll onward.
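A sketch of scan-and-scroll as it worked in Elasticsearch 1.x (the index name and batch size are placeholders):

```json
GET /old_index/_search?search_type=scan&scroll=1m
{
  "query": { "match_all": {} },
  "size": 1000
}
```

The response contains a `_scroll_id`. Passing it to `GET /_search/scroll?scroll=1m` (with the `_scroll_id` as the request body) returns the next batch of documents along with a fresh `_scroll_id`; repeat until no more hits are returned. Each request renews the 1m expiry window.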
index management
In fact, if you want to, you can prevent the automatic creation of indices by adding the following setting to the `config/elasticsearch.yml` file on each node:

action.auto_create_index: false
Specifying the mapping for an index:
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "type": {
      "properties": {
        "field": {
          "type": "string",
          "index": "analyzed",
          "analyzer": "ik",
          "fields": {
            "raw": {
              "type": "string",
              "index": "no"
            }
          }
        }
      }
    }
  }
}
PUT /spanish_docs
Create a custom analyzer named es_std, scoped to this index:
{
"settings": {
"analysis": {
"analyzer": {
"es_std": {
"type": "standard",
"stopwords": "_spanish_"
}
}
}
}
}
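The analyzer can be checked with the `analyze` API; the Spanish sample text here is arbitrary:

```json
GET /spanish_docs/_analyze?analyzer=es_std
El veloz zorro marrón
```

Spanish stopwords such as `El` should be removed from the resulting tokens.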
the root object
- `_id`: the string ID of the document
- `_type`: the type name of the document
- `_index`: the index where the document lives
- `_uid`: the `_type` and `_id` concatenated together as `type#id`
The `_id` field does have one setting that you may want to use: the `path` setting tells Elasticsearch that it should extract the value for the `_id` from a field within the document itself. Here the document `_id` value comes from a field that already exists in the document:

PUT /my_index
{
"mappings": {
"my_type": {
"_id": {
"path": "doc_id"
},
"properties": {
"doc_id": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
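With this mapping in place, indexing a document whose body contains a `doc_id` field should set the document `_id` automatically (the value here is a placeholder):

```json
POST /my_index/my_type
{
  "doc_id": "123"
}
```

The created document gets `_id: "123"`, extracted from the `doc_id` field.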
dynamic mapping
The `dynamic` setting accepts three values:
- `true`: Add new fields dynamically—the default
- `false`: Ignore new fields
- `strict`: Throw an exception if an unknown field is encountered
PUT /my_index
{
"mappings": {
"my_type": {
"dynamic": "strict",
"properties": {
"title": { "type": "string"},
"stash": {
"type": "object",
"dynamic": true
}
}
}
}
}
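Under this mapping, a sketch of what `strict` allows (field values are made up): unmapped fields are fine inside `stash` because its own `dynamic` is `true`:

```json
PUT /my_index/my_type/1
{
  "title": "This doc adds a new field",
  "stash": { "new_field": "Success!" }
}
```

Adding `"new_field"` at the top level instead would be rejected with a strict dynamic mapping exception.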
Setting `dynamic` to `false` doesn’t alter the contents of the `_source` field at all. The `_source` will still contain the whole JSON document that you indexed. However, any unknown fields will not be added to the mapping and will not be searchable.
customizing dynamic mapping
date_detection
Date detection for new string fields can be turned off by setting `date_detection` to `false`:

PUT /my_index
{
"mappings": {
"my_type": {
"date_detection": false
}
}
}
default mappings
The `_default_` mapping serves as the default for all new types in the index; settings in a type’s own mapping override it. Here `_all` is disabled by default but re-enabled for the blog type:
PUT /my_index
{
"mappings": {
"_default_": {
"_all": { "enabled": false }
},
"blog": {
"_all": { "enabled": true }
}
}
}
reindexing your data
To reindex, pull documents out of the old index with scan-and-scroll, and use the `bulk` API to push them into the new index.
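Using scan-and-scroll to pull batches from the old index, each batch of hits is reformatted into bulk `index` actions; the index name, type, ID, and field below are placeholders:

```json
POST /new_index/_bulk
{ "index": { "_type": "my_type", "_id": "1" } }
{ "field": "value" }
```

Repeat for each scroll batch until the scroll returns no more documents.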
index aliases and zero downtime