Elasticsearch tuning notes

Checklist for restarting an ES node:
1. Pause the data ingestion program
(if circumstances allow; production environments usually won't permit it. In our setup, writes that fail to reach ES are spooled to disk and replayed later, so pausing is acceptable. In any case, needing to restart the whole ES cluster almost never happens.)
2. Disable cluster shard allocation
3. Manually run POST /_flush/synced (curl sketch after this list)
4. Restart the node
5. Re-enable cluster shard allocation
6. Wait for recovery to finish and the cluster health status to turn green
7. Resume the data ingestion program
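A minimal curl sketch of step 3 (synced flush), assuming the node answers on localhost:9200:
curl -XPOST 'http://localhost:9200/_flush/synced'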

!!! Data with no index template and with highly variable field types can easily drag ES down (uncontrolled dynamic mappings).
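A minimal index-template sketch that pins field types up front so dynamic mapping cannot run wild (5.x-style template syntax; the template name, index pattern, type, and fields here are hypothetical):
PUT /_template/logs_template
{
    "template" : "logs-*",
    "mappings" : {
        "doc" : {
            "properties" : {
                "message"   : { "type" : "keyword" },
                "timestamp" : { "type" : "date" }
            }
        }
    }
}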

https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html#_using_and_sizing_bulk_requests
When segment merging falls behind and slows down writes, the log will contain
now throttling indexing
The merge throttle defaults to 20 MB/s; on SSDs 100-200 MB/s is recommended:
PUT /_cluster/settings
{
    "persistent" : {
        "indices.store.throttle.max_bytes_per_sec" : "100mb"
    }
}
If you are only loading data and not querying the index yet, you can even turn the throttle off entirely (set the type back to "merge" to re-enable it; see the sketch after the command below):
PUT /_cluster/settings
{
    "transient" : {
        "indices.store.throttle.type" : "none"
    }
}
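To re-enable throttling for normal operation, set the type back to "merge":
PUT /_cluster/settings
{
    "transient" : {
        "indices.store.throttle.type" : "merge"
    }
}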

How to reduce disk I/O pressure on spinning (mechanical) disks:
(This setting will allow max_thread_count + 2 threads to operate on the disk at one time, so a setting of 1 will allow three threads.)
For SSDs, you can ignore this setting. The default is Math.min(3, Runtime.getRuntime().availableProcessors() / 2), which works well for SSD.

This goes in the elasticsearch.yml config file:
         index.merge.scheduler.max_thread_count: 1

Finally, you can increase index.translog.flush_threshold_size from the default 512 MB to something larger, such as 1 GB.
!!! This reduces disk pressure but increases memory pressure.
This allows larger segments to accumulate in the translog before a flush occurs.
By letting larger segments build, you flush less often, and the larger segments merge less often.
All of this adds up to less disk I/O overhead and better indexing rates
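A sketch of applying that translog setting dynamically to a single index (the index name is hypothetical; it can also be set in elasticsearch.yml):
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
    "index.translog.flush_threshold_size" : "1gb"
}'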

=================================
Before restarting an ES data node, disable shard allocation first:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{"transient" : {"cluster.routing.allocation.enable" : "none"}}'
After the restart, re-enable shard allocation:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{"transient" : {"cluster.routing.allocation.enable" : "all"}}'

=================================
Check the current thread pools and node stats:
curl -XGET 'http://localhost:9200/_nodes/stats?pretty'

curl -XGET 'localhost:9200/_cat/nodes?h=name,ram.current,ram.percent,ram.max,fielddata.memory_size,query_cache.memory_size,request_cache.memory_size,percolate.memory_size,segments.memory,segments.index_writer_memory,segments.index_writer_max_memory,segments.version_map_memory,segments.fixed_bitset_memory,heap.current,heap.percent,heap.max&v'

curl -XPOST "localhost:9200/_cache/clear"

=================================
When a node is restarted quickly, especially for cold indices with no incoming writes, this setting is very useful. It defaults to 1m; if the node is not back within that window, its shards get reallocated.
curl -XPUT localhost:9200/_all/_settings -d  '{"settings": {"index.unassigned.node_left.delayed_timeout": "15m"}}'
PUT /_cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}

=================================
Writing tower's spooled on-disk data into ES via the bulk API:
curl -s -XPOST localhost:9200/_bulk --data-binary "@file";
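The @file above must be in bulk (NDJSON) format: one action/metadata line followed by one source line per document, and the file must end with a newline. A minimal sketch with hypothetical index/type/field names:
{ "index" : { "_index" : "my_index", "_type" : "doc", "_id" : "1" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "my_index", "_type" : "doc", "_id" : "2" } }
{ "field1" : "value2" }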

=================================
Specify the number of shards and replicas when creating an index:
curl -XPUT 'localhost:9200/my_index/' -d '{
    "settings" : {
        "number_of_shards" : 20,
        "number_of_replicas" : 0
    }
}'
curl -XPUT 'localhost:9200/tan-load-20171021/' -d '{
    "settings" : {
        "number_of_shards" : 1,
        "number_of_replicas" : 1
    }
}'

=================================
When writing a large volume of data in a short time, disable replicas first (set number_of_replicas to 0) and restore the normal replica count once the load is done:
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'

===============================
Find the largest shard (in GB):
max=0;for i in `curl localhost:9200/_cat/shards?v |awk '{print $6}' |grep gb |awk -F 'gb' '{print $1}'`;do c=$(echo "$i>$max"|bc);if [ $c -eq 1 ];then max=$i;fi;done;echo $max
Find the largest index (in GB):
max=0;for i in `curl localhost:9200/_cat/indices?v |grep open |awk '{print $8}' |grep gb |awk -F 'gb' '{print $1}'`;do c=$(echo "$i>$max"|bc);if [ $c -eq 1 ];then max=$i;fi;done;echo $max

================================
Disable disk-usage-based shard allocation (still to be tested: does this mitigate the problem where writes only land on the nodes with the most free disk, so the other disks sit under-used and write throughput is poor?):
curl -XPUT localhost:9200/_cluster/settings -d '{ 
    "transient" : { 
        "cluster.routing.allocation.disk.threshold_enabled" : false 
    } 
}'

=================================
Delete documents matching a query; this can also be used to delete an entire type (see the sketch after the example):
POST twitter/_delete_by_query
{
  "query": {
    "match": {
      "message": "some message"
    }
  }
}
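A sketch of clearing out a whole type by scoping _delete_by_query to that type with match_all (assumes a version where mapping types still exist; index/type names are hypothetical):
POST twitter/tweet/_delete_by_query
{
  "query": {
    "match_all": {}
  }
}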

=================================
Disk usage watermark settings
cluster.routing.allocation.disk.watermark.low
Controls the low watermark for disk usage. It defaults to 85%, meaning ES will not allocate new shards to nodes once they have more than 85% disk used. It can also be set to an absolute byte value (like 500mb) to prevent ES from allocating shards if less than the configured amount of space is available

cluster.routing.allocation.disk.watermark.high
Controls the high watermark. It defaults to 90%, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 90%. It can also be set to an absolute byte value (similar to the low watermark) to relocate shards once less than the configured amount of space is available on the node.

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient": {
      "cluster.routing.allocation.disk.watermark.low": "90%"
    }
}'
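A sketch that raises both watermarks in one call (the values are illustrative):
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient": {
      "cluster.routing.allocation.disk.watermark.low": "90%",
      "cluster.routing.allocation.disk.watermark.high": "95%"
    }
}'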

==================================
Problem shards: how to handle unassigned shards
https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/
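Two quick diagnostics before following the guide above (the allocation-explain API assumes ES 5.0+):
curl -s 'localhost:9200/_cat/shards?v' | grep UNASSIGNED
curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'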

==================================
Before an all-out bulk-loading run you can disable refresh (restore it afterwards; see the sketch below the command):
curl -XPUT  localhost:9200/my_index/_settings -d '{"index":{"refresh_interval":-1}}'
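To restore the stock behaviour once the load is done, set the interval back to the 1s default:
curl -XPUT  localhost:9200/my_index/_settings -d '{"index":{"refresh_interval":"1s"}}'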

==================================
Shard operations
#Dynamically set the number of replicas for an index
curl -XPUT 'http://168.7.1.67:9200/log4j-emobilelog/_settings' -d '{  
   "number_of_replicas" : 2  
}'  
  
#Disable automatic shard allocation
curl -XPUT 'http://168.7.1.67:9200/_cluster/settings' -d '{
   "transient" : {
       "cluster.routing.allocation.enable" : "none"
   }
}'
  
#Manually move a shard
curl -XPOST 'http://168.7.1.67:9200/_cluster/reroute' -d '{
   "commands" : [{
        "move" : {
            "index" : "log4j-emobilelog",
            "shard" : 0,
            "from_node" : "es-0",
            "to_node" : "es-3"
        }
    }]
}'
  
#Manually allocate a shard
curl -XPOST 'http://168.7.1.67:9200/_cluster/reroute' -d '{
   "commands" : [{
        "allocate" : {
            "index" : ".kibana",
            "shard" : 0,
            "node" : "es-2"
        }
    }]
}'
