Elasticsearch 6.0.0 read_only_allow_delete: false

Resolving the shard-synchronization error that appears when upgrading Elasticsearch 6.0.0 from a single node to a multi-node cluster
Original post, January 18, 2018, 16:33:21
After starting Elasticsearch on multiple nodes, ES begins electing a master node and synchronizing shard data to the new ES node. At this point the Logstash log throws the following error:


[logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})


This happens because the new ES node's data directory has insufficient storage space, so receiving the shard data synchronized from the master node fails. To protect the data, the ES cluster automatically marks the index shards as read-only.
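
To confirm that disk space is the cause, per-node disk usage can be checked with the _cat/allocation API; a minimal sketch, assuming the same node address used later in this post:

curl http://10.0.7.220:9200/_cat/allocation?v

The disk.used, disk.avail and disk.percent columns show how full each data node's disk is.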


Resolution steps:


1. Provide enough storage space for data to be written. If you need to change the ES data directory in the configuration file, remember to restart ES afterwards (see the sketch below).
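
If the data directory does need to move to a larger volume, the change is made with path.data in elasticsearch.yml; a minimal sketch, where the path below is only a placeholder:

path.data: /data/elasticsearch    # point ES at a volume with enough free space

ES only picks up this change after a restart, and any existing data in the old directory is not moved automatically.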


2. Lift the read-only restriction on the indices by running the following in Kibana's Dev Tools (or by sending the PUT request with curl on the server, as sketched after the snippet; the same applies to the requests below):


PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
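
The same request can be sent with curl instead of Kibana; a minimal sketch, assuming the node address used below:

curl -XPUT -H "Content-Type: application/json" http://10.0.7.220:9200/_settings -d '{"index": {"blocks": {"read_only_allow_delete": "false"}}}'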


Then check the ES cluster status: curl http://10.0.7.220:9200/_cluster/health?pretty


Notice that the value of "active_shards_percent_as_number" : 12.0 changes, rising as shards are synchronized to the new node;
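
To follow the shard synchronization in more detail, the _cat/recovery API lists each shard recovery and its progress; a minimal sketch against the same node:

curl http://10.0.7.220:9200/_cat/recovery?v

Once all shards have been allocated, active_shards_percent_as_number reaches 100.0 and the cluster status turns green.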

[FORBIDDEN/12/index read-only / allow delete (api)] - read only elasticsearch indices

If your Elasticsearch responds with 403 and this message:

{
  "type": "cluster_block_exception",
  "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
}

Then you have probably recovered from a full hard drive. Elasticsearch switches an index to read-only when it can no longer index documents because the disk is full; this keeps the index available for read-only queries. Elasticsearch will not switch back automatically, but you can clear the block by sending

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
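
For background: in Elasticsearch 6.x this block is applied when a node crosses the flood-stage disk watermark (cluster.routing.allocation.disk.watermark.flood_stage, 95% by default). It can be raised temporarily through the cluster settings API while space is being freed; a minimal sketch, assuming a node at localhost:9200:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{"transient": {"cluster.routing.allocation.disk.watermark.flood_stage": "97%"}}'

Freeing disk space remains the real fix; raising the watermark only postpones the block.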
