Bulk-importing data into Elasticsearch from files

Preface

There was a requirement to test the performance of a single ES index, which meant loading 100 million documents into one index. After comparing three common bulk import approaches, I settled on generating data files and importing them with a shell script.

The index mapping is as follows:

PUT corpus_details_17
{
  "settings": {
    "index.blocks.read_only_allow_delete": "false",
    "index.max_result_window": "10000000",
    "number_of_replicas": "0",
    "number_of_shards": "1"
  },
  "mappings": {
      "properties": {
        "targetContent": {
          "type": "text"
        },
        "sourceContent": {
          "type": "text"
        },
        "sourceLanguageId": {
          "type": "long"
        },
        "realmCode": {
          "type": "long"
        },
        "createTime": {
          "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis",
          "type": "date"
        },
        "corpusScore": {
          "type": "float"
        },
        "id": {
          "type": "long"
        },
        "targetLanguageId": {
          "type": "long"
        }
    }
  }
}
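
If you would rather create the index from code than from the Kibana console, a minimal sketch with the 7.x RestHighLevelClient looks like this (host and port are taken from the curl examples later in this post; basic auth is omitted and the mappings string is abbreviated):

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class CreateCorpusIndex {

    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("172.16.0.65", 7201, "http")))) {

            // Same body as the PUT request above; only two fields shown here.
            String body = "{"
                    + "\"settings\":{\"number_of_shards\":1,\"number_of_replicas\":0},"
                    + "\"mappings\":{\"properties\":{"
                    + "\"sourceContent\":{\"type\":\"text\"},"
                    + "\"id\":{\"type\":\"long\"}"
                    + "}}}";

            CreateIndexRequest request = new CreateIndexRequest("corpus_details_17")
                    .source(body, XContentType.JSON);
            client.indices().create(request, RequestOptions.DEFAULT);
        }
    }
}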

Approach 1: REST API + shell

Importing through the REST API, one document per request, is fine when the data volume is very small, but pushing 100 million documents this way takes far too long. Not recommended; a sketch of what it looks like follows.
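
A minimal sketch of the per-document approach, using the low-level Java RestClient (host, port, and the sample document are illustrative; authentication is omitted):

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class SingleDocImport {

    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(
                new HttpHost("172.16.0.65", 7201, "http")).build()) {

            // One HTTP round trip per document: at 100 million documents the
            // per-request overhead dominates, which is why this is so slow.
            Request request = new Request("POST", "/corpus_details_17/_doc");
            request.setJsonEntity("{\"id\":15,\"sourceContent\":\"测试数据\",\"sourceLanguageId\":1}");
            client.performRequest(request);
        }
    }
}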

Approach 2: bulk import via the Java client

This approach can be multi-threaded, but it is not fundamentally different from the REST API: it is only a few times faster, which is still very slow at the 100-million scale. Not recommended; a sketch is shown below.
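
A minimal sketch of a Java-client bulk import, assuming the 7.x RestHighLevelClient (connection details are placeholders; a real run would loop over batches and hand them to a thread pool):

import org.apache.http.HttpHost;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

public class JavaClientBulkImport {

    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("172.16.0.65", 7201, "http")))) {

            // Build one bulk request of 1,000 documents shaped like the
            // sample data from this post.
            BulkRequest bulk = new BulkRequest();
            for (int i = 0; i < 1000; i++) {
                String doc = "{\"id\":" + (i + 14) + ",\"sourceContent\":\"测试数据\","
                        + "\"sourceLanguageId\":1,\"targetLanguageId\":2,"
                        + "\"targetContent\":\"It's a cold winter AA." + i + "\"}";
                bulk.add(new IndexRequest("corpus_details_17").source(doc, XContentType.JSON));
            }
            client.bulk(bulk, RequestOptions.DEFAULT);
        }
    }
}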

Approach 3: generate data files in bulk + shell

Official docs: https://www.elastic.co/guide/cn/elasticsearch/guide/current/bulk.html#bulk
This page gives a brief introduction to the bulk import operation.

1. First, generate the data files to import. The files use the newline-delimited JSON format expected by the bulk API, and each file must end with a trailing newline.

{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":15,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.1","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":16,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.2","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":17,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.3","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":18,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.4","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":19,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.5","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}

_index is the target index and _type the document type (ES defaults to _doc; the field is deprecated in 7.x and can simply be omitted). The line after each action is the document to insert. Keep each data file to roughly 25 MB. The files were generated with the Java program below (CorpusDetailsMapping is a plain POJO with the fields from the mapping above):

import cn.hutool.json.JSONUtil;
import lombok.extern.slf4j.Slf4j;

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;

@Slf4j
public class GenerateFile {

    public static void main(String[] args) throws Exception {

        final LocalDateTime time = LocalDateTime.of(2022, 12, 29, 13, 34, 33);

        int count = 1;
        String filePath = "test" + count + ".json";
        OutputStream out = new BufferedOutputStream(new FileOutputStream(filePath, false));

        for (int i = 0; i <= 100000000; i++) {

            CorpusDetailsMapping mapping = new CorpusDetailsMapping();
            mapping.setId((long) (i + 14));
            mapping.setSourceContent("测试数据");
            mapping.setSourceLanguageId(1);
            mapping.setTargetContent("It's a cold winter AA." + i);
            mapping.setTargetLanguageId(2);
            mapping.setRealmCode(0);
            mapping.setCorpusScore(0.842105f);
            mapping.setCreateTime(time);

            String json = JSONUtil.toJsonStr(mapping);

            // One action line plus one document line, each newline-terminated,
            // as the bulk API requires.
            json = "{\"index\":{\"_index\":\"corpus_details_17\",\"_type\":\"_doc\"}}\n" + json + "\n";

            out.write(json.getBytes(StandardCharsets.UTF_8));

            // Rotate to a new file every 100,000 documents. The check also fires
            // at i == 0, so test1.json holds a single document and the rest fill
            // test2.json through test1001.json (plus one trailing empty file).
            if (i % 100000 == 0) {
                out.close();
                count++;
                log.info("File written: " + filePath);
                filePath = "test" + count + ".json";
                out = new BufferedOutputStream(new FileOutputStream(filePath, false));
            }
        }
        out.close();
    }
}

2. Each file can then be bulk-imported with:

curl -u name:'pwd' -XPUT "172.16.0.65:7201/_bulk" -H "Content-Type:application/json" --data-binary @test1.json

3. The Java program above generates 1001 data files; a shell script then feeds them to the bulk endpoint one after another:

int=0
while ((int < 1001))
do
    let "int++"
    echo test"$int".json
    curl -u name:'pwd' -XPUT "172.16.0.65:7201/_bulk" -H "Content-Type:application/json" --data-binary @test"$int".json
done

4. Conclusion: a single thread generated the data files for 100 million documents in about half an hour, and the import of all 100 million documents finished in roughly one hour.
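
As a sanity check, the document count can be compared against the expected total once the import finishes; a minimal sketch with the same 7.x high-level client (connection details as above):

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.core.CountRequest;

public class VerifyImport {

    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("172.16.0.65", 7201, "http")))) {

            // _count only reflects refreshed documents; refresh the index first
            // if the import has just finished.
            long total = client.count(new CountRequest("corpus_details_17"),
                    RequestOptions.DEFAULT).getCount();
            System.out.println("documents indexed: " + total);
        }
    }
}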
