Syncing MySQL data to Elasticsearch with Logstash (covering ES setup and Spring Boot search integration)

 

References:

https://cloud.tencent.com/developer/article/1183253

https://www.cnblogs.com/ashleyboy/p/9612453.html

A brief introduction to ES:

ES (Elasticsearch) is a highly scalable open-source full-text search and analytics engine that can store, search, and analyze large volumes of data quickly and in near real time. It is used to support complex search requirements and enterprise-grade applications. Put simply, it is a search engine that provides full-text search, similar in function to Lucene and Solr.

What this post implements:

In a conventional MVC application, queries go through a relational database, so there is no good way to do global full-text search. The goal is to keep all inserts, updates, and deletes in the database as before, and serve queries through an interface exposed to clients via ES. Before building this, several points were fuzzy to me: many articles online only show fragments of the ES Java API, with no complete flow starting from the database. After some digging, I see two ways to implement it: first, push to ES on every database insert/update/delete; second, pull data from the database into ES on a schedule. This article takes the second approach, covering everything from creating the database table, through configuring the sync to ES, to exposing a query interface from Spring Boot.

I. Installing ES:

The following covers installing ES and the head plugin, both on Windows.

1. Installing ES:

(1) Download: https://www.elastic.co/downloads/elasticsearch

(2) Install: installation is simple, just unzip. Then go into the bin directory and run elasticsearch.bat to start ES.

(3) Register as a Windows service: running that bat file at every start is tedious, so ES can be registered as a local Windows service. Open a cmd window in ES's bin directory and run: elasticsearch-service.bat install. Once it succeeds, the ES service shows up in the Windows service list.
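elasticsearch-service.bat supports a few more subcommands; for quick reference (these are the standard options shipped with the Windows distribution):

elasticsearch-service.bat install   (register the service)
elasticsearch-service.bat start     (start it)
elasticsearch-service.bat stop      (stop it)
elasticsearch-service.bat remove    (unregister it)
elasticsearch-service.bat manager   (open the service manager GUI)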

(4) Access ES: after starting the ES service, visit localhost:9200 and you should see ES's JSON info response.

Pay special attention: ES's Java transport port is 9300 and its HTTP port is 9200; never use 9200 to connect from Java client code.
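To illustrate, a minimal sketch of a 6.x-era Java TransportClient connecting on 9300 (the cluster name elasticsearch is the default and an assumption here; this class is not part of the demo project):

import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class EsClientDemo {
    public static void main(String[] args) throws Exception {
        // "elasticsearch" is the default cluster name; change it if yours differs
        Settings settings = Settings.builder()
                .put("cluster.name", "elasticsearch")
                .build();
        // 9300 is the transport port -- using 9200 here fails with a parse error
        TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
        System.out.println(client.connectedNodes());
        client.close();
    }
}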

 

2. Installing the head plugin:

The head plugin makes working with ES convenient; head is to ES roughly what Navicat is to MySQL, except that head's UI is itself opened in the browser.

(1) Download the head plugin: https://github.com/search?q=elasticsearch-head

 

Download the elasticsearch-head project from that page and unzip it.

(2) Install node.js (https://nodejs.org/en/download/): the head plugin needs node.js, so install it first.

 

Just click Next through the installer. Run node -v; if it prints the node.js version, the install succeeded.

(3) Install grunt: head is started with the grunt command, so it needs to be installed. Open a cmd window in node.js's installation root and run npm install -g grunt-cli

(4) Install phantomjs: open a cmd window in the head plugin's unzipped root directory and run npm install

(5) Connect to ES: in the config directory under ES's installation root there is elasticsearch.yml; add to it the cross-origin settings that head needs, typically:

http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 0.0.0.0

I added network.host: 0.0.0.0 as well; I forget which later problem prompted it.

(6) Run head: open a cmd window in the head plugin's unzipped root directory and run grunt server; once it reports that the server is running, it has started successfully.

 

Then visit localhost:9100; if the head UI comes up, the installation succeeded.

II. Installing Logstash and syncing data to Elasticsearch

1. Download the Logstash package; note that its version should match your elasticsearch version. On Windows, just unzip it.

2. Add the MySQL sync configuration, and put the MySQL JDBC driver jar in the directory the config points at.

Here the sql file is the query statement for the table to be synced, e.g. select * from xxx.
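For the t_blog table created later in this post, blog.sql can be as simple as:

-- blog.sql: the rows and columns Logstash should index
SELECT id, title, create_time, content FROM t_blog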

mysql.conf is the sync configuration file:

input {
    stdin {
    }
    jdbc {
      # mysql connection
      jdbc_connection_string => "jdbc:mysql://localhost/test?characterEncoding=utf-8&useSSL=false&serverTimezone=UTC"
      # mysql username and password
      jdbc_user => "root"
      jdbc_password => ""
      # JDBC driver jar
      jdbc_driver_library => "D:\tools\logstash-7.0.0\myconfig\mysql-connector-java-5.1.44.jar"
      # driver class
      jdbc_driver_class => "com.mysql.jdbc.Driver"
      jdbc_paging_enabled => "true"
      jdbc_page_size => "50000"
      # SQL file to execute
      statement_filepath => "D:\tools\logstash-7.0.0\myconfig\blog.sql"
      # cron-like schedule: minute hour day-of-month month day-of-week;
      # the default all-asterisks means run every minute
      schedule => "* * * * *"
      # event type, used to tell tables/indices apart
      type => "blog"
    }
}
 
filter {
    json {
        source => "message"
        remove_field => ["message"]
    }
}
 
output {

    elasticsearch {
        # es server
        hosts => ["localhost:9200"]
        # ES index name
        index => "sl_blog"
        # use the table's id column as the document id
        document_id => "%{id}"
    }
    

    stdout {
        codec => json_lines
    }
}

Watch the encoding of this config file; I hit an encoding pitfall the first time I started Logstash (on Windows, saving the file as UTF-8 is the usual way to avoid it).
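You can also have Logstash validate the file without starting a sync, via its standard config-check flag:

logstash -f ../myconfig/mysql.conf --config.test_and_exit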

To sync several MySQL tables, extend mysql.conf with additional tables in its input and output sections, as sketched below. (I haven't tested this yet; I'll fill in the real config once I have.)
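Untested here, but a sketch of what that could look like, using type to route each table to its own index (the second table and its index name are made up):

input {
    jdbc {
      # ...same connection settings as above...
      statement_filepath => "D:\tools\logstash-7.0.0\myconfig\blog.sql"
      schedule => "* * * * *"
      type => "blog"
    }
    jdbc {
      # ...connection settings for the second table...
      statement_filepath => "D:\tools\logstash-7.0.0\myconfig\other.sql"
      schedule => "* * * * *"
      type => "other"
    }
}

output {
    if [type] == "blog" {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "sl_blog"
            document_id => "%{id}"
        }
    }
    if [type] == "other" {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "sl_other"
            document_id => "%{id}"
        }
    }
}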

3. Start Logstash. If all is well it will sync the data to elasticsearch; with the schedule above, Logstash queries the database for the latest data every minute.

  logstash -f ../myconfig/mysql.conf

The log will show data being flushed to es.
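One caveat: with a plain select *, every scheduled run re-reads the whole table. If you only want rows changed since the last run, the jdbc input has tracking-column support; a sketch (the option names are real logstash-input-jdbc settings, but choosing create_time as the tracking column is my assumption):

    jdbc {
      # ...connection settings as above...
      use_column_value => true
      tracking_column => "create_time"
      tracking_column_type => "timestamp"
      # :sql_last_value holds the last seen create_time between runs
      statement => "SELECT * FROM t_blog WHERE create_time > :sql_last_value"
      schedule => "* * * * *"
      type => "blog"
    }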

Then refresh in head and the data is there.
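You can also check without head, straight against the REST API (a browser works too):

curl "http://localhost:9200/sl_blog/_search?pretty"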

 

The SQL dump:

/*
Navicat MySQL Data Transfer

Source Server         : localhost
Source Server Version : 50155
Source Host           : localhost:3306
Source Database       : test

Target Server Type    : MYSQL
Target Server Version : 50155
File Encoding         : 65001

Date: 2019-04-24 19:18:23
*/

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for t_blog
-- ----------------------------
DROP TABLE IF EXISTS `t_blog`;
CREATE TABLE `t_blog` (
  `id` varchar(32) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
  `title` varchar(32) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  `content` varchar(2000) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_blog
-- ----------------------------
INSERT INTO `t_blog` VALUES ('1', '中国', '2019-04-24 15:22:30', '中国当自强');
INSERT INTO `t_blog` VALUES ('2', 'myblog', '2019-04-24 15:22:50', 'my blog content');
INSERT INTO `t_blog` VALUES ('3', '上海', '2019-04-24 15:23:08', '中国上海');

III. Integrating ES into Spring Boot for search

The code:

pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.4.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>esdemo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>esdemo</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <!--<dependency>
            <groupId>io.searchbox</groupId>
            <artifactId>jest</artifactId>
        </dependency>

        <dependency>
            <groupId>net.java.dev.jna</groupId>
            <artifactId>jna</artifactId>
        </dependency>-->

        <!-- lombok: cuts down a lot of boilerplate code -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

Configuration file:

# enable Elasticsearch repositories (default: true)
spring:
  data:
    elasticsearch:
      repositories:
        enabled: true
      cluster-nodes: 127.0.0.1:9300
      cluster-name: elasticsearch
  # 9300 is the Java client port by default; 9200 is the RESTful HTTP port
# ES connection timeout, if needed:
#spring.data.elasticsearch.properties.transport.tcp.connect_timeout=120s

controller:

package com.example.esdemo.web;

import com.example.esdemo.entity.EsBlog;
import com.example.esdemo.service.IEsBlogService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

/**
 * Created by hui.yunfei@qq.com on 2019/4/24
 */
@RestController
@RequestMapping("/blogs")
public class BlogController {
    @Autowired
    private IEsBlogService esBlogService;

    @GetMapping(value="/query")
    public Page<EsBlog> listBlogs(//@RequestParam(value="order",required=false,defaultValue="new") String order,
                            @RequestParam(value="keyword",required=false,defaultValue="" ) String keyword,
                            //@RequestParam(value="async",required=false) boolean async,
                            @RequestParam(value="pageIndex",required=false,defaultValue="0") int pageIndex,
                            @RequestParam(value="pageSize",required=false,defaultValue="5") int pageSize,
                            Model model) {
        Pageable pageable = PageRequest.of(pageIndex, pageSize);
        Page<EsBlog> page = esBlogService.getEsBlogByKeys(keyword,pageable);
//        List<EsBlog> list  = page.getContent();
//        model.addAttribute("order", order);
//        model.addAttribute("keyword", keyword);
//        model.addAttribute("page", page);
//        model.addAttribute("blogList", list);
        return page;
    }
}
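Once everything below is wired up, a request such as

GET http://localhost:8080/blogs/query?keyword=中国&pageIndex=0&pageSize=5

returns a JSON Page of matching blogs (8080 being Spring Boot's default port).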

service:

package com.example.esdemo.service;

import com.example.esdemo.entity.EsBlog;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;

/**
 * Created by hui.yunfei@qq.com on 2019/4/24
 */
public interface IEsBlogService {
    Page<EsBlog> getEsBlogByKeys(String keyword, Pageable pageable);
}

Implementation class:

package com.example.esdemo.service.impl;

import com.example.esdemo.dao.IEsBlogRepository;
import com.example.esdemo.entity.EsBlog;
import com.example.esdemo.service.IEsBlogService;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.query.NativeSearchQuery;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.stereotype.Service;

/**
 * Created by hui.yunfei@qq.com on 2019/4/24
 */
@Service
public class EsBlogServiceImpl implements IEsBlogService {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    @Autowired
    private IEsBlogRepository esBlogRepository;
    @Override
    public Page<EsBlog> getEsBlogByKeys(String keyword, Pageable pageable) {
        // In Spring Data 2.x, pageable.getSort() returns Sort.unsorted() rather than null.
        // The original sort fields (read_size etc.) came from the referenced article and do
        // not exist in this index, so default to create_time, the field Logstash indexed.
        if (pageable.getSort().isUnsorted()) {
            Sort sort = Sort.by(Sort.Direction.DESC, "create_time");
            pageable = PageRequest.of(pageable.getPageNumber(), pageable.getPageSize(), sort);
        }
        if(keyword==null || "".equals(keyword)){
            return esBlogRepository.findAll(pageable);
        }
        //throws an exception when keyword contains spaces
        //return esBlogRepository.findDistinctEsBlogByTitleContainingOrSummaryContainingOrContentContainingOrTagsContaining(keyword, keyword, keyword, keyword, pageable);

        //use the Elasticsearch QueryBuilder API instead
        NativeSearchQueryBuilder aNativeSearchQueryBuilder = new NativeSearchQueryBuilder();
        // pass pageable through, otherwise the paging parameters are ignored for keyword search
        aNativeSearchQueryBuilder.withIndices("sl_blog").withTypes("doc").withPageable(pageable);
        final BoolQueryBuilder aQuery = new BoolQueryBuilder();
        //must / should / mustNot on the builder correspond to logical and / or / not
        aQuery.should(QueryBuilders.queryStringQuery(keyword).defaultField("title"));
        aQuery.should(QueryBuilders.queryStringQuery(keyword).defaultField("content"));

        NativeSearchQuery nativeSearchQuery = aNativeSearchQueryBuilder.withQuery(aQuery).build();
        Page<EsBlog> plist = elasticsearchTemplate.queryForPage(nativeSearchQuery,EsBlog.class);
        return  plist;
    }
}
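For reference, the bool/should query built above corresponds roughly to this query DSL (a sketch):

{
  "query": {
    "bool": {
      "should": [
        { "query_string": { "default_field": "title",   "query": "<keyword>" } },
        { "query_string": { "default_field": "content", "query": "<keyword>" } }
      ]
    }
  }
}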

 

entity:

package com.example.esdemo.entity;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

import java.util.Date;

/**
 * Created by hui.yunfei@qq.com on 2019/4/24
 */
@Data
@AllArgsConstructor
@NoArgsConstructor
//maps this class to the Elasticsearch index and document type
@Document(indexName = "sl_blog", type = "doc")
public class EsBlog {

    @Id
    private String id;

    private String title;

    private Date createTime;

    private String content;


}
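One thing to watch here: Logstash indexes the column as create_time, while this entity maps the property createTime, so createTime may come back null on search hits. Aliasing the column in blog.sql (select create_time as createTime ...) is one simple workaround.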

dao:

package com.example.esdemo.dao;

import com.example.esdemo.entity.EsBlog;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchCrudRepository;
import org.springframework.stereotype.Component;

/**
 * Created by hui.yunfei@qq.com on 2019/4/24
 */
@Component
public interface IEsBlogRepository extends ElasticsearchCrudRepository<EsBlog,String> {

    Page<EsBlog> findDistinctEsBlogByTitleContainingOrContentContaining(String title, String content, Pageable pageable);
}
 

The sample code is at https://github.com/huiyunfei/studyDemo/tree/master/esdemo

 

After the Spring Boot app started, the ES server kept logging the error below; roughly, it received a message from an unsupported 6.4 version, while the minimal compatible version is 6.7:

[2019-04-24T17:17:44,749][WARN ][o.e.t.TcpTransport       ] [306C04] exception caught on transport layer [Netty4TcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:55140}], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [6.4.3] minimal compatible version is: [6.7.0]
	at org.elasticsearch.transport.InboundMessage.ensureVersionCompatibility(InboundMessage.java:137) ~[elasticsearch-7.0.0.jar:7.0.0]
	at org.elasticsearch.transport.InboundMessage.access$000(InboundMessage.java:39) ~[elasticsearch-7.0.0.jar:7.0.0]
	at org.elasticsearch.transport.InboundMessage$Reader.deserialize(InboundMessage.java:76) ~[elasticsearch-7.0.0.jar:7.0.0]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:917) ~[elasticsearch-7.0.0.jar:7.0.0]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:753) [elasticsearch-7.0.0.jar:7.0.0]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:53) [transport-netty4-client-7.0.0.jar:7.0.0]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]

Then I found https://blog.csdn.net/chengyuqiang/article/details/86135795.

A claim repeated all over the web:

Spring Boot 2's spring-boot-starter-data-elasticsearch supports Elasticsearch 2.x, but Elasticsearch itself has moved on to 6.5.x; so, to get the newer Elasticsearch features, people dropped the spring-boot-starter-data-elasticsearch dependency and used spring-data-elasticsearch directly. In short: the Boot starter supports Elasticsearch 2.x, switch to spring-data-elasticsearch:
https://github.com/spring-projects/spring-data-elasticsearch

spring data elasticsearch    elasticsearch
3.2.x                        6.5.0
3.1.x                        6.2.2
3.0.x                        5.5.0
2.1.x                        2.4.0
2.0.x                        2.2.0
1.3.x                        1.5.2

At first I believed this too. But that post tried Spring Boot 2's spring-boot-starter-data-elasticsearch against elasticsearch 6.x and, in practice, it works.

My fix: I'm on Spring Boot 2.x, whose spring-boot-starter-data-elasticsearch 2.1.4 is built against an ES 6.4.x client (hence the [6.4.3] in the log above), so I simply switched the ES server to 6.4 and the error went away.

After restarting, a different error appeared:

[2019-04-24T18:13:44,564][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [x1O_zkA] failed to put mappings on indices [[[sl_blog/801OMmGSRHuchJPedmzJ_Q]]], type [blog]
java.lang.IllegalArgumentException: Rejecting mapping update to [sl_blog] as the final mapping would have more than 1 type: [doc, blog]
	at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:407) ~[elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:355) ~[elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:287) ~[elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:312) ~[elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:229) ~[elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:639) ~[elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:268) ~[elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:198) [elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:133) [elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:624) [elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244) [elasticsearch-6.4.1.jar:6.4.1]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207) [elasticsearch-6.4.1.jar:6.4.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]

A quick Baidu search said it was some version issue, which I didn't really follow, but the message itself is clear: the index would end up with more than one mapping type. ES 6.x allows only a single type per index; Logstash had created the sl_blog index with type doc (its default for ES 6.x), while my entity declared type blog. So I changed the entity's type to doc to match.

Then I changed the query builder line in the service impl accordingly:

        aNativeSearchQueryBuilder.withIndices("sl_blog").withTypes("doc").withPageable(pageable);

Restarted, tested with Postman, and it worked. (I'll dig into this properly when I have time; for now I'm just writing it down before heading home.)
