Syncing MySQL data to Elasticsearch with Canal + RocketMQ

This article reviews the common causes of database/cache inconsistency, such as synchronization lag and cache invalidation, and weighs the pros and cons of different ways to synchronize a database with Elasticsearch. It then walks through the most popular approach: Canal tails the MySQL binlog and ships changes through RocketMQ to Elasticsearch, covering the MySQL configuration, the Canal and RocketMQ configuration, and the consumer-side RocketMQ listener that applies the changes to ES.

The database and cache synchronization problem

In real-world development we frequently run into inconsistencies between the database and a cache. There are many causes — cached data that is not refreshed in time, expired entries lingering in the cache — and most of them come down to synchronization lag, cache invalidation, expiry, or incorrect usage. In practice we often use Elasticsearch as the search layer and to back consumer-facing list pages. The common ways to keep a database and ES in sync are:

- Synchronous double-write: best freshness, but couples business code tightly to ES.
- Scheduled jobs: simple to implement, but freshness is not guaranteed.
- Asynchronous double-write: good freshness, but introduces a new component and raises code complexity.
- Data subscription: good freshness and low code intrusion, at the cost of operating a new component.

Below is a simple implementation of the popular data-subscription approach: Canal listens to the MySQL binlog and the changes are synced to ES.

Implementation steps

1. Enable the MySQL binlog and create the account/password Canal will use to connect.

        For this demo, MySQL is created with docker-compose; do not use this setup in production.

1. Create the directory (/home/cs/docker-compose/mysql).

2. Create a my.cnf file under it (/home/cs/docker-compose/mysql/conf/my.cnf):

[mysqld]
user=mysql
character-set-server=utf8
default_authentication_plugin=mysql_native_password
secure_file_priv=/var/lib/mysql
expire_logs_days=7
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
max_connections=1000
log-bin=mysql-bin   # enable binlog
binlog-format=ROW   # ROW mode, required by Canal
server_id=1         # required for MySQL replication; must not clash with Canal's slaveId

[client]
default-character-set=utf8

[mysql]
default-character-set=utf8

3. Create docker-compose.yml in the same directory:

version: '3'
services:
  mysql:
    image: mysql:5.7
    restart: always
    privileged: true
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      TZ: Asia/Shanghai
    command:
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
      --max_allowed_packet=128M
    ports:
      - 3306:3306
    volumes:
      - /home/cs/docker-compose/mysql/data:/var/lib/mysql
      - /home/cs/docker-compose/mysql/conf/my.cnf:/etc/my.cnf
      - /home/cs/docker-compose/mysql/init:/docker-entrypoint-initdb.d

4. From the mysql directory, run docker-compose up -d.

5. Create the account:

# create a user whose username and password are both canal
CREATE USER canal IDENTIFIED BY 'canal';  
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
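
Before moving on, it's worth confirming from any MySQL client that the binlog is actually enabled:

SHOW VARIABLES LIKE 'log_bin';       -- should be ON
SHOW VARIABLES LIKE 'binlog_format'; -- should be ROW
SHOW MASTER STATUS;                  -- the binlog file/position Canal will start from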

2. Install canal-server and configure it.

2.1 The docker-compose.yml file for canal-server

version: '3'

services:
  canal-server:
    image: canal/canal-server:v1.1.5
    container_name: canal-server
    restart: always
    networks:
      - mynetwork
    ports:
      - 11111:11111
    environment:
      - SEATA_IP=192.168.180.145
    volumes:
      - ./conf/example:/home/admin/canal-server/conf/example
      - ./conf/canal.properties:/home/admin/canal-server/conf/canal.properties
      - ./logs:/home/admin/canal-server/logs

networks:
  mynetwork:
    external: true
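
The compose file references an external network, so create it once before starting; then tail the instance log to confirm Canal has connected to MySQL (the log path assumes the default example instance):

docker network create mynetwork
docker-compose up -d
docker exec canal-server tail -f /home/admin/canal-server/logs/example/example.log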

2.2 Configure the canal.properties file

#################################################
#########         common argument        #############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 192.168.180.145:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

canal.zkServers =
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ
canal.serverMode = rocketMQ
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024
## meory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecing config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size =  1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########         destinations        #############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
#########           MQ Properties      #############
##################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
#########              Kafka              #############
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"

##################################################
#########             RocketMQ         #############
##################################################
rocketmq.producer.group = canal-topic
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 192.168.180.145:9876
rocketmq.retry.times.when.send.failed = 3
rocketmq.vip.channel.enabled = false
rocketmq.tag =

##################################################
#########             RabbitMQ         #############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.deliveryMode =
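
The file above is essentially the stock v1.1.5 template; the lines that actually matter for this setup are:

canal.serverMode = rocketMQ
canal.mq.flatMessage = true
rocketmq.producer.group = canal-topic
rocketmq.namesrv.addr = 192.168.180.145:9876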

2.3 Configure the instance.properties file

[root@localhost conf]# ls
canal.properties  example
[root@localhost conf]# cd example/
[root@localhost example]# ls
instance.properties  meta.dat
[root@localhost example]# cat instance.properties
#################################################
## mysql serverId , v1.0.26+ will autoGen
canal.instance.mysql.slaveId=13

# enable gtid use true/false
canal.instance.gtidon=false

# position info
canal.instance.master.address=192.168.180.145:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=false
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=zc_cms\\..*
canal.instance.filter.black.regex=
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=canal-topic
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#################################################
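
With canal.mq.flatMessage = true, every row change arrives on canal-topic as a flat JSON document, roughly like the following (values are illustrative, and the cms_content table in the zc_cms schema is an assumption matching the entity used later):

{
  "database": "zc_cms",
  "table": "cms_content",
  "pkNames": ["id"],
  "isDdl": false,
  "type": "INSERT",
  "es": 1680000000000,
  "ts": 1680000000123,
  "mysqlType": {"id": "bigint(20)", "name": "varchar(255)"},
  "data": [{"id": "1", "name": "hello canal"}],
  "old": null
}

The consumer in section 3.2 deserializes this into Canal's FlatMessage class.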

3. Install RocketMQ, then write a generic Canal-message parser and the consumer.

3.1 The docker-compose.yml for RocketMQ

version: '3'
services:
  rocketmq-namesrv:
    image: foxiswho/rocketmq:4.8.0
    container_name: rocketmq-namesrv
    restart: always
    ports:
      - 9876:9876
    volumes:
    # ./namesrv/logs is the host path (relative to docker-compose.yml); /home/rocketmq/... is the container path
      - ./namesrv/logs:/home/rocketmq/rocketmq-4.8.0/logs
      - ./namesrv/store:/home/rocketmq/rocketmq-4.8.0/store
    environment:
      JAVA_OPT_EXT: "-Duser.home=/home/rocketmq -Xms512M -Xmx512M -Xmn128m"
    command: ["sh","mqnamesrv"]
    networks:
      mynetwork:
        aliases:
          - rocketmq-namesrv


  rocketmq-broker:
    image: foxiswho/rocketmq:4.8.0
    container_name: rocketmq-broker
    restart: always
    ports:
      - 10909:10909
      - 10911:10911
    volumes:
      - ./broker/logs:/home/rocketmq/rocketmq-4.8.0/logs
      - ./broker/store:/home/rocketmq/rocketmq-4.8.0/store
      - ./broker/conf/broker.conf:/home/rocketmq/rocketmq-4.8.0/conf/broker.conf
    environment:
      JAVA_OPT_EXT: "-Duser.home=/home/rocketmq -Xms512M -Xmx512M -Xmn128m"
      # container path
    command: ["sh","mqbroker","-c","/home/rocketmq/rocketmq-4.8.0/conf/broker.conf","-n","rocketmq-namesrv:9876","autoCreateTopicEnable=true"]
    depends_on:
      - rocketmq-namesrv
    networks:
      mynetwork:
        aliases:
          - rocketmq-broker


  rocketmq-console:
    image: styletang/rocketmq-console-ng
    container_name: rocketmq-console
    restart: always
    ports:
      - 8180:8080
    environment:
      JAVA_OPTS: "-Drocketmq.namesrv.addr=rocketmq-namesrv:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false"
    depends_on:
      - rocketmq-namesrv
    networks:
      mynetwork:
        aliases:
          - rocketmq-console

networks:
  mynetwork:
    external: true
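
The compose file mounts ./broker/conf/broker.conf, which is not shown above; a minimal sketch (assuming the host IP is 192.168.180.145 — brokerIP1 must be an address that clients outside the Docker network can reach):

brokerClusterName = DefaultCluster
brokerName = broker-a
brokerId = 0
# IP advertised to producers/consumers; must be reachable from outside the containers
brokerIP1 = 192.168.180.145
deleteWhen = 04
fileReservedTime = 48
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
autoCreateTopicEnable = true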

3.2 Create the consumer: parse the messages and sync to ES

(1) Add the RocketMQ pom dependency

<dependency>
    <groupId>org.apache.rocketmq</groupId>
    <artifactId>rocketmq-spring-boot-starter</artifactId>
    <version>2.0.2</version>
</dependency>

(and the Canal client; mind the version pairing — a rocketmq-spring-boot-starter version that is too new will conflict with the canal.client version below)

<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.4</version>
</dependency>

(2) application.yml configuration

# RocketMQ settings
rocketmq:
  name-server: 192.168.180.145:9876

(3) The consumer class. esCmsServices is the ES-handling service defined in section 4.4. This is only a simple example; in production it could be abstracted and made generic.

import com.alibaba.otter.canal.protocol.FlatMessage;
import com.google.common.collect.Sets;
import org.apache.commons.beanutils.BeanUtils;
import org.apache.rocketmq.spring.annotation.RocketMQMessageListener;
import org.apache.rocketmq.spring.core.RocketMQListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;
import java.util.Set;

@RocketMQMessageListener(
        topic = "canal-topic",
        consumerGroup = "consumer-canal-topic", selectorExpression = "*")
@Component
public class RocketConsumer implements RocketMQListener<FlatMessage> {

    @Autowired
    private EsCmsServices esCmsServices;

    @Override
    public void onMessage(FlatMessage msg) {
        // ignore DDL statements; only row changes are synced
        if (msg.getIsDdl()) {
            return;
        }
        Set<CmsContent> data = getData(msg);
        if ("INSERT".equals(msg.getType())) {
            for (CmsContent cms : data) {
                esCmsServices.save(cms);
            }
        }
        if ("UPDATE".equals(msg.getType())) {
            // save() acts as an upsert in ES, so updates can reuse it
            for (CmsContent cms : data) {
                esCmsServices.save(cms);
            }
        }
        if ("DELETE".equals(msg.getType())) {
            for (CmsContent cms : data) {
                esCmsServices.deleteById(cms.getId());
            }
        }
    }

    /**
     * Convert the data rows of Canal's FlatMessage into entity objects.
     * @param flatMessage the message Canal published to MQ
     */
    protected Set<CmsContent> getData(FlatMessage flatMessage) {
        List<Map<String, String>> sourceData = flatMessage.getData();
        Set<CmsContent> targetData = Sets.newHashSetWithExpectedSize(sourceData.size());
        for (Map<String, String> map : sourceData) {
            CmsContent t = new CmsContent();
            try {
                // populate converts the String column values to the bean's field types
                BeanUtils.populate(t, map);
            } catch (Exception e) {
                e.printStackTrace();
            }
            targetData.add(t);
        }
        return targetData;
    }
}

4. Set up Elasticsearch

4.1 The docker-compose file for ES

version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.13.3
    container_name: elasticsearch
    privileged: true
    restart: always
    networks:
      - default
      - mynetwork
    environment:
      - "cluster.name=elasticsearch" # cluster name
      - "discovery.type=single-node" # single-node mode
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # JVM heap; must fit within the container memory limit below
      - bootstrap.memory_lock=true
    volumes:
      - ./elastic/plugins:/usr/share/elasticsearch/plugins # plugins
      - ./elastic/data:/usr/share/elasticsearch/data:rw # data
      - ./elastic/logs:/usr/share/elasticsearch/logs:rw # logs
    ports:
      - 9200:9200
      - 9300:9300
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 1000M
        reservations:
          memory: 200M
  kibana:
    image: kibana:7.13.3
    container_name: kibana
    restart: always
    depends_on:
      - elasticsearch # start kibana after elasticsearch
    networks:
      - default
      - mynetwork
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200 # elasticsearch endpoint
      I18N_LOCALE: zh-CN
    ports:
      - 5601:5601

networks:
  mynetwork:
    external: true
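
On most Linux hosts Elasticsearch refuses to start until vm.max_map_count is raised; if the container exits on boot, try:

# required by Elasticsearch's mmap file store; add to /etc/sysctl.conf to persist
sudo sysctl -w vm.max_map_count=262144
# make the mounted data/log directories writable by the elasticsearch user (uid 1000)
sudo chown -R 1000:1000 ./elastic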

4.2 ES pom dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.10.0</version>
</dependency>

4.3 The ES client configuration class

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ElasticConfig {

    @Bean
    public RestHighLevelClient restHighLevelClient() {
        // points the high-level client at the single-node ES started above
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("192.168.180.145", 9200, "http")));
    }
}
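
With Spring Boot 2.x you could equivalently drop this class and configure the client in application.yml (assuming Spring Boot's Elasticsearch auto-configuration is on the classpath):

spring:
  elasticsearch:
    rest:
      uris: http://192.168.180.145:9200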

4.4 The ES entity, the ElasticsearchRepository interface, and the EsCmsServices class

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

@Document(indexName = "cms_content")
public class CmsContent {

    @Id // marks the document id explicitly
    private Long id;
    private String name;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

@Repository
public interface CmsRepository extends ElasticsearchRepository<CmsContent, Long> {

    List<CmsContent> findByName(String name);

}

@Service
public class EsCmsServices {

    @Autowired
    private CmsRepository cmsRepository;

    public List<CmsContent> findByName(String name) {
        return cmsRepository.findByName(name);
    }

    public CmsContent save(CmsContent content) {
        return cmsRepository.save(content);
    }

    public void deleteById(Long id) {
        cmsRepository.deleteById(id);
    }
}
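
Before testing the whole pipeline, it can help to smoke-test the ES layer by itself. A throwaway runner (hypothetical, not part of the original code):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class EsSmokeTest implements CommandLineRunner {

    @Autowired
    private EsCmsServices esCmsServices;

    @Override
    public void run(String... args) {
        CmsContent c = new CmsContent();
        c.setId(1L);
        c.setName("smoke-test");
        esCmsServices.save(c); // index one document
        System.out.println(esCmsServices.findByName("smoke-test")); // results may lag by ES's refresh interval
        esCmsServices.deleteById(1L); // clean up
    }
}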

5. Results

Insert a row into the watched table:
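
For example (the table and columns are assumptions matching the CmsContent entity and the zc_cms filter regex):

INSERT INTO zc_cms.cms_content (id, name) VALUES (1, 'hello canal');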

RocketMQ receives the message,

and the document is synced into ES.
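
You can verify from Kibana's Dev Tools (or an equivalent HTTP request):

GET cms_content/_search
{
  "query": { "match": { "name": "hello" } }
}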
