Integrating canal with Java

canal.adapter | How it works
  • canal.adapter is driven by a mapping config file, here shop.yml;
  • shop.yml defines the SQL that canal.adapter runs whenever canal.deployer delivers a MySQL binlog event; the SQL defined there looks like this (a rough sketch of the whole shop.yml follows this list):
    select a.id,a.name,a.tags,concat(a.latitude,',',a.longitude) as location,a.remark_score,a.price_per_man,a.category_id,b.name as category_name,a.seller_id,c.remark_score as seller_remark_score,c.disabled_flag as seller_disabled_flag from shop a inner join category b on a.category_id = b.id inner join seller c on c.id = a.seller_id
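For reference, a rough sketch of what such a shop.yml might look like, following the esMapping layout of canal.adapter's ElasticSearch sync config; the dataSourceKey and destination values and the `a.id as _id` alias below are assumptions made for the sketch, not values taken from this project:

# Sketch only: a canal.adapter ES mapping file; adjust dataSourceKey/destination/_index to your setup
dataSourceKey: defaultDS
destination: example
groupId: g1
esMapping:
  _index: shop
  _id: _id
  sql: "select a.id as _id, a.name, a.tags, concat(a.latitude,',',a.longitude) as location,
        a.remark_score, a.price_per_man, a.category_id, b.name as category_name, a.seller_id,
        c.remark_score as seller_remark_score, c.disabled_flag as seller_disabled_flag
        from shop a inner join category b on a.category_id = b.id inner join seller c on c.id = a.seller_id"
  commitBatch: 3000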
canal.adapter | Problems
  • The full SQL is supposed to produce all of those fields, but canal.adapter only processes the columns that actually changed;
  • Moreover, if the SQL contains both a.name and b.name, canal.adapter cannot tell that these are two columns from two different tables, and it updates both a.name and b.name to the new value;
canal.adapter | Problems | Verification steps
  • Delete all documents from the shop index in ElasticSearch;
  • Change the value of the name field of one record in the shop table in MySQL;
  • The document that canal.adapter then indexes into ElasticSearch contains only the updated name field (a small sketch for checking this from Java follows this list);
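A minimal sketch for the last step, assuming an ElasticSearch 7.x RestHighLevelClient on 127.0.0.1:9200 and that the renamed shop row has id 1 (both assumptions; adjust to your data):

import org.apache.http.HttpHost;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ShopDocCheck {
    public static void main(String[] args) throws Exception {
        // Assumptions: ES 7.x on 127.0.0.1:9200, and the renamed shop row has id 1.
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")))) {
            GetResponse response = client.get(new GetRequest("shop", "1"), RequestOptions.DEFAULT);
            // With canal.adapter, the source map holds only the changed column (name),
            // not the full document produced by the join SQL.
            System.out.println(response.getSourceAsMap());
        }
    }
}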
canal.adapter | Problems | Solution
  • Stop using canal.adapter to sync data into ElasticSearch;
  • Instead, add the canal client dependencies and, in the Java application, write custom code that indexes the binlog delivered by canal.deployer into ElasticSearch;
canal.adapter | Suitable scenarios
  • If all you need is a simple one-to-one mapping between a MySQL table and an ElasticSearch index, canal.adapter is still a reasonable choice;

Adding canal to Spring Boot

  • The latest dependency version in the mvn repository is currently 1.1.4, which happens to be the canal version I compiled earlier, so there is no need to bring up a different version of canal.deployer;
  • Note that the canal.deployer version must match the version of the dependencies you pull in;
canal | Dependencies
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.4</version>
</dependency>
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.common</artifactId>
    <version>1.1.4</version>
</dependency>
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.protocol</artifactId>
    <version>1.1.4</version>
</dependency>
<dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>3.5.1</version>
</dependency>
Bean | Connecting to canal.deployer
package tech.lixinlei.dianping.canal;

import com.alibaba.google.common.collect.Lists;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

import java.net.InetSocketAddress;

@Component
public class CanalClient implements DisposableBean{

    private CanalConnector canalConnector;

    @Bean
    public CanalConnector getCanalConnector(){
        canalConnector = CanalConnectors.newClusterConnector(Lists.newArrayList(
                new InetSocketAddress("127.0.0.1", 11111)),
                "example",
                "canal",
                "canal"
        );
        canalConnector.connect();
        // Specify the filter in {database}.{table} format; calling subscribe() with no argument subscribes to everything
        canalConnector.subscribe();
        // Roll back to find the position where consumption was last interrupted
        canalConnector.rollback();
        
        return canalConnector;
    }

    /**
     * Called when the Spring container is destroyed: disconnect from canal.deployer.
     * @throws Exception
     */
    @Override
    public void destroy() throws Exception {
        if(canalConnector != null){
            canalConnector.disconnect();
        }
    }

}
Bean | Periodically read the binlog from canal.deployer, parse it, and index it into ElasticSearch
package tech.lixinlei.dianping.canal;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;
import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import tech.lixinlei.dianping.dal.ShopModelMapper;

import javax.annotation.Resource;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Component
public class CanalScheduling implements Runnable, ApplicationContextAware {

    private ApplicationContext applicationContext;

    @Autowired
    private ShopModelMapper shopModelMapper;

    @Resource
    private CanalConnector canalConnector;

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    @Override
    @Scheduled(fixedDelay = 100)
    public void run() {
        System.out.println("run");
        long batchId = -1;
        try{
            int batchSize = 1000;
            Message message = canalConnector.getWithoutAck(batchSize);
            batchId = message.getId();
            List<CanalEntry.Entry> entries = message.getEntries();
            if(batchId != -1 && entries.size() > 0){
                entries.forEach(entry -> {
                    if(entry.getEntryType() == CanalEntry.EntryType.ROWDATA){
                        // parse the row change and index it into ElasticSearch
                        publishCanalEvent(entry);
                    }
                });
            }
            canalConnector.ack(batchId);
        }catch(Exception e){
            e.printStackTrace();
            canalConnector.rollback(batchId);
        }
    }

    /**
     * Parse one binlog entry into the rows it affected (a RowChange), then walk those
     * rows one by one (rowData), convert each row's column List into a Map, and hand
     * the Map to the indexES method to be indexed into ElasticSearch;
     * @param entry one entry from the binlog;
     */
    private void publishCanalEvent(CanalEntry.Entry entry){
        CanalEntry.EventType eventType = entry.getHeader().getEventType();
        String database = entry.getHeader().getSchemaName();
        String table = entry.getHeader().getTableName();
        CanalEntry.RowChange change = null;
        try {
            change = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
        } catch (InvalidProtocolBufferException e) {
            e.printStackTrace();
            return;
        }
        change.getRowDatasList().forEach(rowData -> {
            List<CanalEntry.Column> columns = rowData.getAfterColumnsList();
            String primaryKey = "id";
            CanalEntry.Column idColumn = columns.stream().filter(column -> column.getIsKey()
                    && primaryKey.equals(column.getName())).findFirst().orElse(null);

            Map<String,Object> dataMap = parseColumnsToMap(columns);
            try{
                indexES(dataMap, database, table);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }

    Map<String,Object> parseColumnsToMap(List<CanalEntry.Column> columns){
        Map<String,Object> jsonMap = new HashMap<>();
        columns.forEach(column -> {
            if(column == null){
                return;
            }
            jsonMap.put(column.getName(), column.getValue());
        });
        return jsonMap;
    }

    private void indexES(Map<String,Object> dataMap, String database, String table) throws IOException {
        if(!StringUtils.equals("dianping", database)){
            return;
        }

        // the rows queried here contain every field, unlike canal.adapter, which only sees the changed columns;
        List<Map<String,Object>> result = new ArrayList<>();
        if(StringUtils.equals("seller", table)) {
            result = shopModelMapper.buildESQuery(new Integer((String)dataMap.get("id")), null, null);
        } else if (StringUtils.equals("category", table)){
            result = shopModelMapper.buildESQuery(null, new Integer((String)dataMap.get("id")), null);
        } else if (StringUtils.equals("shop", table)){
            result = shopModelMapper.buildESQuery(null, null, new Integer((String)dataMap.get("id")));
        } else {
            return;
        }

        // call the ES API to index the changed MySQL data into ElasticSearch
        for(Map<String,Object> map : result){
            IndexRequest indexRequest = new IndexRequest("shop");
            indexRequest.id(String.valueOf(map.get("id")));
            indexRequest.source(map);
            restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
        }

    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

}
Enable scheduled tasks at the Spring Boot level
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
import org.springframework.scheduling.annotation.EnableScheduling;

@SpringBootApplication(scanBasePackages = {"tech.lixinlei.dianping"})
@MapperScan("tech.lixinlei.dianping.dal")
@EnableAspectJAutoProxy(proxyTargetClass = true)
@EnableScheduling
public class DianpingApplication {

    public static void main(String[] args) {
        SpringApplication.run(DianpingApplication.class, args);
    }

}
The SQL that, based on what the binlog says changed, queries the data that needs to be updated in ElasticSearch
import org.apache.ibatis.annotations.Param;

import java.util.List;
import java.util.Map;

public interface ShopModelMapper {

    List<Map<String,Object>> buildESQuery(@Param("sellerId")Integer sellerId,
                                          @Param("categoryId")Integer categoryId,
                                          @Param("shopId")Integer shopId);

}
<select id="buildESQuery" resultType="java.util.Map">
  select a.id,a.name,a.tags,concat(a.latitude,',',a.longitude) as location,
  a.remark_score,a.price_per_man,a.category_id,b.name as category_name,a.seller_id,
  c.remark_score as seller_remark_score,c.disabled_flag as seller_disabled_flag
  from shop a inner join category b on a.category_id = b.id inner join seller c on c.id = a.seller_id
  <where>
    <if test="sellerId != null">
      and c.id = #{sellerId}
    </if>
    <if test="categoryId != null">
      and b.id = #{categoryId}
    </if>
    <if test="shopId != null">
      and a.id = #{shopId}
    </if>
  </where>
</select>