1.0) meta is an interface with several implementations,
if (!metaManager.isStart()) { metaManager.start(); }
Because the configuration file already specifies which implementation to use, we end up in the FileMixedMetaManager implementation.
Let's first look at the whole start method:
public void start() {
    super.start();
    Assert.notNull(dataDir);
    if (!dataDir.exists()) {
        try {
            FileUtils.forceMkdir(dataDir);
        } catch (IOException e) {
            throw new CanalMetaManagerException(e);
        }
    }
    if (!dataDir.canRead() || !dataDir.canWrite()) {
        throw new CanalMetaManagerException("dir[" + dataDir.getPath() + "] can not read/write");
    }

    dataFileCaches = MigrateMap.makeComputingMap(new Function<String, File>() {

        public File apply(String destination) {
            return getDataFile(destination);
        }
    });

    executor = Executors.newScheduledThreadPool(1);
    destinations = MigrateMap.makeComputingMap(new Function<String, List<ClientIdentity>>() {

        public List<ClientIdentity> apply(String destination) {
            return loadClientIdentity(destination);
        }
    });

    cursors = MigrateMap.makeComputingMap(new Function<ClientIdentity, Position>() {

        public Position apply(ClientIdentity clientIdentity) {
            Position position = loadCursor(clientIdentity.getDestination(), clientIdentity);
            if (position == null) {
                return nullCursor; // return an empty marker object to avoid exceptions
            } else {
                return position;
            }
        }
    });

    updateCursorTasks = Collections.synchronizedSet(new HashSet<ClientIdentity>());

    // start the periodic flush task
    executor.scheduleAtFixedRate(new Runnable() {

        public void run() {
            List<ClientIdentity> tasks = new ArrayList<ClientIdentity>(updateCursorTasks);
            for (ClientIdentity clientIdentity : tasks) {
                MDC.put("destination", String.valueOf(clientIdentity.getDestination()));
                try {
                    // periodically flush the latest in-memory value to file; multiple changes are flushed only once
                    if (logger.isInfoEnabled()) {
                        LogPosition cursor = (LogPosition) getCursor(clientIdentity);
                        logger.info("clientId:{} cursor:[{},{},{},{},{}] address[{}]",
                            new Object[] { clientIdentity.getClientId(), cursor.getPostion().getJournalName(),
                                    cursor.getPostion().getPosition(), cursor.getPostion().getTimestamp(),
                                    cursor.getPostion().getServerId(), cursor.getPostion().getGtid(),
                                    cursor.getIdentity().getSourceAddress().toString() });
                    }
                    flushDataToFile(clientIdentity.getDestination());
                    updateCursorTasks.remove(clientIdentity);
                } catch (Throwable e) {
                    // ignore
                    logger.error("period update" + clientIdentity.toString() + " curosr failed!", e);
                }
            }
        }
    }, period, period, TimeUnit.MILLISECONDS);
}
It checks whether the data directory exists and is readable/writable, initializes the lazily-computed maps, and starts a scheduled task that first fires after one second and then runs once per second.
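MigrateMap.makeComputingMap returns a map that computes the value for a key on first access and caches it for later lookups. A rough sketch of the same pattern using only the JDK (the class and file name below are illustrative, not Canal's helper):

import java.io.File;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustration of a lazily-computing map, the pattern makeComputingMap provides.
public class ComputingMapSketch {

    private final Map<String, File> dataFileCaches = new ConcurrentHashMap<>();
    private final Function<String, File> loader = destination -> new File("meta-" + destination + ".dat");

    public File get(String destination) {
        // compute and cache the value on first access, reuse it afterwards
        return dataFileCaches.computeIfAbsent(destination, loader);
    }
}

The real cursors map works the same way, except its loader reads the position back from file via loadCursor and falls back to nullCursor when nothing is stored yet.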
Inside the scheduled task, the file handling looks like this:
private void flushDataToFile(String destination) {
    flushDataToFile(destination, dataFileCaches.get(destination));
}

private void flushDataToFile(String destination, File dataFile) {
    FileMetaInstanceData data = new FileMetaInstanceData();
    if (destinations.containsKey(destination)) {
        synchronized (destination.intern()) { // synchronize on destination to control concurrent updates
            data.setDestination(destination);

            List<FileMetaClientIdentityData> clientDatas = Lists.newArrayList();
            List<ClientIdentity> clientIdentitys = destinations.get(destination);
            for (ClientIdentity clientIdentity : clientIdentitys) {
                FileMetaClientIdentityData clientData = new FileMetaClientIdentityData();
                clientData.setClientIdentity(clientIdentity);
                Position position = cursors.get(clientIdentity);
                if (position != null && position != nullCursor) {
                    clientData.setCursor((LogPosition) position);
                }

                clientDatas.add(clientData);
            }

            data.setClientDatas(clientDatas);
        }

        String json = JsonUtils.marshalToString(data);
        try {
            FileUtils.writeStringToFile(dataFile, json);
        } catch (IOException e) {
            throw new CanalMetaManagerException(e);
        }
    }
}
The class comment sums up the strategy:
* 1. Write to memory first, then periodically flush the data to file.
* 2. The data file is written in overwrite mode (only the last version is kept); an append mode that records the historical versions is provided through the logger.
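As a side illustration of that two-channel idea (overwrite a snapshot, append a history), here is a minimal self-contained sketch; the class, file names and period are hypothetical, not Canal's implementation:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical "overwrite snapshot + append history" writer.
public class SnapshotWriterSketch {

    private final AtomicReference<String> latestJson = new AtomicReference<>("{}");
    private final Path snapshot = Paths.get("meta.dat");      // overwritten on every flush
    private final Path history  = Paths.get("meta.history");  // appended on every update

    // Callers update the in-memory state first and append one history record.
    public void update(String json) throws Exception {
        latestJson.set(json);
        Files.write(history, (json + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // A timer flushes only the latest value, so many updates collapse into one write.
    public void startFlusher(long periodMs) {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try {
                Files.write(snapshot, latestJson.get().getBytes(StandardCharsets.UTF_8));
            } catch (Exception e) {
                // ignore and retry on the next tick
            }
        }, periodMs, periodMs, TimeUnit.MILLISECONDS);
    }
}

In Canal itself the "history" channel is simply the logger.info call in the scheduled task, so the historical positions end up in the log files rather than in a separate data file.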
The file meta.dat stores the last binlog position the client has successfully consumed, together with its timestamp and instance information. The directory sits at the same level as canal and is configurable (set in canal.instance):
canal.conf.dir = ../conf
The content of meta.dat looks like this:
{"clientDatas":[{"clientIdentity":{"clientId":1001,"destination":"example","filter":""},"cursor":{"identity":{"slaveId":-1,"sourceAddress":{"address":"DESKTOP-B1R6VMO","port":3306}},"postion":{"gtid":"","included":false,"journalName":"mysql-bin.000009","position":6218,"serverId":1,"timestamp":1527665906000}}}],"destination":"example"}
1.1) alarm does not do any real handling; it just prints to the log.
if (!alarmHandler.isStart()) { alarmHandler.start(); }
1.2) store initializes the in-memory ring buffer: it allocates a new Event array of size 16384 (the default bufferSize) and sets indexMask to bufferSize - 1.
if (!eventStore.isStart()) { eventStore.start(); }
public void start() throws CanalStoreException {
    super.start();
    if (Integer.bitCount(bufferSize) != 1) {
        throw new IllegalArgumentException("bufferSize must be a power of 2");
    }
    indexMask = bufferSize - 1;
    entries = new Event[bufferSize];
}
The CanalEntry data held in that array is serialized with Google's Protobuf.
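Because bufferSize must be a power of two, the store can map an ever-growing sequence number onto an array slot with a single bitwise AND instead of a modulo. A minimal illustration of that indexing trick (placeholder names, not the actual CanalEventStore code):

// Minimal ring-buffer indexing sketch; "Event" here is just a placeholder type.
public class RingBufferSketch {

    static class Event { long value; }

    private final int     bufferSize = 16384;            // must be a power of 2
    private final int     indexMask  = bufferSize - 1;   // 16383, i.e. all low bits set
    private final Event[] entries    = new Event[bufferSize];
    private long putSequence = -1;                        // ever-increasing write sequence

    public void put(Event event) {
        long next = ++putSequence;
        // sequence & indexMask == sequence % bufferSize, but cheaper
        entries[(int) (next & indexMask)] = event;
    }

    public Event get(long sequence) {
        return entries[(int) (sequence & indexMask)];
    }
}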
1.3) sink
if (!eventSink.isStart()) { eventSink.start(); }
public void start() {
    super.start();
    Assert.notNull(eventStore);
    for (CanalEventDownStreamHandler handler : getHandlers()) {
        if (!handler.isStart()) {
            handler.start();
        }
    }
}
1.4) parse
if (!eventParser.isStart()) {
    beforeStartEventParser(eventParser);
    eventParser.start();
    afterStartEventParser(eventParser);
}
beforeStartEventParser does the pre-parse preparation: it starts the log position manager and the HA controller.
protected void startEventParserInternal(CanalEventParser eventParser, boolean isGroup) {
    if (eventParser instanceof AbstractEventParser) {
        AbstractEventParser abstractEventParser = (AbstractEventParser) eventParser;
        // start the log position manager first
        CanalLogPositionManager logPositionManager = abstractEventParser.getLogPositionManager();
        if (!logPositionManager.isStart()) {
            logPositionManager.start();
        }
    }

    if (eventParser instanceof MysqlEventParser) {
        MysqlEventParser mysqlEventParser = (MysqlEventParser) eventParser;
        CanalHAController haController = mysqlEventParser.getHaController();
        if (haController instanceof HeartBeatHAController) {
            ((HeartBeatHAController) haController).setCanalHASwitchable(mysqlEventParser);
        }

        if (!haController.isStart()) {
            haController.start();
        }
    }
}
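setCanalHASwitchable hands the parser to the HeartBeatHAController so that, when heartbeats keep failing, the controller can ask the parser to switch to the standby data source. A rough sketch of that failure-counting idea (the interface, names and threshold below are hypothetical, not Canal's API):

// Hypothetical heartbeat-driven HA switch; names and threshold are illustrative only.
public class HeartbeatHaSketch {

    interface Switchable { void switchToStandby(); }

    private final Switchable parser;
    private final int failThreshold = 3;   // assumed threshold; configurable in the real controller
    private int failedCount = 0;

    public HeartbeatHaSketch(Switchable parser) {
        this.parser = parser;
    }

    // called with every heartbeat result reported by the parser
    public synchronized void onHeartbeat(boolean success) {
        if (success) {
            failedCount = 0;
            return;
        }
        if (++failedCount >= failThreshold) {
            failedCount = 0;
            parser.switchToStandby();   // trigger the master/standby switch
        }
    }
}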
1.4.2) Here comes the more involved code: connecting to the master, sending the dump command, parsing the binlog, and so on.
public void start() throws CanalParseException {
    if (runningInfo == null) { // first connection to the master
        runningInfo = masterInfo;
    }

    super.start();
}
public void start() throws CanalParseException {
    if (enableTsdb) {
        if (tableMetaTSDB == null) {
            // initialize tabl