canal 1.1.4 HA Principles & canal-admin Deployment Example

1 Canal Server Cluster Mode

1.1 HA Design

Canal's HA consists of two parts; canal server and canal client each have their own HA implementation:

  • canal server: to reduce the load of mysql dump requests, only one instance across the different servers may be running at any given time; the others stay in standby.
  • canal client: to preserve ordering, only one canal client may perform get/ack/rollback operations on a given instance at any time; otherwise the client cannot receive events in order.

The entire HA mechanism relies mainly on a few features of zookeeper.

Canal Server:

Rough steps:

  • Whenever a canal server wants to start a canal instance, it first makes a startup attempt against zookeeper (implementation: create an EPHEMERAL node; whoever creates it successfully is allowed to start).
  • After creating the zookeeper node successfully, the corresponding canal server starts the canal instance; the servers that failed to create the node keep their canal instance in standby.
  • Once zookeeper detects that the node created by canal server A has disappeared, it immediately notifies the other canal servers to repeat step 1 and elect a new canal server to start the instance.
  • Every time a canal client connects, it first asks zookeeper which server is currently running the canal instance, then establishes a connection with it; if the connection becomes unavailable, it retries the connect (see the client sketch below).
  • The Canal Client side works much like the canal server side: it also relies on preempting EPHEMERAL nodes in zookeeper for control.
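
To make the client side concrete, here is a minimal sketch of an HA consumer using canal's Java client API (CanalConnectors.newClusterConnector). The zookeeper addresses and the destination name "example" are placeholders; the connector resolves the currently running server from zookeeper and reconnects when that server changes.

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;

public class ClusterClientDemo {
    public static void main(String[] args) throws InterruptedException {
        // Looks up the currently running server for "example" in zookeeper;
        // reconnects automatically when the running server changes.
        CanalConnector connector = CanalConnectors.newClusterConnector(
                "192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181",
                "example", "", "");
        try {
            connector.connect();
            connector.subscribe(".*\\..*");                     // all schemas and tables
            while (true) {
                Message message = connector.getWithoutAck(100); // fetch up to 100 entries
                long batchId = message.getId();
                if (batchId == -1 || message.getEntries().isEmpty()) {
                    Thread.sleep(1000);                         // nothing new yet
                } else {
                    // process message.getEntries() ...
                    connector.ack(batchId);                     // advance the consume cursor in zookeeper
                }
            }
        } finally {
            connector.disconnect();
        }
    }
}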

 

1.2 Key Configuration Changes

Edit conf/canal.properties. This is canal's global configuration file, used to configure the working mode, zookeeper, kafka, and so on.

If you deploy the cluster through canal-admin, it is enough to edit conf/canal_local.properties; the specific changes are described below.

# zookeeper cluster addresses

canal.zkServers = 192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181 

# the default file-instance does not support HA persistence; switch to default-instance

canal.instance.global.spring.xml = classpath:spring/default-instance.xml

2 Canal Admin

2.1 Design Concepts

The core models of canal-admin are:

1. instance: corresponds to an instance inside canal-server, the smallest unit of a mysql subscription queue

2. server: corresponds to a canal-server; one server can contain multiple instances

3. cluster: corresponds to a group of canal-servers, grouped together for high-availability (HA) operations

 

Brief explanation:

1. Since an instance represents the original business subscription requirement, it is associated with a server or a cluster (the two resource-oriented concepts), e.g. instance A is bound to server A or to cluster A.

2. Once a task is bound to a resource, the corresponding resource service receives the task configuration and dynamically loads the instance on that resource. This dynamic loading is somewhat like the earlier autoScan mechanism, except that with canal-admin it becomes a remote web operation rather than maintaining configuration files on each machine.

3. After servers are abstracted as resources, the canal.properties/instance.properties files that canal-server needs at runtime are maintained centrally in the web UI; each server only needs a minimal bootstrap configuration (for example, the canal-admin manager address plus the account and password used to fetch its configuration).

2.2 Installation and Deployment

1. Install and deploy the canal-admin UI

Download canal-admin: go to the release page and pick the package you need; version 1.1.4 is used as the example here.

wget https://github.com/alibaba/canal/releases/download/canal-1.1.4/canal.admin-1.1.4.tar.gz

Extract the archive

mkdir -p /data/jkpt40/lib/canal-admin

tar -zxvf canal.admin-1.1.4.tar.gz -C /data/jkpt40/lib/canal-admin

After extraction, enter the canal-admin directory; you should see subdirectories such as bin, conf, lib, and logs.

Edit the configuration: vi conf/application.yml
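
For orientation, the shipped application.yml looks roughly like the sketch below (a minimal sketch with the default values, other defaults omitted). The fields you normally adjust are server.port, the spring.datasource connection settings for the canal_manager metadata database, and the canal.adminUser/adminPasswd pair used to log in to the web UI:

server:
  port: 8089

spring.datasource:
  address: 127.0.0.1:3306
  database: canal_manager
  username: canal
  password: canal
  driver-class-name: com.mysql.jdbc.Driver
  url: jdbc:mysql://${spring.datasource.address}/${spring.datasource.database}?useUnicode=true&characterEncoding=UTF-8&useSSL=false

canal:
  adminUser: admin
  adminPasswd: admin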

Initialize the metadata database

# import the initialization SQL

> source conf/canal_manager.sql

a. The initialization SQL script creates the canal_manager database by default; it is recommended to run it with an account that has super privileges, such as root.
b. canal_manager.sql sits in the conf directory by default; it can also be downloaded via the canal_manager.sql link.
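
A minimal way to run the import from a mysql client, assuming a local MySQL and a root account (adjust the host and credentials to your environment):

mysql -h127.0.0.1 -uroot -p

mysql> source conf/canal_manager.sql

# or non-interactively from the shell:
mysql -h127.0.0.1 -uroot -p < conf/canal_manager.sql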

Start canal-admin

 sh bin/startup.sh

Open canal-admin at http://127.0.0.1:8090 (replace with your own ip and the port set in application.yml). At this point, the canal-admin setup is complete.

2. Install and deploy canal-server

Preparation: enable mysql binlog

Edit the my.cnf configuration file to enable binlog

[mysqld] 

log-bin=mysql-bin #adding this line is enough

binlog-format=ROW #choose row mode

server_id=1 #required for mysql replication; must not clash with canal's slaveId

Restart mysql and verify that the configuration has taken effect
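
After the restart, standard MySQL statements can confirm that the settings took effect, for example:

SHOW VARIABLES LIKE 'log_bin';        -- expect ON
SHOW VARIABLES LIKE 'binlog_format';  -- expect ROW
SHOW MASTER STATUS;                   -- current binlog file name and position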

Canal works by presenting itself as a mysql slave, so the account must be granted the privileges required of a mysql slave.

CREATE USER canal IDENTIFIED BY 'canal';    

GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';   -- GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%' ; 

 FLUSH PRIVILEGES;
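
You can verify the account and its privileges afterwards, e.g.:

SHOW GRANTS FOR 'canal'@'%';
-- expect: GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%'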

This concludes the preparation work.

Log in to canal-admin and create a new cluster

Add/modify the cluster configuration

Select the newly created cluster and use Operation -> Main Configuration to edit its configuration.

Click Load Template to load the default configuration, then modify the following items:

#in cluster mode, communication goes through zookeeper

canal.zkServers =  

#multiple destinations can be configured; each destination gets its own directory

canal.destinations =

#in cluster mode you must select default-instance.xml

#canal.instance.global.spring.xml = classpath:spring/file-instance.xml

canal.instance.global.spring.xml = classpath:spring/default-instance.xml

Note: everything else can stay at its default value; no changes needed.

MySQL master-master (M-M) mode is supported, with automatic failover when one master goes down:

https://github.com/alibaba/canal/wiki/AdminGuide#mysql%E5%A4%9A%E8%8A%82%E7%82%B9%E8%A7%A3%E6%9E%90%E9%85%8D%E7%BD%AEparse%E8%A7%A3%E6%9E%90%E8%87%AA%E5%8A%A8%E5%88%87%E6%8D%A2

Canal keeps listening across table schema changes (supported by default):

https://github.com/alibaba/canal/wiki/TableMetaTSDB

Edit canal_local.properties in the canal server conf directory

canal.register.ip=

canal.admin.manager=

#must match the passwd column in canal_manager.canal_user (see the example below)

canal.admin.passwd=
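
For reference, the value expected here is a mysql-style password hash with the leading '*' removed, matching what is stored in canal_manager.canal_user.passwd. Assuming a MySQL 5.x client (where the PASSWORD() function is still available), the default admin password hashes as follows:

select password('admin');
-- -> *4ACFE3202A5FF5CF467898FC58AAB1D615029441
-- drop the leading '*', i.e. canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441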

Start the canal server

sh bin/restart.sh local

#with the local startup argument, canal reads canal_local.properties instead of canal.properties

View the servers registered to this cluster

Create a new instance

Add the instance configuration

#master node

canal.instance.master.address=172.31.xx.xx:53306

#standby node (can be left unconfigured if there is none)

canal.instance.standby.address = 172.31.xx.xx:53306

canal.instance.dbUsername=canal

canal.instance.dbPassword=canal

 

Full canal.properties configuration

#################################################

#########              common argument        #############

#################################################

# tcp bind ip

#canal.ip = 172.31.xx.xx

# register ip to zookeeper

#canal.register.ip = 172.31.xx.xx

canal.port = 11111

canal.metrics.pull.port = 11112

# canal instance user/passwd

# canal.user = canal

# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config

#canal.admin.manager = 127.0.0.1:8089

canal.admin.port = 11110

canal.admin.user = admin

canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441

canal.zkServers = zookeeper01.host:2181,zookeeper02.host:2181,zookeeper03.host:2181

# flush data to zk

canal.zookeeper.flush.period = 1000

canal.withoutNetty = false

# tcp, kafka, RocketMQ

canal.serverMode = tcp

# flush meta cursor/parse position to file

canal.file.data.dir = ${canal.conf.dir}

canal.file.flush.period = 1000

## memory store RingBuffer size, should be Math.pow(2,n)

canal.instance.memory.buffer.size = 16384

## memory store RingBuffer used memory unit size , default 1kb

canal.instance.memory.buffer.memunit = 1024

## memory store gets mode, MEMSIZE or ITEMSIZE

canal.instance.memory.batch.mode = MEMSIZE

canal.instance.memory.rawEntry = true

## detecting config

canal.instance.detecting.enable = true

#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()

canal.instance.detecting.sql = select 1

canal.instance.detecting.interval.time = 3

canal.instance.detecting.retry.threshold = 3

canal.instance.detecting.heartbeatHaEnable = true

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery

canal.instance.transaction.size =  1024

# mysql fallback connected to new master should fallback times

canal.instance.fallbackIntervalInSeconds = 60

# network config

canal.instance.network.receiveBufferSize = 16384

canal.instance.network.sendBufferSize = 16384

canal.instance.network.soTimeout = 30

# binlog filter config

canal.instance.filter.druid.ddl = true

canal.instance.filter.query.dcl = false

canal.instance.filter.query.dml = false

canal.instance.filter.query.ddl = false

canal.instance.filter.table.error = false

canal.instance.filter.rows = false

canal.instance.filter.transaction.entry = false

# binlog format/image check

canal.instance.binlog.format = ROW,STATEMENT,MIXED

canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation

canal.instance.get.ddl.isolation = false

# parallel parser config

canal.instance.parser.parallel = true

## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()

canal.instance.parser.parallelThreadSize = 16

## disruptor ringbuffer size, must be power of 2

canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info

canal.instance.tsdb.enable = true

canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}

canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;

canal.instance.tsdb.dbUsername = canal

canal.instance.tsdb.dbPassword = canal

# dump snapshot interval, default 24 hour

canal.instance.tsdb.snapshot.interval = 24

# purge snapshot expire , default 360 hour(15 days)

canal.instance.tsdb.snapshot.expire = 360

# aliyun ak/sk , support rds/mq

canal.aliyun.accessKey =

canal.aliyun.secretKey =

#################################################

#########              destinations           #############

#################################################

canal.destinations = databasename

# conf root dir

canal.conf.dir = ../conf

# auto scan instance dir add/remove and start/stop instance

canal.auto.scan = true

canal.auto.scan.interval = 5

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml

#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring

canal.instance.global.lazy = false

canal.instance.global.manager.address = ${canal.admin.manager}

#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml

#canal.instance.global.spring.xml = classpath:spring/file-instance.xml

canal.instance.global.spring.xml = classpath:spring/default-instance.xml

 

#################################################

#########                   MQ                  #############

##################################################

canal.mq.servers = 

canal.mq.retries = 0

canal.mq.batchSize = 16384

canal.mq.maxRequestSize = 1048576

canal.mq.lingerMs = 100

canal.mq.bufferMemory = 33554432

canal.mq.canalBatchSize = 50

canal.mq.canalGetTimeout = 100

canal.mq.flatMessage = true

canal.mq.compressionType = none

canal.mq.acks = all

#canal.mq.properties. =

canal.mq.producerGroup = test

# Set this value to "cloud", if you want open message trace feature in aliyun.

canal.mq.accessChannel = local

# aliyun mq namespace

#canal.mq.namespace =

 

##################################################

#########     Kafka Kerberos Info    #############

##################################################

canal.mq.kafka.kerberos.enable = false

canal.mq.kafka.kerberos.krb5FilePath = "../conf/kerberos/krb5.conf"

canal.mq.kafka.kerberos.jaasFilePath = "../conf/kerberos/jaas.conf"

 

Full instance.properties configuration

#################################################

## mysql serverId , v1.0.26+ will autoGen

# canal.instance.mysql.slaveId=0

# enable gtid use true/false

canal.instance.gtidon=true

# position info

canal.instance.master.address=172.31.xx.xx:53306

canal.instance.master.journal.name=

canal.instance.master.position=

canal.instance.master.timestamp=

canal.instance.master.gtid=

# rds oss binlog

canal.instance.rds.accesskey=

canal.instance.rds.secretkey=

canal.instance.rds.instanceId=

# table meta tsdb info

canal.instance.tsdb.enable=true

#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb

#canal.instance.tsdb.dbUsername=canal

#canal.instance.tsdb.dbPassword=canal

#standby node

canal.instance.standby.address = 172.31.xx.xx:53306

#canal.instance.standby.journal.name =

#canal.instance.standby.position =

#canal.instance.standby.timestamp =

#canal.instance.standby.gtid=

# username/password

canal.instance.dbUsername=canal

canal.instance.dbPassword=canal

canal.instance.connectionCharset = UTF-8

# enable druid Decrypt database password

canal.instance.enableDruid=false

#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex

#canal.instance.filter.regex=.*\\..*

canal.instance.filter.regex=db_name.table_name

# table black regex

canal.instance.filter.black.regex=

# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)

#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch

# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)

#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch=

# mq config

canal.mq.topic=example

# dynamic topic route by schema or table regex

#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*

canal.mq.partition=0

# hash partition config

#canal.mq.partitionsNum=3

#canal.mq.partitionHash=test.table:id^name,.*\\..*

#################################################

2.3 Verification

Shut down the server on which the instance is currently running; the instance automatically fails over to another available server. You can confirm the switch in zookeeper, as shown below.
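
One hedged way to observe the failover, assuming the zookeeper layout used by default-instance.xml: the running node under /otter/canal/destinations/<destination>/running records which server currently holds the instance (the destination name below is the placeholder from the configuration above).

# on any host with the zookeeper client
sh zkCli.sh -server 192.168.xx.xx:2181

[zk: ...] get /otter/canal/destinations/databasename/running
# -> {"active":true,"address":"192.168.xx.xx:11111"}

# stop the canal server shown in "address", wait a moment, then run the get again:
# the address should now point to another server in the cluster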

 
