Redis - Keeping the Database and Cache Consistent - How Do We Ensure Both Steps Succeed?

As the article "缓存和数据库一致性问题,看这篇就够了" (Cache and Database Consistency: This One Article Is Enough) puts it: whether you update the cache or delete the cache, if the second step fails, the database and the cache end up inconsistent.

1. Retry

2. Asynchronous retry (see the sketch after this list)

3. Subscribe to the binlog
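A quick sketch of option 2 (asynchronous retry), assuming Spring, rocketmq-spring-boot-starter, and a hypothetical cache_retry_topic: if deleting the cache fails after the database write, hand the key to MQ and let a consumer retry the delete later.

import org.apache.rocketmq.spring.core.RocketMQTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class CacheDeleter {

    @Autowired
    private StringRedisTemplate redisTemplate;

    @Autowired
    private RocketMQTemplate rocketMQTemplate;

    public void updateAndInvalidate(String cacheKey) {
        // step 1: update the database (omitted)
        try {
            redisTemplate.delete(cacheKey);            // step 2: delete the cache
        } catch (Exception e) {
            // step 2 failed: enqueue the key so a consumer can retry the delete
            rocketMQTemplate.syncSend("cache_retry_topic", cacheKey);
        }
    }
}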

        Concretely: when our business application modifies data, it "only" needs to write the database, and never touches the cache directly.

So when does the cache get operated on? That is where the database's change log comes in.

Take MySQL as an example: whenever a row is modified, MySQL produces a change-log entry (the binlog). We can subscribe to this log, extract the data that was touched, and then delete the corresponding cache entry.

We use the canal middleware (which poses as a MySQL replica) to subscribe to the database change log and forward each change to a downstream message queue: MySQL binlog → canal → RocketMQ → consumer, which deletes the Redis key.

Environment preparation:

MySQL database:   1. Enable the binlog in ROW mode; set server-id (e.g. server-id = 1) so that it does not collide with canal's slaveId.

                        2. Authorize a canal account (a minimal sketch of both steps follows).
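Following the canal QuickStart wiki, the two steps look roughly like this (values are examples; adjust them to your environment):

# my.cnf / my.ini - enable row-format binlog
[mysqld]
log-bin=mysql-bin
binlog-format=ROW
server-id=1          # must differ from canal.instance.mysql.slaveId

-- create and authorize the canal account
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;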

canal download, installation, and configuration:

                        1. Download and install canal.

                        2. Configure it, using RocketMQ as the example MQ; the entries that must be adjusted are marked "needs to be changed" in the files below.

                                conf/canal.properties

#################################################
######### 		common argument		#############
#################################################
# tcp bind ip
canal.ip = 192.168.43.231
# register ip to zookeeper
#canal.register.ip =
canal.port = 11111
#canal.metrics.pull.port = 11112
# canal instance user/passwd
canal.user = canal
canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

#canal.zkServers =
# flush data to zk
#canal.zookeeper.flush.period = 1000
#canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ
# needs to be changed
canal.serverMode = rocketMQ
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024 
## memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size =  1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED 
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
######### 		destinations		#############
#################################################
#canal.destinations = example
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: please keep 'false' in production env, unless you know what you are doing.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
######### 	      MQ Properties      #############
##################################################
# aliyun ak/sk , support rds/mq  (needs to be changed)
canal.aliyun.accessKey = LTAIs4e56kBVE9
canal.aliyun.secretKey = pJPGAtvJKGWsS
canal.aliyun.uid=
canal.mq.servers = 192.168.56.1:9876
canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
######### 		     Kafka 		     #############
##################################################
# kafka.bootstrap.servers = 127.0.0.1:9092
# kafka.acks = all
# kafka.compression.type = none
# kafka.batch.size = 16384
# kafka.linger.ms = 1
# kafka.max.request.size = 1048576
# kafka.buffer.memory = 33554432
# kafka.max.in.flight.requests.per.connection = 1
# kafka.retries = 0

# kafka.kerberos.enable = false
# kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
# kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"

##################################################
######### 		    RocketMQ	     #############
##################################################
rocketmq.producer.group = redis-test-producer
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
# needs to be changed; separate multiple values with semicolons
rocketmq.namesrv.addr = 192.168.56.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag = canal_tag

##################################################
######### 		    RabbitMQ	     #############
##################################################
# rabbitmq.host =
# rabbitmq.virtual.host =
# rabbitmq.exchange =
# rabbitmq.username =
# rabbitmq.password =
# rabbitmq.deliveryMode =

conf/example/instance.properties

## mysql serverId
canal.instance.mysql.slaveId = 5
#position info - change to your own database information
canal.instance.master.address = 127.0.0.1:3306 
canal.instance.master.journal.name = 
canal.instance.master.position = 
canal.instance.master.timestamp = 
#canal.instance.standby.address = 
#canal.instance.standby.journal.name =
#canal.instance.standby.position = 
#canal.instance.standby.timestamp = 
#username/password - change to your own database credentials
canal.instance.dbUsername = canal  
canal.instance.dbPassword = canal
canal.instance.defaultDatabaseName =
canal.instance.connectionCharset = UTF-8
#table regex
canal.instance.filter.regex =.*\\..*
canal.mq.topic = canal_topic
canal.mq.partition = 0
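The filter regex above (.*\\..*) subscribes to every table in every schema. Per the canal wiki, common variants look like this (multiple rules are comma-separated, escapes use a double backslash):

# one whole schema:          test\\..*
# one single table:          test.user
# several rules combined:    test\\..*,test2.user1,test2.user2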

        3. Start canal.

    Start RocketMQ (the NameServer and a Broker); a minimal sketch of both startups follows.
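A startup sketch, assuming default installation layouts (paths and the NameServer address are examples):

# start canal; check logs/canal/canal.log and logs/example/example.log afterwards
sh bin/startup.sh        # on Windows: bin\startup.bat

# start the RocketMQ NameServer, then a Broker pointing at it
sh bin/mqnamesrv &
sh bin/mqbroker -n 192.168.56.1:9876 &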

Testing:

        <!--canal-->
        <dependency>
            <groupId>com.alibaba.otter</groupId>
            <artifactId>canal.client</artifactId>
            <version>1.1.4</version>
        </dependency>
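The listener below uses the @RocketMQMessageListener annotation, which comes from the RocketMQ Spring support rather than canal.client, so the project also needs that starter; a sketch (the version is an example):

        <!--rocketmq-->
        <dependency>
            <groupId>org.apache.rocketmq</groupId>
            <artifactId>rocketmq-spring-boot-starter</artifactId>
            <version>2.2.0</version>
        </dependency>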
RocketMQ listener:

package com.cloud.redis.consumer.binlog;

import lombok.extern.slf4j.Slf4j;
import org.apache.rocketmq.spring.annotation.ConsumeMode;
import org.apache.rocketmq.spring.annotation.MessageModel;
import org.apache.rocketmq.spring.annotation.RocketMQMessageListener;
import org.apache.rocketmq.spring.core.RocketMQListener;
import org.springframework.stereotype.Component;

@Slf4j
@Component
@RocketMQMessageListener(topic = "canal_topic",
                        selectorExpression = "*",
                        consumerGroup = "canal_consumer",
                        messageModel = MessageModel.CLUSTERING, // each message goes to one consumer in the group (BROADCASTING would deliver to all)
                        consumeMode = ConsumeMode.ORDERLY)      // consume each queue sequentially (CONCURRENTLY for parallel consumption)
public class MyRocketMQListener implements RocketMQListener<String> {
    // with canal.mq.flatMessage = true the payload is a flat-message JSON string
    @Override
    public void onMessage(String message) {
        log.info("MQListener1... {}", message);
    }
}
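The listener above only logs the payload; the actual goal is to delete the affected Redis keys. Below is a minimal sketch of that step, assuming canal's flatMessage JSON shape (fields such as "table", "type", "data", "pkNames"), fastjson on the classpath, and a hypothetical "table:primaryKey" cache-key scheme; call invalidate(message) from onMessage above.

package com.cloud.redis.consumer.binlog;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class BinlogCacheInvalidator {

    @Autowired
    private StringRedisTemplate redisTemplate;

    /** Parse one canal flatMessage and delete the cache keys of the changed rows. */
    public void invalidate(String flatMessage) {
        JSONObject msg = JSON.parseObject(flatMessage);
        String type = msg.getString("type");            // INSERT / UPDATE / DELETE / DDL ...
        if (!"INSERT".equals(type) && !"UPDATE".equals(type) && !"DELETE".equals(type)) {
            return;                                     // ignore DDL and other entries
        }
        String table = msg.getString("table");
        JSONArray rows = msg.getJSONArray("data");      // one JSON object per changed row
        JSONArray pkNames = msg.getJSONArray("pkNames");
        if (rows == null || pkNames == null || pkNames.isEmpty()) {
            return;
        }
        String pk = pkNames.getString(0);
        for (int i = 0; i < rows.size(); i++) {
            String id = rows.getJSONObject(i).getString(pk);
            String cacheKey = table + ":" + id;         // hypothetical key scheme
            redisTemplate.delete(cacheKey);
            log.info("binlog {} on {}, deleted cache key {}", type, table, cacheKey);
        }
    }
}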

Problems encountered

1. MySQL service error: "the MySQL service on the local computer started and then stopped" - check the .err log under the data directory.

2. Connection failure (see D:\software\canal\canal\logs\user_date): "caching_sha2_password Auth failed" - inspect the auth plugin with
   select host, user, authentication_string, plugin from user;
   then switch the canal account to the legacy plugin:
   ALTER USER 'canal'@'%' IDENTIFIED WITH mysql_native_password BY 'canal';

3. c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> begin to find start position, it will be long time for reset or first position - a normal log line; locating the start position can take a while on a reset or first start.

4. "The specified topic is blank" - set canal.mq.topic.

5. "Connection refused" - when connecting with the canal TCP client directly, canal.serverMode must be tcp.

6. NoClassDefFoundError: org/apache/rocketmq/client/consumer/DefaultLitePullConsumer - add the missing rocketmq client jar.

7. Error[UPDATE command denied to user 'canal'@'localhost' for table 'canal_node_server'] - re-grant and verify privileges:
   1. GRANT SELECT, INSERT, UPDATE, DELETE, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
   2. flush privileges;
   3. show grants for 'canal'@'%';

Reference: QuickStart · alibaba/canal Wiki · GitHub
