A bug when canal writes to Kafka
Kafka's default maximum message size is 1 MB, and the canal author knows this, so before handing a message to Kafka, canal checks its size and throws an error if the message exceeds 1 MB.
The catch is that the error message tells you to raise Kafka's configuration, when the cap is actually hard-coded on canal's side and Kafka is not to blame. I changed the Kafka configuration back and forth countless times and it made no difference.
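To make the failure mode concrete, here is a minimal, self-contained sketch (not canal's actual code; the class name, constant, and message wording are invented for illustration) of a client-side size guard whose error message points at the wrong knob:

```java
public class SizeGuardSketch {
    // Hypothetical stand-in for canal's check. Because the cap is fixed in the
    // client code, no amount of Kafka-side configuration can lift it.
    static final int HARD_CODED_MAX = 1024 * 1024; // 1 MB

    static boolean exceedsCap(byte[] payload) {
        return payload.length > HARD_CODED_MAX;
    }

    static void send(byte[] payload) {
        if (exceedsCap(payload)) {
            // Misleading message: it blames Kafka's config, but the limit lives here.
            throw new IllegalStateException("message is too large, please adjust the kafka config");
        }
        // ... hand the record off to the real Kafka producer ...
    }

    public static void main(String[] args) {
        try {
            send(new byte[2 * 1024 * 1024]); // a 2 MB payload never reaches Kafka
        } catch (IllegalStateException e) {
            System.out.println("rejected before reaching Kafka: " + e.getMessage());
        }
    }
}
```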
My current workaround was to patch canal's source (the latest source on GitHub has already added this as a configuration option):
vi server/src/main/java/com/alibaba/otter/canal/kafka/CanalKafkaProducer.java
public void init(MQProperties kafkaProperties) {
    this.kafkaProperties = kafkaProperties;
    Properties properties = new Properties();
    properties.put("bootstrap.servers", kafkaProperties.getServers());
    properties.put("acks", kafkaProperties.getAcks());
    properties.put("compression.type", kafkaProperties.getCompressionType());
    properties.put("retries", kafkaProperties.getRetries());
    properties.put("batch.size", kafkaProperties.getBatchSize());
    properties.put("linger.ms", kafkaProperties.getLingerMs());
    properties.put("buffer.memory", kafkaProperties.getBufferMemory());
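The snippet above is cut off mid-line, but the patch itself amounts to one extra producer property. As a hedged sketch (max.request.size is the standard Kafka producer setting for this cap; the 8 MB value and broker address below are placeholders, not values from canal), the size-related knobs look like this:

```java
import java.util.Properties;

public class ProducerSizeConfigSketch {
    // Builds producer properties with the message-size cap made explicit.
    // Note that raising max.request.size on the producer alone is not enough if
    // the broker still rejects large messages: message.max.bytes (broker-wide)
    // and max.message.bytes (per topic) must allow the same size, and consumers
    // need a matching fetch.max.bytes / max.partition.fetch.bytes.
    public static Properties producerProps(String servers, int maxRequestSize) {
        Properties p = new Properties();
        p.put("bootstrap.servers", servers); // placeholder address
        p.put("max.request.size", String.valueOf(maxRequestSize));
        return p;
    }

    public static void main(String[] args) {
        Properties p = producerProps("localhost:9092", 8 * 1024 * 1024); // 8 MB, placeholder
        System.out.println(p.getProperty("max.request.size"));
    }
}
```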