canal-adapter pitfalls in practice: adapting canal-server's Kafka sync for SASL/PLAIN authentication


Preface

canal-server's sync to Kafka natively supports Kerberos authentication, but the Kafka cluster our project uses authenticates with SASL/PLAIN, so the canal-server-to-Kafka path needs to be adapted accordingly.

Preparation

Setting up Kafka SASL/PLAIN authentication
I followed this article: Setting up Kafka SASL/PLAIN authentication

Learn how to produce and consume messages from Java against a Kafka cluster secured with SASL/PLAIN.

  • Authenticate via a JVM system property. In canal cluster mode this approach interferes with the connection to ZooKeeper (the JAAS config set this way applies JVM-wide); standalone mode is unaffected:
Properties props = new Properties();
props.put("bootstrap.servers", "127.0.0.1:8123");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

//Authentication: put the credentials in a jaas.conf file and point the JVM system property at it
System.setProperty("java.security.auth.login.config", "E:/work/saslconf/kafka_write_jaas.conf");
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN");

Producer<String, String> producer = new KafkaProducer<>(props);
  • Authenticate via client configuration parameters; this is the approach I used:
Properties props = new Properties();
props.put("bootstrap.servers", "127.0.0.1:8123");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

//Authentication: put the credentials directly into the props object
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put("sasl.jaas.config",
                        "org.apache.kafka.common.security.plain.PlainLoginModule required username='testkafkaUser' password='testKafkaPwd';");
                        
Producer<String, String> producer = new KafkaProducer<>(props);
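The inline JAAS entry above follows a fixed template. As a minimal sketch (the helper class and method names are my own, not part of canal or the Kafka client), it can be built from the credentials like this:

```java
public class PlainJaas {
    // Builds the inline JAAS entry expected by the Kafka client's
    // sasl.jaas.config property for SASL/PLAIN. Kafka's JAAS parser
    // accepts both single- and double-quoted values.
    static String forCredentials(String user, String password) {
        return "org.apache.kafka.common.security.plain.PlainLoginModule required"
                + " username='" + user + "' password='" + password + "';";
    }

    public static void main(String[] args) {
        System.out.println(forCredentials("testkafkaUser", "testKafkaPwd"));
    }
}
```

Note the trailing semicolon inside the string: it terminates the JAAS entry and is required, a detail that is easy to miss when concatenating the string by hand.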

Modification

Now that we understand how Kafka clients authenticate, the next step is to modify the canal-server-to-Kafka code. Browsing the source, we find the relevant locations:

  • com.alibaba.otter.canal.deployer.CanalStarter.java
public synchronized void start() throws Throwable {
        String serverMode = CanalController.getProperty(properties, CanalConstants.CANAL_SERVER_MODE);
        if (serverMode.equalsIgnoreCase("kafka")) {
        
            //This is the producer used for syncing to Kafka
            canalMQProducer = new CanalKafkaProducer();
        
            
        } else if (serverMode.equalsIgnoreCase("rocketmq")) {
            canalMQProducer = new CanalRocketMQProducer();
        } else if (serverMode.equalsIgnoreCase("rabbitmq")) {
            canalMQProducer = new CanalRabbitMQProducer();
        }

        MQProperties mqProperties = null;
        if (canalMQProducer != null) {
        
            // This is the configuration needed for syncing to Kafka; note the buildMQProperties method
            mqProperties = buildMQProperties(properties);
            
            
            
//Part of the buildMQProperties method. We can see that the CanalConstants class provides the keys used to read canal-server's configuration from application.yml
private static MQProperties buildMQProperties(Properties properties) {
        MQProperties mqProperties = new MQProperties();
        String servers = CanalController.getProperty(properties, CanalConstants.CANAL_MQ_SERVERS);
        if (!StringUtils.isEmpty(servers)) {
            mqProperties.setServers(servers);
        }
        String retires = CanalController.getProperty(properties, CanalConstants.CANAL_MQ_RETRIES);
        if (!StringUtils.isEmpty(retires)) {
            mqProperties.setRetries(Integer.valueOf(retires));
        }
  • com.alibaba.otter.canal.kafka.CanalKafkaProducer.java
//Here is the Kafka authentication-related code
public void init(MQProperties kafkaProperties) {
        super.init(kafkaProperties);

        this.kafkaProperties = kafkaProperties;
        Properties properties = new Properties();
        properties.put("bootstrap.servers", kafkaProperties.getServers());
        properties.put("acks", kafkaProperties.getAcks());
        properties.put("compression.type", kafkaProperties.getCompressionType());
        properties.put("batch.size", kafkaProperties.getBatchSize());
        properties.put("linger.ms", kafkaProperties.getLingerMs());
        properties.put("max.request.size", kafkaProperties.getMaxRequestSize());
        properties.put("buffer.memory", kafkaProperties.getBufferMemory());
        properties.put("key.serializer", StringSerializer.class.getName());
        properties.put("max.in.flight.requests.per.connection", 1);

        if (!kafkaProperties.getProperties().isEmpty()) {
            properties.putAll(kafkaProperties.getProperties());
        }
        properties.put("retries", kafkaProperties.getRetries());
        if (kafkaProperties.isKerberosEnable()) {
            File krb5File = new File(kafkaProperties.getKerberosKrb5FilePath());
            File jaasFile = new File(kafkaProperties.getKerberosJaasFilePath());
        logger.info("kafka authenticating... jaasFile exists: {}", jaasFile.exists());
            if (krb5File.exists() && jaasFile.exists()) {
                logger.info("Authenticating with Kerberos...");
                // Configure Kerberos authentication; absolute paths are required
                System.setProperty("java.security.krb5.conf", krb5File.getAbsolutePath());
                System.setProperty("java.security.auth.login.config", jaasFile.getAbsolutePath());
                System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
                properties.put("security.protocol", "SASL_PLAINTEXT");
                properties.put("sasl.kerberos.service.name", "kafka");

With the relevant code located, the modification is straightforward. First, add two configuration items to canal-server's application.yml (the account used to connect to Kafka):

  • application.yml
canal.mq.kafka.plain.linkUser = testkafkaUser
canal.mq.kafka.plain.linkPassword = testKafkaPwd

Then add constants for these two keys in the CanalConstants class:

  • com.alibaba.otter.canal.deployer.CanalConstants.java
    public static final String CANAL_MQ_KAFKA_PLAIN_LINK_USER = ROOT + "." + "mq.kafka.plain.linkUser";
    public static final String CANAL_MQ_KAFKA_PLAIN_LINK_PASSWORD = ROOT + "." + "mq.kafka.plain.linkPassword";
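Assuming CanalConstants.ROOT is the "canal" prefix (which the existing canal.mq.* keys suggest, but which I have not re-verified against the source), the two constants resolve to exactly the keys added to the config file above; a quick sketch:

```java
public class KeyCheck {
    // Mirrors the CanalConstants pattern locally for illustration;
    // ROOT is assumed to be "canal", matching keys like canal.mq.servers.
    static final String ROOT = "canal";
    static final String CANAL_MQ_KAFKA_PLAIN_LINK_USER = ROOT + "." + "mq.kafka.plain.linkUser";
    static final String CANAL_MQ_KAFKA_PLAIN_LINK_PASSWORD = ROOT + "." + "mq.kafka.plain.linkPassword";

    public static void main(String[] args) {
        System.out.println(CANAL_MQ_KAFKA_PLAIN_LINK_USER);
        System.out.println(CANAL_MQ_KAFKA_PLAIN_LINK_PASSWORD);
    }
}
```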

Next, read these two settings in the buildMQProperties method; also add the two fields to the MQProperties class along with their getters and setters:

  • com.alibaba.otter.canal.deployer.CanalStarter.java
private static MQProperties buildMQProperties(Properties properties) {

     String kafkaPlainLinkUser = CanalController.getProperty(properties,
        		CanalConstants.CANAL_MQ_KAFKA_PLAIN_LINK_USER);
        if (!StringUtils.isEmpty(kafkaPlainLinkUser)) {
        	mqProperties.setPlainLinkUser(kafkaPlainLinkUser);
        }
        
        String kafkaPlainLinkPwd = CanalController.getProperty(properties,
        		CanalConstants.CANAL_MQ_KAFKA_PLAIN_LINK_PASSWORD);
        if (!StringUtils.isEmpty(kafkaPlainLinkPwd)) {
        	mqProperties.setPlainLinkPassword(kafkaPlainLinkPwd);
        }
}
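The MQProperties side of the change is not shown above; here is a minimal sketch of the two fields and accessors it needs (the rest of the real class is elided, and the field names simply follow the setters used in buildMQProperties):

```java
public class MQProperties {
    // New fields holding the SASL/PLAIN credentials read from application.yml
    private String plainLinkUser;
    private String plainLinkPassword;

    public String getPlainLinkUser() { return plainLinkUser; }
    public void setPlainLinkUser(String plainLinkUser) { this.plainLinkUser = plainLinkUser; }
    public String getPlainLinkPassword() { return plainLinkPassword; }
    public void setPlainLinkPassword(String plainLinkPassword) { this.plainLinkPassword = plainLinkPassword; }
}
```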

Finally, implement the corresponding branch in the init method of CanalKafkaProducer:

  • com.alibaba.otter.canal.kafka.CanalKafkaProducer.java
} else if (StringUtils.isNotEmpty(kafkaProperties.getPlainLinkUser())
        && StringUtils.isNotEmpty(kafkaProperties.getPlainLinkPassword())) {
    // SASL/PLAIN: pass the credentials inline instead of via a JAAS file.
    // StringUtils.isNotEmpty is null-safe, so unset config keys simply skip this branch.
    logger.info("Authenticating with SASL_PLAINTEXT...");
    properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
    properties.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    properties.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required username='"
                    + kafkaProperties.getPlainLinkUser() + "' password='"
                    + kafkaProperties.getPlainLinkPassword() + "';");
}

Open a command line in the project root and run the following command to build the package:

mvn clean install -Denv=release

I hope this post helps. For more canal-adapter pitfall notes, follow my personal WeChat official account: a rookie programmer's tech sharing and daily meltdowns. Thanks for your support!
