Configuring Kerberos authentication for Kafka && connecting to a Kerberos-secured Kafka from Java/.NET Core

Environment setup

Installing and configuring the KDC

  1. Install the KDC

    yum install -y krb5-libs krb5-server krb5-workstation
    
  2. Edit the configuration file, uncommenting the realms section and the default_realm setting

    vi /etc/krb5.conf
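With the comments removed, the relevant parts of /etc/krb5.conf look roughly like the following (a sketch, assuming the EXAMPLE.COM realm and the KDC host kerberos.example.com used throughout this post):

```ini
[libdefaults]
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
```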

  3. Create the database that will hold the principals

    kdb5_util create -r EXAMPLE.COM -s
    
  4. Use kadmin.local to add the Kafka and ZooKeeper principals and export their keytabs (type q to quit kadmin.local)

    cd ~
    kadmin.local
    add_principal kafka/kerberos.example.com@EXAMPLE.COM
    xst -k kafka_krb5.keytab kafka/kerberos.example.com@EXAMPLE.COM
    add_principal zookeeper/kerberos.example.com@EXAMPLE.COM
    xst -k zk_krb5.keytab zookeeper/kerberos.example.com@EXAMPLE.COM
    
  5. Enable the krb5kdc and kadmin services at boot and start them

    chkconfig krb5kdc on
    chkconfig kadmin on
    service krb5kdc start
    service kadmin start
    
  6. Configure the hostname mapping (/etc/hosts)
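A sketch of the /etc/hosts entry, assuming the server IP 192.168.18.244 that appears later in this post:

```ini
192.168.18.244 kerberos.example.com
```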

  7. Verify the KDC and inspect the ticket information

    kinit -k -t kafka_krb5.keytab kafka/kerberos.example.com@EXAMPLE.COM
    klist
    


Installing the Java environment

Do not use the system's bundled OpenJDK for the Java environment; it can lead to unexpected problems.

Download the JDK binary tarball from the official site, extract it, and configure the environment variables (/etc/profile).

Check the Java version with java -version.

Replace the JCE policy files; make sure they match your JDK version (I installed JDK 8).
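The /etc/profile additions might look like the following (a sketch; the install path /usr/local/jdk1.8.0_291 is an assumption, substitute wherever you extracted the tarball):

```shell
# Hypothetical JDK install path - adjust to your extraction directory
export JAVA_HOME=/usr/local/jdk1.8.0_291
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib
```

After editing, reload with `source /etc/profile` and verify with `java -version`.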

Installing and configuring Kafka

  1. Download from: http://kafka.apache.org/downloads

  2. Extract the archive

    tar xf kafka_2.13-2.8.0.tgz
    
  3. Edit the configuration

    Append the following to the end of config/server.properties:

    listeners=SASL_PLAINTEXT://kerberos.example.com:9092
    security.inter.broker.protocol=SASL_PLAINTEXT
    sasl.mechanism.inter.broker.protocol=GSSAPI
    sasl.enabled.mechanisms=GSSAPI
    sasl.kerberos.service.name=kafka
    

    Append the following to the end of config/zookeeper.properties:

    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
    jaasLoginRenew=3600000
    kerberos.removeHostFromPrincipal=true
    kerberos.removeRealmFromPrincipal=true
    

    Append the following to the end of config/consumer.properties:

    sasl.mechanism=GSSAPI
    security.protocol=SASL_PLAINTEXT
    sasl.kerberos.service.name=kafka
    

    Append the following to the end of config/producer.properties:

    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=GSSAPI
    sasl.kerberos.service.name=kafka
    
  4. Create the authentication config files
    Create three JAAS config files for the Kerberos connections; they can be placed in any directory, e.g. /etc/kafka

    Create kafka_jaas.conf

    vi /etc/kafka/kafka_jaas.conf
    
    KafkaServer {
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            useTicketCache=false
            keyTab="/root/kafka_krb5.keytab"
            principal="kafka/kerberos.example.com@EXAMPLE.COM";
    };
    

    Create kafka_client_jaas.conf

    vi /etc/kafka/kafka_client_jaas.conf
    
    KafkaClient {
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            useTicketCache=false
            keyTab="/root/kafka_krb5.keytab"
            principal="kafka/kerberos.example.com@EXAMPLE.COM";
    };
    

    Create zookeeper_jaas.conf

    vi /etc/kafka/zookeeper_jaas.conf
    
    Server {
      com.sun.security.auth.module.Krb5LoginModule required debug=true
      useKeyTab=true
      keyTab="/root/zk_krb5.keytab"
      storeKey=true
      useTicketCache=false
      principal="zookeeper/kerberos.example.com@EXAMPLE.COM";
    };
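Depending on your ZooKeeper ACL settings, the broker may also need a Client section in kafka_jaas.conf so it can authenticate to the SASL-enabled ZooKeeper. This section is not part of the original setup; a sketch, reusing the same keytab and principal:

```ini
Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        useTicketCache=false
        keyTab="/root/kafka_krb5.keytab"
        principal="kafka/kerberos.example.com@EXAMPLE.COM";
};
```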
    

Testing

Open four shells and set a different KAFKA_OPTS in each, mainly to point at the JAAS file created above (you could also bake this environment variable into the startup scripts; I have not done so here).

zookeeper:

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf -Dsun.security.krb5.debug=true"

bin/zookeeper-server-start.sh config/zookeeper.properties

kafka:

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf -Dsun.security.krb5.debug=true"

bin/kafka-server-start.sh config/server.properties

Producer:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"

bin/kafka-console-producer.sh --broker-list 192.168.18.244:9092 --topic topicTest  --producer.config config/producer.properties

Consumer:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"

bin/kafka-console-consumer.sh --bootstrap-server 192.168.18.244:9092 --topic topicTest --from-beginning --consumer.config config/consumer.properties
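The topic is never created explicitly above; with default broker settings Kafka auto-creates it on first use, but you can also create it up front. A sketch using the same client JAAS file (the partition and replication counts are assumptions for a single-broker test setup):

```shell
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
bin/kafka-topics.sh --create --topic topicTest --partitions 1 --replication-factor 1 \
  --bootstrap-server 192.168.18.244:9092 --command-config config/producer.properties
```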

Connecting from Java

Preparation

Copy the relevant config files from the Kafka server to the local machine (see Environment setup above): krb5.conf, kafka_client_jaas.conf, and the keytab.

Update the keytab path in jaas.conf to where the keytab lives on the local machine.

Configure the hosts mapping on the client machine as well.

Code

Maven dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.0</version>
</dependency>

Producer example:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.UUID;
import java.util.Date;
import java.util.Properties;

public class TestProducer {


    public static void main(String... args) throws InterruptedException {
        String topic = "swtest";
        System.setProperty("java.security.auth.login.config", "D:\\yangqin\\kafkaTest\\kafka_client_jaas.conf");
        System.setProperty("java.security.krb5.conf", "D:\\yangqin\\kafkaTest\\krb5.conf");
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.18.244:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 10000; i++) {
            String s = UUID.randomUUID().toString() + " " + i + " Test Date: " + new Date();
            System.out.println(s);
            producer.send(new ProducerRecord<>(topic, s));
            Thread.sleep(1000);
        }
        // Flush any buffered records and release resources before exiting
        producer.close();
    }
}

Results

The Java console output shows the generated messages.

Check the messages on the server side:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
bin/kafka-console-consumer.sh --bootstrap-server 192.168.18.244:9092 --topic swtest --from-beginning --consumer.config config/consumer.properties


Connecting from .NET Core

Assume Kerberos is installed on the server and the test program runs on a client machine.

Preparation

Copy the correct keytab and krb5.conf files to the client. The keytab path is specified by the program (here: /opt/yangqin/kafka/kafka_krb5.keytab); krb5.conf goes under /etc.

Dependency

The project references the Confluent.Kafka NuGet package (used in the code below).

Code

using Confluent.Kafka;
using System;
using System.IO;
using System.Threading.Tasks;


namespace kafkaTest
{
    class Program
    {
        static async Task Main(string[] args)
        {

            string brokerList = "192.168.18.244:9092";
            string topicName = "netcoreKafka";

            var config = new ProducerConfig { 
                BootstrapServers = brokerList,
                SaslKerberosPrincipal = "kafka/kerberos.example.com@EXAMPLE.COM",
                SaslKerberosServiceName = "kafka",
                SaslKerberosKeytab = "/opt/yangqin/kafka/kafka_krb5.keytab",
                SaslMechanism = SaslMechanism.Gssapi,
                SecurityProtocol = SecurityProtocol.SaslPlaintext

            };

            using (var producer = new ProducerBuilder<string, string>(config).Build())
            {
                Console.WriteLine("\n-----------------------------------------------------------------------");
                Console.WriteLine($"Producer {producer.Name} producing on topic {topicName}.");
                Console.WriteLine("-----------------------------------------------------------------------");
                Console.WriteLine("To create a kafka message with UTF-8 encoded key and value:");
                Console.WriteLine("> key value<Enter>");
                Console.WriteLine("To create a kafka message with a null key and UTF-8 encoded value:");
                Console.WriteLine("> value<enter>");
                Console.WriteLine("Ctrl-C to quit.\n");

                var cancelled = false;
                Console.CancelKeyPress += (_, e) => {
                    e.Cancel = true; // prevent the process from terminating.
                    cancelled = true;
                };

                while (!cancelled)
                {
                    Console.Write("> ");

                    string text;
                    try
                    {
                        text = Console.ReadLine();
                    }
                    catch (IOException)
                    {
                        // IO exception is thrown when ConsoleCancelEventArgs.Cancel == true.
                        break;
                    }
                    if (text == null)
                    {
                        // Console returned null before 
                        // the CancelKeyPress was treated
                        break;
                    }

                    string key = null;
                    string val = text;

                    // split line if both key and value specified.
                    int index = text.IndexOf(" ");
                    if (index != -1)
                    {
                        key = text.Substring(0, index);
                        val = text.Substring(index + 1);
                    }

                    try
                    {
                        // Note: Awaiting the asynchronous produce request below prevents flow of execution
                        // from proceeding until the acknowledgement from the broker is received (at the 
                        // expense of low throughput).
                        var deliveryReport = await producer.ProduceAsync(
                            topicName, new Message<string, string> { Key = key, Value = val });

                        Console.WriteLine($"delivered to: {deliveryReport.TopicPartitionOffset}");
                    }
                    catch (ProduceException<string, string> e)
                    {
                        Console.WriteLine($"failed to deliver message: {e.Message} [{e.Error.Code}]");
                    }
                }

                // Since we are producing synchronously, at this point there will be no messages
                // in-flight and no delivery reports waiting to be acknowledged, so there is no
                // need to call producer.Flush before disposing the producer.
            }
        }
    }
}

Common errors

  1. This API does not currently run on Windows.
  2. The machine running this code needs additional SASL packages installed:

centos: yum install cyrus-sasl-gssapi cyrus-sasl-devel
ubuntu: apt-get install libsasl2-modules-gssapi-mit libsasl2-dev

If they are missing, the error is:
Cyrus/libsasl2 is missing a GSSAPI module: make sure the libsasl2-modules-gssapi-mit or cyrus-sasl-gssapi packages are installed

  3. The client must have a hosts mapping for the realm specified in krb5.conf.
    If it is missing, the error is:
    Cannot contact any KDC for realm 'EXAMPLE.COM'

  4. krb5.conf must be placed under /etc on the client; if it is missing, the error is: Cannot find KDC for realm "EXAMPLE.COM" while getting initial credentials

  5. The client clock must be synchronized with the server's.
    If it is not, the error is:
    Unspecified GSS failure. Minor code may provide more information (Ticket expired) (after 1ms in state AUTH_REQ)

  6. The correct keytab file must be copied to the path the program expects on the client; the sample code uses /opt/yangqin/kafka.
    If it is missing, the error is: kinit: Key table file '/opt/yangqin/kafka/kafka_krb5.keytab' not found while getting initial credentials

  7. The client's krb5.conf must set rdns=true.
    Otherwise the error is:
    Server kafka/192.168.18.244@EXAMPLE.COM not found in Kerberos database
    Note: the KDC only holds kafka/kerberos.example.com@EXAMPLE.COM, not kafka/192.168.18.244@EXAMPLE.COM, hence the "not found in Kerberos database" error. Setting rdns to true in krb5.conf allows the IP to be resolved back to the hostname.
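The corresponding libdefaults entry on the client might look like this (a sketch of the client krb5.conf):

```ini
[libdefaults]
 default_realm = EXAMPLE.COM
 rdns = true
```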
  8. Startup error: Invalid UID in persistent keyring name
    I have run into this inside a docker container.
    Option 1: comment out the offending field in the client's krb5.conf.
    Option 2: run the container in privileged mode.
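The field in question is, to the best of my knowledge, the keyring-based credential cache default; this is an assumption, since the original screenshot is not reproduced here. A sketch:

```ini
[libdefaults]
 # Hypothetical fix: commenting this out falls back to a file-based
 # credential cache, avoiding "Invalid UID in persistent keyring name"
 # inside containers
 # default_ccache_name = KEYRING:persistent:%{uid}
```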

Verification

Send a few messages from the client.

Check the messages on the server:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
bin/kafka-console-consumer.sh --bootstrap-server 192.168.18.244:9092 --topic cunfluentKafka --from-beginning --consumer.config config/consumer.properties
