Kafka SASL Authentication

Our requirement: the HK Kafka cluster should accept unauthenticated producers on the internal network, while consumers in mainland China connecting over the public network must authenticate.


Environment configuration

Edit $KAFKA_HOME/config/server.properties (e.g. with vim) and add or modify the following entries:

listeners=SASL_PLAINTEXT://0.0.0.0:9092,PLAINTEXT://192.168.1.xx:19092
security.inter.broker.protocol=SASL_PLAINTEXT 
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
advertised.listeners=SASL_PLAINTEXT://118.184.xx.xxx:9092,PLAINTEXT://192.168.1.xx:19092

Note: SASL_PLAINTEXT://118.184.xx.xxx:9092 is the authenticated public-network port; PLAINTEXT://192.168.1.xx:19092 is the unauthenticated internal-network port.


In $KAFKA_HOME/config, create kafka_server_jaas.conf:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin123"
    user_admin="admin123"
    user_test="test123";
};

Here username/password is the identity the broker itself uses for inter-broker connections, and each user_<name>="<password>" entry declares a client account; the file above allows the users admin and test.

Create kafka_client_jaas.conf:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="test"
  password="test123";
};
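As an alternative to the JAAS file, clients on kafka-clients 0.10.2+ can supply the same login module inline through the sasl.jaas.config client property. A minimal sketch (the broker address and the test credentials are the placeholders used throughout this post):

```java
import java.util.Properties;

public class SaslClientConfig {

    // Builds client properties that carry the JAAS login inline,
    // so no external kafka_client_jaas.conf file is needed.
    public static Properties clientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "118.184.xx.xxx:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"test\" password=\"test123\";");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(clientProps().getProperty("sasl.jaas.config"));
    }
}
```

These properties can be passed straight to a KafkaConsumer or KafkaProducer constructor in place of the System.setProperty call shown in the consumer code below.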

Create kafka_zoo_jaas.conf:

zookeeper {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="admin"
        password="admin123";
};

Add to $KAFKA_HOME/config/consumer.properties:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

Add to $KAFKA_HOME/bin/kafka-server-start.sh:

base_dir=$(dirname $0)
if [ "x$KAFKA_OPTS" = "x" ]; then
    export KAFKA_OPTS="-Djava.security.auth.login.config=$base_dir/../config/kafka_server_jaas.conf"
fi

Note the test `[ "x$KAFKA_OPTS" = "x" ]`: the option is only set when KAFKA_OPTS is empty, so a caller-provided value is not overwritten.

Add to the ZooKeeper startup command (optional; only needed if ZooKeeper itself enforces SASL):

export KAFKA_OPTS="-Djava.security.auth.login.config=$KAFKA_HOME/config/kafka_zoo_jaas.conf"

Start Kafka

cd $KAFKA_HOME
bin/kafka-server-start.sh -daemon config/server.properties

Testing

Produce to the topic without authentication (internal IPs):

bin/kafka-console-producer.sh --broker-list 192.168.1.xx:19092,192.168.1.xx:19092,192.168.1.xx:19092 --topic test

Consume from the topic with authentication (public IPs):

bin/kafka-console-consumer.sh --bootstrap-server 118.184.xx.xxx:9092,118.184.xx.xxx:9092,118.184.xx.xxx:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer

Java client consumer code

Copy $KAFKA_HOME/config/kafka_client_jaas.conf into the Java project.
Maven configuration:

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.2.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>0.10.2.0</version>
        </dependency>
package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class KafkaNewConsumerDemo {

    public static void main(String[] args) {
        Properties props = new Properties();

        props.put("bootstrap.servers", "118.184.xx.xxx:9092,118.184.xx.xxx:9092,118.184.xx.xxx:9092");
        props.put("group.id", "kafka-group");
        // zookeeper.* settings belong to the old consumer API and are ignored here;
        // the new consumer manages its session with the brokers directly.
        props.put("session.timeout.ms", "30000");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "latest");

        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        System.setProperty("java.security.auth.login.config", "config/kafka_client_jaas.conf"); // path to the JAAS config file
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");


        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("test"));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
        }finally {
            consumer.close();
        }

    }

}
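For completeness, the unauthenticated internal-network producer mirrors the console test above. A sketch of the client configuration (the internal address is the placeholder from this post; sending requires kafka-clients on the classpath):

```java
import java.util.Properties;

public class InternalProducerConfig {

    // Producer properties for the internal PLAINTEXT listener:
    // no security.protocol / sasl.* entries are needed on this side.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.1.xx:19092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // With kafka-clients on the classpath, sending is then just:
        //   Producer<String, String> producer = new KafkaProducer<>(producerProps());
        //   producer.send(new ProducerRecord<>("test", "key", "value"));
        //   producer.close();
        System.out.println(producerProps());
    }
}
```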