Software Installation Environment
| Component | Version |
| --- | --- |
| Source-side OGG | 12.3.0.1.4 |
| Target-side OGG | 12.3.0.1.2 |
| Ambari | HDP-2.6.4.0 |
| Kerberos | 1.10.3-10 |
| Kafka | 0.10.1 |
| ZooKeeper | 3.4.6 |
I. Using Kafka with Kerberos enabled under Ambari
For a component newly added to a distributed system, enabling Kerberos generally takes three steps: 1. generate a keytab file for the corresponding user; 2. write a jaas.conf configuration file specifying the keytab location and the login principal; 3. add JVM parameters at component startup that tell the JVM which authentication strategy to use and where jaas.conf is located.
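Step 3 can be sketched generically as a pair of JVM flags; the paths below are illustrative assumptions for a typical HDP layout, not values taken from this cluster:

```shell
# Step 3 in general form: point the JVM at the Kerberos client config and
# the JAAS login config (paths are hypothetical; adjust per host).
KRB5_CONF=/etc/krb5.conf
JAAS_CONF=/usr/hdp/current/kafka-broker/config/kafka_jaas.conf
JVM_OPTS="-Djava.security.krb5.conf=$KRB5_CONF -Djava.security.auth.login.config=$JAAS_CONF"
echo "$JVM_OPTS"
```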
1. Create the Kerberos user and keytab file
After Kafka and ZooKeeper are installed under Ambari, their service keytab files are generated automatically; a separate keytab must still be created for the client that will access Kafka and ZooKeeper.
Generate the client keytab file on the IPA host with the following commands:
[root@freeipa keytabs]#ipa user-add kafkauser@COM
[root@freeipa keytabs]#kadmin.local
xst -norandkey -k /etc/security/keytabs/clientkafka.service.keytab kafkauser@COM
Copy the generated keytab file into the /etc/security/keytabs/ directory on every host.
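Before moving on, it is worth confirming on each host that the copied keytab can actually obtain a ticket. A minimal check, guarded so it is harmless on hosts without the Kerberos client tools (the principal follows the example above):

```shell
KEYTAB=/etc/security/keytabs/clientkafka.service.keytab
PRINCIPAL=kafkauser@COM
# Obtain a TGT from the keytab and list it; fall back to a notice when the
# Kerberos tools or the keytab are missing on this host.
if command -v kinit >/dev/null 2>&1 && [ -f "$KEYTAB" ]; then
  kinit -kt "$KEYTAB" "$PRINCIPAL" && klist || echo "kinit failed; check krb5.conf and KDC reachability"
else
  echo "kinit or $KEYTAB not available on this host"
fi
```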
2. Modify the jaas files (via the Kafka configuration section in Ambari)
Once Kerberos is enabled in an Ambari environment, the required JVM parameters are added automatically, so only the jaas.conf files need to be modified.
Modify the kafka_jaas.conf file:
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="{{kafka_keytab_path}}"
    storeKey=true
    useTicketCache=false
    serviceName="{{kafka_bare_jaas_principal}}"
    principal="{{kafka_jaas_principal}}";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="{{kafka_keytab_path}}"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="{{kafka_jaas_principal}}";
};
Modify the kafka_client_jaas.conf file:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    client=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/clientkafka.service.keytab"
    principal="kafkauser@COM";
};
3. ACL authorization
Kafka uses an ACL authorization mechanism that grants individual users fine-grained permissions on topics, to keep messages secure. With Kerberos enabled, Ambari turns on Kafka ACL authorization by default.
Authorize the user on a topic:
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=dc-server07.com:2181,dc-server11.com:2181,dc-server12.com:2181 --add --allow-principal User:kafkauser --operation All --topic test
Producer ACL authorization:
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=dc-server07.com:2181,dc-server11.com:2181,dc-server12.com:2181 --allow-principal User:kafkauser --producer --topic=* --add
Consumer ACL authorization:
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=dc-server07.com:2181,dc-server11.com:2181,dc-server12.com:2181 --allow-principal User:kafkauser --consumer --topic=* --group=* --add
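To confirm the grants took effect, the same tool can list the ACLs on a topic. A sketch, assuming the same ZooKeeper quorum and that it is run from the Kafka installation directory on a broker host:

```shell
ZK_CONNECT="dc-server07.com:2181,dc-server11.com:2181,dc-server12.com:2181"
# List ACLs for the test topic; guarded so the sketch degrades gracefully
# when not run from a Kafka installation directory.
if [ -x bin/kafka-acls.sh ]; then
  bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer \
    --authorizer-properties zookeeper.connect=$ZK_CONNECT --list --topic test
else
  echo "run from the Kafka install directory on a broker host"
fi
```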
4. Testing Kafka from the command line
With Kerberos enabled under Ambari, the console producer and consumer invocations differ from a plain environment: you must append --security-protocol PLAINTEXTSASL at the end.
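The test topic must exist before running the console commands; it can be created as follows (a sketch: the replication factor and partition count are illustrative):

```shell
ZK_CONNECT="dc-server07.com:2181,dc-server11.com:2181,dc-server12.com:2181"
TOPIC=test
# Create the topic used by the console commands; guarded so the sketch
# is harmless off-cluster.
if [ -x bin/kafka-topics.sh ]; then
  bin/kafka-topics.sh --create --zookeeper $ZK_CONNECT \
    --replication-factor 1 --partitions 1 --topic $TOPIC
else
  echo "run from the Kafka install directory on a broker host"
fi
```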
Console producer under Ambari:
bin/kafka-console-producer.sh --broker-list dc-server11:6667,dc-server07:6667,dc-server05:6667 --topic test --security-protocol PLAINTEXTSASL
Console consumer under Ambari:
bin/kafka-console-consumer.sh --bootstrap-server dc-server11:6667,dc-server07:6667,dc-server05:6667 --topic test --security-protocol PLAINTEXTSASL
5. Java client example
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Arrays;
import java.util.Properties;

public class KafkaConsumerApp {
    public static void main(String[] args) {
        // Point the JVM at the Kerberos client config and the JAAS login config.
        System.setProperty("java.security.krb5.conf", "d:/krb5.conf");
        System.setProperty("java.security.auth.login.config", "d:/kafka_client_jaas.conf");

        Properties properties = new Properties();
        properties.put("bootstrap.servers", KafkaProperties.BOOTSTART_SERVERS);
        properties.put("group.id", KafkaProperties.GROUP_ID);
        properties.put("key.deserializer", StringDeserializer.class.getName());
        properties.put("value.deserializer", StringDeserializer.class.getName());
        // Authenticate to the brokers over SASL (Kerberos/GSSAPI).
        properties.put("security.protocol", "SASL_PLAINTEXT");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Arrays.asList("test_hello"));
        for (;;) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("res:" + record.value());
            }
        }
    }
}
public class KafkaProperties {
    public static final String TOPIC = "test_ogg";
    public static final String GROUP_ID = "test_group";
    public static final String BOOTSTART_SERVERS = "10.121.8.11:6667,10.121.8.5:6667,10.121.8.7:6667";
}
II. Integrating OGG-to-Kafka data delivery with Kerberos
1. Create the file kafka_client_jaas.conf
Create a new file named kafka_client_jaas.conf in the dirprm/ directory under the OGG root, with the following contents:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    client=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/clientkafka.service.keytab"
    principal="kafkauser@COM";
};
2. Edit the file kafka.props
Edit kafka.props in the dirprm/ directory.
Append the following to the end of the javawriter.bootoptions line:
-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/ogg/dirprm/kafka_client_jaas.conf
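The resulting javawriter.bootoptions line in kafka.props then looks roughly as follows; the memory settings and the ggjava classpath are the usual OGG defaults shown for context, and only the two -D flags are added by this step:

```properties
javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/ogg/dirprm/kafka_client_jaas.conf
```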
3. Edit the file custom_kafka_producer.properties
Edit custom_kafka_producer.properties in the dirprm/ directory and append the following at the end:
security.protocol=PLAINTEXTSASL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
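A complete custom_kafka_producer.properties might then look like the sketch below; bootstrap.servers reuses the broker list from the console examples earlier, the acks and compression values are illustrative defaults, and only the last three lines are required by this step:

```properties
bootstrap.servers=dc-server11:6667,dc-server07:6667,dc-server05:6667
acks=1
compression.type=gzip
# Kerberos settings appended in this step
security.protocol=PLAINTEXTSASL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
```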