Connecting a Spring Boot Project to a Kerberos-Authenticated Kafka

Before connecting to a Kerberos-authenticated Kafka cluster, it helps to understand the Kerberos protocol itself.

1. What is the Kerberos protocol?

Kerberos is a computer network authentication protocol. Its design goal is to use a system of secret keys to provide strong identity verification for client/server applications communicating over a network, guaranteeing that both parties are who they claim to be. Unlike many network services, where a client simply contacts the service it wants, establishes a connection, and begins encrypted communication, Kerberos requires a series of authentication steps after the service request is initiated, including mutual authentication of client and server. Only once each side has verified the other's identity can the connection be established and communication proceed. In other words, Kerberos focuses on authenticating both ends of a conversation: the client must confirm that the service it is about to use is the genuine service rather than a forged server, and the server must confirm that the client is a genuine, trustworthy user rather than someone mounting a malicious attack.

2. Kerberos protocol roles
The Kerberos protocol involves three roles:

Client: the party that sends requests
Server: the party that receives requests
Key Distribution Center (KDC): the trusted third party that issues tickets

In outline, the client first authenticates to the KDC's Authentication Service to obtain a ticket-granting ticket (TGT), exchanges the TGT at the Ticket Granting Service for a service ticket, and finally presents that service ticket to the target server, which verifies the client in turn.

Step 1: Prepare three files

(user.keytab, krb5.conf, jaas.conf)

user.keytab and krb5.conf are the two authentication files and must be supplied by the cluster's operator: whoever runs the Kafka you are connecting to provides them. You can sanity-check them from the command line, as shown below.
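A quick verification sketch (assuming the standard MIT Kerberos client tools are installed; the principal user@HADOOP.COM and the /home/yxxt/ paths are placeholders for whatever your vendor actually issued):

export KRB5_CONFIG=/home/yxxt/krb5.conf           # point the Kerberos tools at the vendor's krb5.conf
kinit -kt /home/yxxt/user.keytab user@HADOOP.COM  # obtain a ticket using the keytab
klist                                             # the ticket cache should now show your principal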

The jaas.conf file you create locally yourself, as sketched below.
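A minimal jaas.conf sketch (the keytab path and the principal are placeholders; use the location where you put user.keytab and the principal name your vendor gave you):

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/yxxt/user.keytab"
    principal="user@HADOOP.COM"
    useTicketCache=false
    storeKey=true
    debug=true;
};

The Kafka connection settings themselves go into the Spring Boot application.yml; the broker addresses and domain name must be replaced with your own: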

debug: true

fusioninsight:
  kafka:
    bootstrap-servers: 10.80.10.3:21007,10.80.10.181:21007,10.80.10.52:21007
    security:
      protocol: SASL_PLAINTEXT
    kerberos:
      domain:
        name: hadoop.798687_97_4a2b_9510_00359f31c5ec.com
    sasl:
      kerberos:
        service:
          name: kafka

Here kerberos.domain.name (hadoop.798687_97_4a2b_9510_00359f31c5ec.com in the example above) must be replaced with the domain name provided for your site.

Step 2: Once the files are ready, you can put the three configuration files inside your own project or in some directory on the server; all that matters is that the application can read them at startup. A quick check is sketched below.
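A minimal fail-fast sketch (assuming the files sit in /home/yxxt/, the same directory the startup class below uses; adjust the path to wherever you placed them):

import java.io.File;

public class KerberosFileCheck {
    public static void main(String[] args) {
        File dir = new File("/home/yxxt/");
        // Fail fast if any of the three Kerberos/JAAS files is missing or unreadable
        for (String name : new String[] {"user.keytab", "krb5.conf", "jaas.conf"}) {
            File file = new File(dir, name);
            if (!file.canRead()) {
                throw new IllegalStateException("Cannot read Kerberos config file: " + file.getAbsolutePath());
            }
        }
        System.out.println("All three Kerberos configuration files are readable.");
    }
}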

My directory structure: [screenshot omitted]

pom dependencies:

I use Huawei Cloud's build of the Kafka client:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>kafka-sample-01</artifactId>
	<version>2.3.1.RELEASE</version>
	<packaging>jar</packaging>

	<name>kafka-sample-01</name>
	<description>Kafka Sample 1</description>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.2.0.RELEASE</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
		<java.version>1.8</java.version>
	</properties>

	<dependencies>

		<dependency>
			<groupId>org.springframework.kafka</groupId>
			<artifactId>spring-kafka</artifactId>
			<exclusions>
				<exclusion>
					<groupId>org.apache.kafka</groupId>
					<artifactId>kafka-clients</artifactId>
				</exclusion>
			</exclusions>
		</dependency>

		<dependency>
			<groupId>org.apache.kafka</groupId>
			<artifactId>kafka-clients</artifactId>
			<version>2.4.0-hw-ei-302002</version>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>



		<!-- Huawei Kafka component (alternative: load the client from a local jar via system scope)  start -->
<!--		<dependency>-->
<!--			<groupId>com.huawei</groupId>-->
<!--			<artifactId>kafka-clients</artifactId>-->
<!--			<version>2.4.0</version>-->
<!--			<scope>system</scope>-->
<!--			<systemPath>${project.basedir}/lib/kafka-clients-2.4.0-hw-ei-302002.jar</systemPath>-->
<!--		</dependency>-->
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

	<repositories>

		<repository>
			<id>huaweicloudsdk</id>
			<url>https://mirrors.huaweicloud.com/repository/maven/huaweicloudsdk/</url>
			<releases><enabled>true</enabled></releases>
			<snapshots><enabled>true</enabled></snapshots>
		</repository>

		<repository>
			<id>central</id>
			<name>Maven Central</name>
			<url>https://repo1.maven.org/maven2/</url>
		</repository>

	</repositories>
</project>

Next, the main class of the Spring Boot application:

package com.example;

import com.common.Foo1;


import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.kafka.support.converter.RecordMessageConverter;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;
import org.springframework.util.backoff.FixedBackOff;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

/**
 * Application entry point: registers the Kerberos system properties and defines the Kafka beans.
 */
@SpringBootApplication
public class Application {

    private final Logger logger = LoggerFactory.getLogger(Application.class);

    @Value("${fusioninsight.kafka.bootstrap-servers}")
    public String boostrapServers;

	@Value("${fusioninsight.kafka.security.protocol}")
	public String securityProtocol;

	@Value("${fusioninsight.kafka.kerberos.domain.name}")
	public String kerberosDomainName;

	@Value("${fusioninsight.kafka.sasl.kerberos.service.name}")
	public String kerberosServiceName;

    public static void main(String[] args) {
        // The JAAS and krb5 paths must be registered as JVM system properties
        // before the first Kafka client is created.
//        String filePath = System.getProperty("user.dir") + File.separator + "src" + File.separator + "main"
//        String filePath = "D:\\Java\\workspace\\20231123MOSPT4eB\\sample-01\\src\\main\\resources\\";
        String filePath = "/home/yxxt/";
        System.setProperty("java.security.auth.login.config", filePath + "jaas.conf");
        System.setProperty("java.security.krb5.conf", filePath + "krb5.conf");
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory, KafkaTemplate<String, String> template) {
        System.out.println(bootstrapServers);
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory
            = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        factory.setErrorHandler(new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template),
            new FixedBackOff(0L, 2))); // dead-letter after 3 tries
        return factory;
    }

    @Bean
    public RecordMessageConverter converter() {
        return new StringJsonMessageConverter();
    }

    // Listener: consumes messages from the topic as soon as they arrive
    @KafkaListener(id = "fooGroup1", topics = "topic_ypgk")
    public void listen(ConsumerRecord<String, String> record) {
        System.out.println("Message received -----");
        logger.info("Received: message consumed successfully!");
        System.out.println(record);
//        if (record.value().startsWith("fail")) {
//            // Triggers the SeekToCurrentErrorHandler configured above, which
//            // publishes the failed record to the "<topic>.DLT" dead-letter topic
//            throw new RuntimeException("failed");
//        }
    }

    // Create a topic with a given partition count and replication factor
//    @Bean
//    public NewTopic topic() {
//        return new NewTopic("topic1", 1, (short) 1);
//    }

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configs.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, securityProtocol);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        // "kerberos.domain.name" is specific to Huawei's kafka-clients build
        configs.put("kerberos.domain.name", kerberosDomainName);
        return new KafkaAdmin(configs);
    }

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory() {
        Map<String, Object> configs = new HashMap<>();
        configs.put("security.protocol", securityProtocol);
        configs.put("kerberos.domain.name", kerberosDomainName);
        configs.put("bootstrap.servers", boostrapServers);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        configs.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new DefaultKafkaConsumerFactory<>(configs);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        Map<String, Object> configs = new HashMap<>();
        configs.put("security.protocol", securityProtocol);
        configs.put("kerberos.domain.name", kerberosDomainName);
        configs.put("bootstrap.servers", boostrapServers);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        configs.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        ProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(configs);
        return new KafkaTemplate<>(producerFactory);
    }
}


Producer: sends a message to the topic in response to an HTTP request.

package com.example;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

import com.common.Foo1;

/**
 * @author haosuwei
 *
 */
@RestController
public class Controller {

	@Autowired
	private KafkaTemplate<String, String> template;

	@PostMapping(path = "/send/foo/{what}")
	public void sendFoo(@PathVariable String what) {
		Foo1 foo1 = new Foo1(what);
		this.template.send("topic1", foo1.toString());
	}

}
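The Foo1 class lives in the com.common package, which is not shown in this post. A minimal sketch of what it presumably looks like (a single-field POJO; the field name foo is an assumption based on the commented-out foo.getFoo() call in the listener):

package com.common;

public class Foo1 {

    private String foo;

    public Foo1(String foo) {
        this.foo = foo;
    }

    public String getFoo() {
        return foo;
    }

    public void setFoo(String foo) {
        this.foo = foo;
    }

    @Override
    public String toString() {
        return foo;
    }
}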

Once the application is running, send a request to the controller and the listener will pick up the message, as in the example below.
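For example, with the default server port 8080 (an assumption, since no port is configured above):

curl -X POST http://localhost:8080/send/foo/hello

Note that the controller sends to topic1 while the listener subscribes to topic_ypgk; point both at the same topic to see the full round trip.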
