Spring Boot Integration with Kafka 2.5 (Standalone Mode)

OS: CentOS 6.5
Kafka: 2.12-2.5.0
ZooKeeper: the ZooKeeper bundled with recent Kafka releases

First, download Kafka. The link shown with a line under it on the download page is the source release, which still has to be compiled; download the pre-built binary archive below it instead.

下载地址:http://kafka.apache.org/downloads

Extract: cd /usr/local/kafka && tar -zxvf kafka_2.12-2.5.0.tgz

Edit the config file: vi /usr/local/kafka/kafka_2.12-2.5.0/config/server.properties

Since this is a standalone setup rather than a cluster, broker.id=0 is left at its default; in a cluster, every broker needs a unique id.

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# a custom directory of your choice
log.dirs=/usr/local/kafka/log/kafka




############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# your server's IP
zookeeper.connect=47.94.99.1:2181

Edit zookeeper.properties:

# the directory where the snapshot is stored.
# a custom directory of your choice
dataDir=/usr/local/kafka/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
# changed from the shipped value of 0 (0 = no per-IP limit)
maxClientCnxns=100
# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
admin.enableServer=false
# admin.serverPort=8080

The following lines are newly added:

# a custom directory of your choice
dataLogDir=/usr/local/kafka/log/zookeeper
tickTime=2000
initLimit=10
syncLimit=5

If startup fails with "failed; error='Cannot allocate memory' (errno=12)", the JVM running Kafka could not allocate enough memory for its configured heap. Shrink the heap settings in kafka-server-start.sh under the bin directory:

sudo vi kafka-server-start.sh
# find the line:  export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
# change it to:   export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"

zookeeper-server-start.sh needs the same change.

Open port 9092, or disable the firewall entirely.

netstat -ntlp    # list all listening TCP ports


Switch to Kafka's bin directory:

[root@iZ2ze8zi7ua634j1oqu1rtZ bin]#

Start ZooKeeper:
./zookeeper-server-start.sh /usr/local/kafka/kafka_2.12-2.5.0/config/zookeeper.properties &

Start Kafka:
./kafka-server-start.sh /usr/local/kafka/kafka_2.12-2.5.0/config/server.properties &

Create a topic named hello (the --zookeeper flag still works in Kafka 2.5 but is deprecated; --bootstrap-server 47.94.99.1:9092 is the modern form):
./kafka-topics.sh --create --zookeeper 47.94.99.1:2181 --replication-factor 1 --partitions 1 --topic hello

Produce messages:
./kafka-console-producer.sh --broker-list 47.94.99.1:9092 --topic hello

Consume messages (open a new shell window):
./kafka-console-consumer.sh --bootstrap-server 47.94.99.1:9092 --topic hello --from-beginning
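
Topics can also be created from Java with the Kafka AdminClient rather than the shell script; a minimal sketch, not part of the original walkthrough, using the broker address from above:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "47.94.99.1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // one partition, replication factor 1, same as the console command above
            NewTopic topic = new NewTopic("hello", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}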

Spring Boot integration with Kafka

pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.3.1.RELEASE</version> <!-- a Boot 2.3.x parent manages spring-kafka 2.5.x / kafka-clients 2.5.x, matching the Kafka 2.5 broker -->
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.kafka</groupId>
	<artifactId>kafka-demo</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>kafka-demo</name>
	<description>Demo project for Spring Boot</description>

	<properties>
		<java.version>1.8</java.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
			<exclusions>
				<exclusion>
					<groupId>org.junit.vintage</groupId>
					<artifactId>junit-vintage-engine</artifactId>
				</exclusion>
			</exclusions>
		</dependency>

		<dependency>
			<groupId>com.google.code.gson</groupId>
			<artifactId>gson</artifactId>
			<version>2.8.2</version>
		</dependency>

		<!-- fastjson is needed by the producer/consumer code below -->
		<dependency>
			<groupId>com.alibaba</groupId>
			<artifactId>fastjson</artifactId>
			<version>1.2.70</version>
		</dependency>

		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<optional>true</optional>
		</dependency>
		<!-- kafka -->
		<dependency>
			<groupId>org.apache.kafka</groupId>
			<artifactId>kafka-streams</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.kafka</groupId>
			<artifactId>spring-kafka</artifactId>
		</dependency>

	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

application.properties:

server.port=8088

#spring.datasource.jdbc-url=jdbc:mysql://localhost:3306/kafka_demo?useUnicode=true&characterEncoding=utf-8&useSSL=false
#spring.datasource.username=root
#spring.datasource.password=root
#spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#first.datasource.type=com.alibaba.druid.pool.DruidDataSource


#============== kafka ===================
# Kafka broker address(es); multiple may be listed, comma separated
spring.kafka.bootstrap-servers=your-server-ip:9092

#=======================  producer  =======================
spring.kafka.producer.retries=0
# upper bound, in bytes, on each batch of messages sent
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
# the next three lines are only needed when SASL/PLAIN authentication is enabled
#spring.kafka.producer.properties.sasl.mechanism=PLAIN
#spring.kafka.producer.properties.security.protocol=SASL_PLAINTEXT
#spring.kafka.producer.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="xxx" password="xxx";
# serializers for message keys and values
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

#=======================  consumer  =======================
# default consumer group id
spring.kafka.consumer.group-id=test-hello-group
# topics to listen on (a custom property, read by the @KafkaListener below)
spring.topics=esbtest
#spring.topics=hello,topic1
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
# the next three lines are only needed when SASL/PLAIN authentication is enabled
#spring.kafka.consumer.properties.sasl.mechanism=PLAIN
#spring.kafka.consumer.properties.security.protocol=SASL_PLAINTEXT
#spring.kafka.consumer.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="xxx" password="xxx";
# deserializers for message keys and values
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
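
For completeness, the project also needs the usual Spring Boot entry class; a minimal sketch (the class name here is mine, not from the original project):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class KafkaDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(KafkaDemoApplication.class, args);
    }
}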

Producer code:

import com.alibaba.fastjson.JSONObject;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

/**
 * @author zk
 * @date 2020/6/24 10:33
 */
@Component
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static final Logger LOGGER = LogManager.getLogger(KafkaProducer.class);

    public void send(String topic, Object object) {
        String message = JSONObject.toJSONString(object);
        kafkaTemplate.send(topic, message);
        LOGGER.info("message sent ----- message = {}", message);
    }
}
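
A quick way to exercise the producer is a throwaway REST endpoint; a sketch (the controller and its mapping are my own additions, not from the original project):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaTestController {

    @Autowired
    private KafkaProducer kafkaProducer;

    // hit http://localhost:8088/send?msg=hi to publish to the esbtest topic
    @GetMapping("/send")
    public String send(@RequestParam String msg) {
        kafkaProducer.send("esbtest", msg);
        return "sent: " + msg;
    }
}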

Consumer code:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.util.Optional;

/**
 * @author zk
 * @date 2020/6/24 13:25
 */
@Component
public class KafkaConsumer {

    private static final Logger LOGGER = LogManager.getLogger(KafkaConsumer.class);

    // spring.topics is defined in application.properties; the SpEL expression
    // splits the comma-separated list so several topics can be listened to at once
    @KafkaListener(topics = "#{'${spring.topics}'.split(',')}")
    public void listen(ConsumerRecord<?, ?> record) {

        Optional<?> kafkaMessage = Optional.ofNullable(record.value());

        if (kafkaMessage.isPresent()) {
            String message = kafkaMessage.get().toString();
            LOGGER.info("message = " + message);

            // business logic: parse the message, write it to the database, etc.
        }
    }
}
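
The producer serializes objects to JSON with fastjson before sending, so the listener receives a JSON string; parsing it back is usually the first step of the business logic. A self-contained sketch (the OrderInfo class and its fields are hypothetical, purely for illustration):

import com.alibaba.fastjson.JSONObject;

public class ParseDemo {

    // hypothetical payload type; field names must match the JSON keys
    public static class OrderInfo {
        private String orderId;
        private int quantity;
        public String getOrderId() { return orderId; }
        public void setOrderId(String orderId) { this.orderId = orderId; }
        public int getQuantity() { return quantity; }
        public void setQuantity(int quantity) { this.quantity = quantity; }
    }

    public static void main(String[] args) {
        // in the listener above, `message` would arrive from Kafka instead
        String message = "{\"orderId\":\"A001\",\"quantity\":2}";
        OrderInfo order = JSONObject.parseObject(message, OrderInfo.class);
        System.out.println(order.getOrderId() + " x " + order.getQuantity());
    }
}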

A few problems I ran into in real-world use are worth noting:

1. The Spring Boot and Kafka versions must match, or all kinds of problems appear, such as NetworkClient: Bootstrap broker ip:9092 disconnected. (As a rule of thumb, spring-kafka 2.5.x pairs with Spring Boot 2.3.x and kafka-clients 2.5.x.)

2. Watch out for authorization problems, such as TopicAuthorizationException: Not authorized to access topics. I was integrating with another department and at first could not connect at all; it turned out they had enabled authentication, and they eventually gave me the configuration and credentials. Worse, they had secured both the producer and the consumer sides but only mentioned it later, which cost another round of fumbling.
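
For reference, the commented SASL/PLAIN properties shown earlier can also be supplied in code when building the producer factory. A minimal sketch (broker address and credentials are placeholders; a KafkaTemplate bean defined this way takes the place of Boot's auto-configured one):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class KafkaSaslConfig {

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "47.94.99.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // SASL/PLAIN over an unencrypted connection; PlainLoginModule pairs with the PLAIN mechanism
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"xxx\" password=\"xxx\";");
        return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    }
}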

I've recorded my own exploration here in the hope that it helps someone who needs it.
