Kafka Learning (7): Producer and Consumer Code

This article shows how to implement a Kafka producer and consumer in Java. After creating a Spring Boot project and adding the Kafka dependency, it walks through sending a message with example code and stresses the importance of matching client and broker versions. It also covers message serialization, topic creation, starting a consumer, and the ordering issues caused by message partitioning. Finally, it mentions the risk of losing data on the consumer side and gives consumer code examples.

First, open the Kafka API documentation page, which contains detailed examples:

http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer

For a producer to send a message, three steps are involved: first obtain a connection, then send the message, and finally close the connection.

Below we build an example in which a producer sends a message using Java code.

Create a Spring Boot project and add the Kafka client dependency:

		<dependency>
			<groupId>org.apache.kafka</groupId>
			<artifactId>kafka-clients</artifactId>
			<version>2.2.0</version>
		</dependency>

Note that the version of the client jar should match the version of the Kafka cluster you have deployed; otherwise you can run into problems that are hard to troubleshoot.

Here is a simple sending example:

package com.example.producer;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Test {
	public static void main(String[] args) {
		Properties props = new Properties();
		// Connect to the Kafka cluster
		props.setProperty("bootstrap.servers", "master:9092,slave01:9092,slave02:9092");
		KafkaProducer<String, String> producer = new KafkaProducer<>(props);
		// Send a message
		producer.send(new ProducerRecord<String, String>("mytopic2", "hello"));
		// Close the producer
		producer.close();
	}
}

Running it produces an error:

Exception in thread "main" org.apache.kafka.common.config.ConfigException: Missing required configuration "key.serializer" which has no default value.
	at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
	at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
	at org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:396)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:326)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
	at com.example.producer.Test.main(Test.java:13)

The error occurs because data passed between systems is normally serialized, and no key/value serializers were configured. Add the following configuration:

props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Running it again, it still fails:

00:42:42.927 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [master:9092, slave01:9092, slave02:9092]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

00:42:43.048 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
00:42:43.056 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
00:42:45.617 [main] WARN org.apache.kafka.clients.ClientUtils - Couldn't resolve server master:9092 from bootstrap.servers as DNS resolution failed for master
00:42:48.170 [main] WARN org.apache.kafka.clients.ClientUtils - Couldn't resolve server slave01:9092 from bootstrap.servers as DNS resolution failed for slave01
00:42:50.721 [main] WARN org.apache.kafka.clients.ClientUtils - Couldn't resolve server slave02:9092 from bootstrap.servers as DNS resolution failed for slave02
00:42:50.723 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
00:42:50.726 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Kafka producer has been closed
Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to construct kafka producer
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:430)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
	at com.example.producer.Test.main(Test.java:16)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:90)
	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:49)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:408)
	... 2 more

The reason is simple: the message failed to be sent. The log above shows that the client could not resolve the broker hostnames master, slave01 and slave02, so those names must be resolvable from the machine running the producer (for example, via entries in the local hosts file). In my environment the send also failed because the topic mytopic2 had not been created yet.
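
One way to create the missing topic from Java is the AdminClient that ships with kafka-clients 2.x; the topic can equally be created with the kafka-topics.sh script on a broker. The sketch below is only an illustration: the class name CreateTopic is my own, it assumes the same broker addresses as above, and the partition count and replication factor are arbitrary values to adjust for your cluster.

package com.example.producer;

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
	public static void main(String[] args) throws Exception {
		Properties props = new Properties();
		props.setProperty("bootstrap.servers", "master:9092,slave01:9092,slave02:9092");
		try (AdminClient admin = AdminClient.create(props)) {
			// 3 partitions, replication factor 2 -- adjust to your cluster
			NewTopic topic = new NewTopic("mytopic2", 3, (short) 2);
			// createTopics() is asynchronous; all().get() waits for the result
			admin.createTopics(Collections.singleton(topic)).all().get();
			System.out.println("topic created: mytopic2");
		}
	}
}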
