References:
http://www.cnblogs.com/fxjwind/p/3803878.html and the official documentation. That blog has a lot of solid material; I learned a good deal from it.
The producer sends key-value data. Both Producer and KeyedMessage here use the generic type <String, String>, which refers to the types of the key and the value.
Straight to the code:
import kafka.javaapi.producer.Producer; // note: the official docs show kafka.producer.Producer, which fails to compile at the send() call; use this class instead
import kafka.producer.ProducerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.Properties;

@Configuration
public class KafkaProducerConfig {
    @Bean
    public Producer<String, String> getProducer() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // local broker (the address Kafka listens on)
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        //props.put("partitioner.class", "example.producer.SimplePartitioner"); // optional; if set, you must provide the implementation
        props.put("request.required.acks", "1");
        ProducerConfig config = new ProducerConfig(props);
        return new Producer<String, String>(config);
    }
}
Then call the producer from a controller to send a message:
@Autowired
private Producer<String, String> producer; // injected from the @Bean above

@RequestMapping(value = "/kafkaTest", method = RequestMethod.GET)
public String test4() {
    // specify topic, key, and value
    KeyedMessage<String, String> data = new KeyedMessage<String, String>("test", "hello", "I am producer");
    producer.send(data);
    return "OK";
}
Meanwhile, use the consumer interface introduced in the previous post to process the message:
You can see that the consumer received the message.
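The previous post's consumer code is not reproduced here; as a rough reminder, a minimal consumer using the same old high-level consumer API might look like the following sketch (the ZooKeeper address and group.id are assumptions, and a running Kafka/ZooKeeper setup is required):

```java
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumed local ZooKeeper
        props.put("group.id", "test-group");              // assumed consumer group name
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // request one stream for the "test" topic
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("test", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("test").get(0).iterator();
        while (it.hasNext()) {
            // blocks until a message arrives, then prints its value
            System.out.println(new String(it.next().message()));
        }
    }
}
```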
Also, on how Kafka compares with other push-based message queues, I think the analysis in this post makes some good points and is worth a read, so I won't repeat it here:
http://www.cnblogs.com/fxjwind/archive/2013/03/19/2969655.html
Also, regarding the old and new APIs, there is an explanation on Stack Overflow:
Kafka, starting with 0.8.2.x, exposed a new set of APIs. The older one is Producer, which works with KeyedMessage[K,V]; the new API is KafkaProducer, which works with ProducerRecord[K,V]. Quoting the documentation:
As of the 0.8.2 release we encourage all new development to use the new Java producer.
This client is production tested and generally both faster and more fully featured than the previous Scala client.
You should preferably use the new, supported version.
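For comparison, the same send as in the controller above might look like this with the new Java producer (a sketch, assuming the 0.8.2+ kafka-clients jar is on the classpath; topic, key, and value mirror the earlier example):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class NewApiProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // the new producer uses bootstrap.servers instead of metadata.broker.list
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "1"); // replaces request.required.acks

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // ProducerRecord replaces KeyedMessage: topic, key, value
        producer.send(new ProducerRecord<String, String>("test", "hello", "I am producer"));
        producer.close();
    }
}
```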
Which properties should I configure, and what should I take into account, to achieve optimal write-heavy performance for a high-scale application?
This is a very broad question, which depends a lot on the architecture of your software.
It varies with scale, number of producers, number of consumers, etc. There are many things to take into account.
I would suggest going through the documentation and reading the sections on Kafka's architecture and
design to get a better picture of how it works internally.
Generally speaking, from my experience you'll need to balance the replication
factor of your data, along with retention times and the number of partitions each queue goes into.
If you have more specific questions down the road, you should definitely post them.
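The three knobs that answer mentions (replication factor, retention, partitions) are all set at the topic level. As a hedged sketch of where they live, assuming the scripts shipped with a 0.8.x broker and a local ZooKeeper (all numbers are placeholders to tune, not recommendations):

```shell
# partitions and replication factor are chosen when the topic is created
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --topic test --partitions 3 --replication-factor 2

# retention can be overridden per topic, e.g. keep messages for 7 days
bin/kafka-topics.sh --alter --zookeeper localhost:2181 \
    --topic test --config retention.ms=604800000
```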
"I would suggest going through the documentation and reading up the sections talking about Kafka's architecture"
I like to think my English is decent, but the official docs are really a tough read. I'll implement the new API a bit later.