Kafka producer code: writing messages to a topic
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Configuration
        Properties props = new Properties();
        // ZooKeeper server list
        props.put("zk.connect", "192.168.146.100:2181,192.168.146.101:2181,192.168.146.102:2181");
        // Broker list
        props.put("metadata.broker.list", "192.168.146.100:9092,192.168.146.101:9092,192.168.146.102:9092");
        // Message serialization: kafka.serializer.StringEncoder handles String data;
        // other types need a different encoder, or a custom one if Kafka does not ship it
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Wrap props into a producer configuration object
        ProducerConfig config = new ProducerConfig(props);
        // Both key and value are Strings
        Producer<String, String> producer = new Producer<String, String>(config);
        // Send 100 messages
        for (int i = 1; i <= 100; i++) {
            Thread.sleep(500);
            // Send a message; "test1" is an already-created topic
            KeyedMessage<String, String> message = new KeyedMessage<String, String>("test1",
                    "This is test1 Num:" + i);
            producer.send(message);
        }
    }
}
Error record: the Producer class must be imported as import kafka.javaapi.producer.Producer. I had imported import kafka.producer.Producer; instead, which made the IDE flag send as an error, and even after applying its suggested fix the program still failed at runtime:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.ClassCastException: kafka.producer.KeyedMessage cannot be cast to scala.collection.Seq
at ProducerDemo.main(ProducerDemo.java:26)
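The root cause: kafka.producer.Producer is the Scala API, whose send expects a scala.collection.Seq of messages (as the stack trace shows), while kafka.javaapi.producer.Producer.send takes a single KeyedMessage. The resulting ClassCastException is the classic raw-type / unchecked-call failure in Java, where a compiler-inserted cast blows up at runtime. A minimal stdlib sketch of that mechanism (class and method names here are illustrative, not Kafka code):

```java
import java.util.ArrayList;
import java.util.List;

public class RawTypeCast {
    // Returns true if reading the smuggled element throws ClassCastException.
    static boolean causesCce() {
        List<String> strings = new ArrayList<String>();
        List raw = strings;   // raw view: the compiler only warns here
        raw.add(42);          // an Integer sneaks into a List<String>
        try {
            String s = strings.get(0); // compiler-inserted (String) cast fails at runtime
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("ClassCastException thrown: " + causesCce());
    }
}
```

In both cases the code compiles (with at most a warning) but the type mismatch only surfaces when the cast actually executes.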
Viewing the messages in real time:
Kafka consumer code:
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "192.168.146.100:2181,192.168.146.101:2181,192.168.146.102:2181");
        // Consumer group id; here everything is in group "1" (grouping is optional)
        props.put("group.id", "1");
        // Where to start reading: "smallest" starts from the beginning of the log
        props.put("auto.offset.reset", "smallest");
        // Wrap props into a consumer configuration object
        ConsumerConfig config = new ConsumerConfig(props);
        // Obtain a consumer client
        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(config);
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        // Key is the topic name, value is the number of consumer threads
        topicCountMap.put("test1", 2);
        // Multiple topics can be registered, e.g.:
        // topicCountMap.put("test2", 1);
        // Get the message streams; createMessageStreams takes the Map built above
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        // Pull the streams for topic "test1" out of the map. The element type is
        // KafkaStream<byte[], byte[]> (two byte[] parameters) because each
        // KafkaStream carries both the message payload and its metadata,
        // e.g. bookkeeping data about the message
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get("test1");
        for (final KafkaStream<byte[], byte[]> kafkaStream : streams) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    // Iterate over the kafkaStream object
                    for (MessageAndMetadata<byte[], byte[]> mm : kafkaStream) {
                        // The payload arrives as serialized bytes and must be
                        // converted back to a String
                        String msg = new String(mm.message());
                        System.out.println(msg);
                    }
                }
            }).start();
            /* An alternative way to write the Runnable:
            Runnable runnable = new Runnable() {
                @Override
                public void run() {
                    for (MessageAndMetadata<byte[], byte[]> mm : kafkaStream) {
                        String msg = new String(mm.message());
                        System.out.println(msg);
                    }
                }
            };
            Thread thread = new Thread(runnable);
            thread.start();
            */
        }
    }
}
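One detail worth noting: new String(mm.message()) in the loop above decodes the bytes with the platform default charset, which can mangle non-ASCII messages. A minimal stdlib sketch with an explicit UTF-8 charset (the byte array simply stands in for a Kafka message payload):

```java
import java.nio.charset.StandardCharsets;

public class DecodeDemo {
    // Decode a raw message payload explicitly as UTF-8
    // rather than relying on the platform default charset.
    static String decode(byte[] payload) {
        return new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] payload = "This is test1 Num:1".getBytes(StandardCharsets.UTF_8);
        System.out.println(decode(payload)); // prints the original text
    }
}
```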
Output: as the producer sends messages, the consumer displays them in real time.
Runnable is an interface, so the new Runnable() { ... } syntax above is not instantiating the interface itself. It looks like we are directly "new-ing" an interface, but it is actually an anonymous inner class. It is equivalent to the following, which instantiates a named subclass of Runnable:

Thread t = new Thread(new MyRunnable());

public class MyRunnable implements Runnable {
    @Override
    public void run() {
        // actual work goes here
    }
}
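Since Java 8, the same anonymous Runnable can also be written as a lambda. A minimal sketch, with a plain List standing in for the KafkaStream so it runs without a broker:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LambdaRunnableDemo {
    // Runs the lambda on a worker thread and collects what it would print.
    static List<String> consume(List<String> messages) {
        final List<String> seen = new ArrayList<String>();
        // Lambda form of the anonymous Runnable used in the consumer above
        Runnable task = () -> {
            for (String msg : messages) {
                seen.add(msg);
                System.out.println(msg);
            }
        };
        Thread t = new Thread(task);
        t.start();
        try {
            t.join(); // wait for the worker so the result is complete
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen;
    }

    public static void main(String[] args) {
        consume(Arrays.asList("msg-1", "msg-2"));
    }
}
```

The lambda works here because Runnable has a single abstract method; the captured variables (messages, seen) must be effectively final, just as with the anonymous class.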