Hands-on with Kafka producer and consumer messages

I won't go into Kafka's internals here; this post goes straight to the hands-on steps.

First of all, Kafka depends on ZooKeeper.

Download a binary .tgz package from the Kafka website at http://kafka.apache.org/downloads.html. I downloaded kafka_2.10-0.9.0.0.tgz. Make sure you do not download the src (source) package.


Then extract it. The archive contains both the Kafka server and a bundled ZooKeeper server.


The directory layout is similar to Tomcat's: a few folders such as bin, config, libs, and so on.


Next, start the two services. ZooKeeper has to be running before the Kafka broker is started. On Windows the startup scripts live under bin\windows, and each one takes its configuration file as an argument: the Kafka broker takes server.properties, e.g. kafka-server-start.bat ..\..\config\server.properties.

ZooKeeper takes zookeeper.properties, which holds the client port, the data directory path, and so on.
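
For example, something like the following, run from the bin\windows folder of the extracted archive (the paths assume the default layout of the kafka_2.10-0.9.0.0 package; run each script in its own command window and leave both running):

cd kafka_2.10-0.9.0.0\bin\windows
zookeeper-server-start.bat ..\..\config\zookeeper.properties
kafka-server-start.bat ..\..\config\server.properties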


Next comes the client code.

Producer-side code:

package com.kafka;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

/**
 * Created by liuliqiang on 2016/3/1.
 */
public class ProducerTest {
    private final Producer<String, String> producer;
    public final static String TOPIC = "TEST-TOPIC";

    private ProducerTest() {
        Properties props = new Properties();
        // Address and port of the Kafka broker
        props.put("metadata.broker.list", "127.0.0.1:9092");

        // Serializer class for message values
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Serializer class for message keys
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // Send asynchronously and compress batches with GZIP (codec 1)
        props.put("producer.type", "async");
        props.put("compression.codec", "1");

        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    void produce() {
        int messageNo = 1000;
        final int COUNT = 10000;

        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = "hello kafka message " + key;
            producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
            System.out.println(data);
            messageNo++;
        }
    }

    public static void main(String[] args) {
        new ProducerTest().produce();
    }
}
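
The class above uses the old Scala producer API (kafka.javaapi.producer.Producer). The 0.9.0.0 release also bundles the newer Java producer client, so, purely as a sketch against the same broker address and topic (the class name NewProducerTest is only for illustration), the equivalent send loop looks roughly like this:

package com.kafka;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class NewProducerTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The new client takes the broker list under "bootstrap.servers"
        props.put("bootstrap.servers", "127.0.0.1:9092");
        // Built-in String serializers for the key and the value
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        for (int messageNo = 1000; messageNo < 10000; messageNo++) {
            String key = String.valueOf(messageNo);
            String data = "hello kafka message " + key;
            // send() is asynchronous and returns a Future<RecordMetadata>
            producer.send(new ProducerRecord<String, String>(ProducerTest.TOPIC, key, data));
            System.out.println(data);
        }
        // Flush any buffered records and release the client's resources
        producer.close();
    }
}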


Consumer-side code:

package com.kafka;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

import java.util.*;

/**
 * Created by liuliqiang on 2016/3/1.
 */
public class ConsumerTest {

    private final ConsumerConnector consumer;

    private ConsumerTest() {
        Properties props = new Properties();
        // ZooKeeper connection string (the old high-level consumer fetches its metadata from ZooKeeper)
        props.put("zookeeper.connect", "127.0.0.1:2181");

        // Consumer group id; consumers sharing a group id divide the topic's partitions among themselves
        props.put("group.id", "jd-group");

        // ZooKeeper session timeout and sync settings
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        // Commit consumed offsets automatically once a second
        props.put("auto.commit.interval.ms", "1000");
        // Start from the earliest offset when the group has no committed offset yet
        props.put("auto.offset.reset", "smallest");

        ConsumerConfig config = new ConsumerConfig(props);

        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    void consume() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(ProducerTest.TOPIC, new Integer(1));

        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());

        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap,keyDecoder,valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get(ProducerTest.TOPIC).get(0);
        ConsumerIterator<String, String> it = stream.iterator();
        while (it.hasNext()){
            System.out.println(it.next().message());
        }

    }

    public static void main(String[] args) {
        new ConsumerTest().consume();
    }
}
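
The class above uses the old ZooKeeper-based high-level consumer. Kafka 0.9.0.0 also ships a new Java consumer client that talks to the brokers directly (it was still marked beta in 0.9). Purely as a sketch of the same read loop with that API, under the same topic and group (the class name NewConsumerTest is made up for this example):

package com.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class NewConsumerTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The new consumer connects to the brokers, not to ZooKeeper
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("group.id", "jd-group");
        // Built-in String deserializers for the key and the value
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Equivalent of "smallest" in the old consumer
        props.put("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList(ProducerTest.TOPIC));
        while (true) {
            // Block for up to one second waiting for new records
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}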


Finally, start the consumer first (run ConsumerTest's main), then start the producer (run ProducerTest's main). The "hello kafka message ..." lines printed by the producer should then also appear on the consumer's console.
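
By default the broker has automatic topic creation enabled, so TEST-TOPIC is created on the first send; if that has been disabled in server.properties, the topic can be created up front with the bundled script (again from bin\windows), roughly like this:

kafka-topics.bat --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic TEST-TOPIC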
