Setting up a local Kafka test environment

Kafka (Part 2): Environment Setup & Testing

Requirements

The public-cloud Kafka cluster is only open to test machines (阡陌 machines and the like) and cannot be reached from a local machine, so for development convenience it is worth setting up a local Kafka test environment.

Software

  • kafka_2.11-0.10.0.1

Steps

Create the appropriate configuration files for the development environment, then start each component.

Start the local ZooKeeper

nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
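
Before starting the brokers, you can confirm ZooKeeper is up with its four-letter ruok command (this assumes the default clientPort of 2181 from zookeeper.properties); a healthy server answers imok:

echo ruok | nc localhost 2181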

Start the broker nodes

JMX_PORT=9997 bin/kafka-server-start.sh config/server-1.properties &
JMX_PORT=9998 bin/kafka-server-start.sh config/server-2.properties &
JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties &
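
The JMX_PORT variable only gives each broker a distinct JMX monitoring port; the brokers themselves are told apart by their properties files. Those files are not shown here, but as a minimal sketch (the broker.id values and the 9092-9094 listener ports are assumptions inferred from the --broker-list used below), server-1.properties would differ from the stock server.properties roughly like this:

# server-1.properties -- assumed contents, adjust to your layout
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1

server-2.properties follows the same pattern with broker.id=2, port 9094, and its own log directory.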

Create the topic (skip this step if it already exists)

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic doctorq
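
To verify that the partition and its three replicas were assigned across the brokers, describe the topic:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic doctorq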

List the current topics

bin/kafka-topics.sh --list --zookeeper localhost:2181

Start a console producer

bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --topic doctorq

Start a console consumer

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic doctorq
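
Each line typed into the producer terminal is published as one message and should appear in the consumer. To replay the topic from the beginning instead of only new messages, add the --from-beginning flag:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic doctorq --from-beginning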

Demo

Simulating log messages sent to Kafka

Because the data huatuo receives is not a simple String but a complex binary format, we need to be able to generate such data locally.

The serialization format (an Avro schema)

{
    "type": "record",
    "name": "Event",
    "namespace": "com.iwaimai.huatuo.log",
    "fields": [{
        "name": "headers",
        "type": {
            "type": "map",
            "values": "string"
        }
    }, {
        "name": "body",
        "type": "string"
    }]
}
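
A record conforming to this schema carries a string-to-string map of metadata headers (timestamp, module, source file, and so on) plus the raw log line as the body; the test code below builds exactly such records.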

Code

package com.iwaimai.huatuo.kafka;

import junit.framework.TestCase;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumWriter;
import org.junit.Test;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

/**
 * Created by doctorq on 16/9/22.
 */
public class KafkaProducer extends TestCase {
    private byte[] serializedBytes;
    private GenericRecord payload;
    private DatumWriter<GenericRecord> writer;
    private BinaryEncoder encoder;
    private ByteArrayOutputStream out;
    private Producer<String, byte[]> producer;

    public void setUp() throws Exception {
        // Set the producer configuration properties
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092,localhost:9093,localhost:9094");
        props.put("serializer.class", "kafka.serializer.DefaultEncoder");
        // key.serializer.class defaults to serializer.class
        // Require acknowledgements; otherwise sends are fire-and-forget and
        // data may be lost. Valid values are 0, 1, -1; see
        // http://kafka.apache.org/08/configuration.html
        props.put("request.required.acks", "1");
        ProducerConfig config = new ProducerConfig(props);
        producer = new Producer<String, byte[]>(config);
        Schema schema = new Schema.Parser().parse(new File("src/test/resources/test-schema.avsc"));
        payload = new GenericData.Record(schema);
        writer = new SpecificDatumWriter<GenericRecord>(schema);
        out = new ByteArrayOutputStream();
        encoder = EncoderFactory.get().binaryEncoder(out, null);
    }

    /**
     * Simulate sending an Nginx log line
     * @throws Exception
     */
    @Test
    public void testNginxProducer() throws Exception {
        Map<String, String> headers = new LinkedHashMap<String, String>();
        headers.put("timestamp", "2016-9-22 16:8:26");
        headers.put("module", "NGINX");
        headers.put("filename", "access_log.2016092216");
        headers.put("topic", "pids-0000000038");
        headers.put("host", "10.194.216.46");
        headers.put("lineoffset", "33653");
        headers.put("inode", "46274724");
        payload.put("headers", headers);
        payload.put("body", "10.194.219.31 - - [22/Sep/2016:16:08:26 +0800] \"POST /marketing/getshopactivity HTTP/1.1\" 200 518 \"-\" \"-\" \"RAL/2.0.8.6 (internal request)\" 0.021 506573372 - 10.194.217.47 unix:/home/map/odp_cater/var/php-cgi.sock 10.194.217.47 \"-\" waimai waimai 5065733720802800138092216 1474531706.602 0.021 - 10.194.219.31 logid=506573370 spanid=0.8 force_sampling=- status=200 host=10.194.217.47 server_addr=10.194.217.47 server_port=8086 client_addr=10.194.219.31 request=\"POST /marketing/getshopactivity HTTP/1.1\" msec=1474531706.602 request_time=0.021 content_tracing=-");
        System.out.println("Original Message : "+ payload);
    }

    /**
     * Simulate sending a RAL log line
     * @throws Exception
     */
    @Test
    public void testRalProducer() throws Exception {
        Map<String, String> headers = new LinkedHashMap<String, String>();
        headers.put("timestamp", "2016-9-23 10:53:14");
        headers.put("module", "RAL");
        headers.put("filename", "ral-worker.log.2016092310");
        headers.put("topic", "pids-0000000043");
        headers.put("host", "10.195.181.17");
        headers.put("lineoffset", "2557660");
        headers.put("inode", "144277667");

        payload.put("headers", headers);
        payload.put("body", "NOTICE: 09-23 10:53:14: ral-worker * 4153 [php_ral.cpp:1437][logid=3194356520 worker_id=10488 optime=1474599194.518921 product=waimai subsys=waimai module=marketing user_ip= local_ip=10.195.181.17 local_port=8086 msg=ral_write_log log_type=E_SUM caller=redis_c_marketing from=/home/map/odp_cater/php/phplib/wm/service/RedisBns.php:131 spanid=0.9.26 method=get conv=redis prot=redis retry=0%2F1 remote_ip=10.194.218.13%3A7490 idc=nj cost=0.181 talk=0.181 write=0 connect=0 read=0.181 req_start_time=1474599194.5186 err_no=0 err_info=OK req_data=a%3A1%3A%7Bi%3A0%3Bs%3A30%3A%22wm%3Amkt%3Abgt%3Afuse%3A201609%3A195%3A0%3A2%22%3B%7D]");
        System.out.println("Original Message : "+ payload);
    }

    /**
     * Runs after each test: serializes the populated record with Avro and sends it to the doctorq topic
     */
    public void tearDown() throws Exception {
        writer.write(payload, encoder);
        encoder.flush();
        out.close();
        serializedBytes = out.toByteArray();
        KeyedMessage<String, byte[]> message = new KeyedMessage<String, byte[]>("doctorq", serializedBytes);
        producer.send(message);
        producer.close();
    }

}
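
To check that the bytes round-trip, the following is a minimal decoding sketch, not part of the original project: it reuses the same test-schema.avsc and mirrors the writer above with Avro's GenericDatumReader (the class name AvroLogDecoder is made up for illustration).

package com.iwaimai.huatuo.kafka;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;

import java.io.File;

public class AvroLogDecoder {
    // Hypothetical helper: decode bytes produced by the test above back into a record
    public static GenericRecord decode(byte[] bytes) throws Exception {
        Schema schema = new Schema.Parser().parse(new File("src/test/resources/test-schema.avsc"));
        DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
        GenericRecord record = reader.read(null, decoder);
        System.out.println("Decoded headers: " + record.get("headers"));
        System.out.println("Decoded body: " + record.get("body"));
        return record;
    }
}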

Sending demo
