Kafka Practice: Partitioners, Interceptors, Consumers, Producers, and Kafka Streams

Simulating a consumer group

Pick two of the three machines and put them in the same consumer group.

  • Set the same group id on both machines (in config/consumer.properties)
group.id=nove
  • Start a producer
bin/kafka-console-producer.sh \
--broker-list hadoop101:9092 --topic second
  • Start two consumers, each specifying the config file
bin/kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --topic second --from-beginning --consumer.config config/consumer.properties
  • Result: each message from the producer is received by only one of the two consumers

Producer API

The send method can be used either without a callback or with a callback.

  • With a callback
    Look at the source code and model the usage on it.
package com.uu.kafkaProject;

import org.apache.kafka.clients.producer.*;
import java.util.Properties;

/**
 * Created by IBM on 2020/3/11.
 */
public class KafkaProducter {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Kafka broker host and port
        props.put("bootstrap.servers", "hadoop103:9092");
        // wait for acknowledgement from all replicas
        props.put("acks", "all");
        // maximum number of retries for a failed send
        props.put("retries", 0);
        // batch size in bytes
        props.put("batch.size", 16384);
        // how long to wait before sending a batch
        props.put("linger.ms", 1);
        // size of the send buffer in bytes
        props.put("buffer.memory", 33554432);
        // key serializer
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // value serializer
        props.put("value.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");

        Producer<String, Integer> producer = new KafkaProducer<String, Integer>(props);
        // create the callback object
        Callback callback = new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                    System.out.println(exception.getMessage());
                } else {
                    System.out.println("offset: " + metadata.offset());
                }
            }
        };
        for (int i = 0; i < 50; i++) {
            // pause between sends so the console consumer output is easy to follow
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // topic, key, value, with the matching generic types
            producer.send(new ProducerRecord<String, Integer>("second", Integer.toString(i), new Integer(i)), callback);
        }

        producer.close();
    }

}
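
With no callback, send simply returns a Future<RecordMetadata>; calling get() on it blocks until the broker acknowledges the record. A minimal sketch of that case, assuming the same props and imports as the class above:

        // send without a callback and wait synchronously for the acknowledgement
        Producer<String, Integer> syncProducer = new KafkaProducer<String, Integer>(props);
        try {
            RecordMetadata metadata =
                    syncProducer.send(new ProducerRecord<String, Integer>("second", "key-0", 0)).get();
            System.out.println("offset: " + metadata.offset() + ", partition: " + metadata.partition());
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            syncProducer.close();
        }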

Using a custom partitioner

The core steps: implement the Partitioner interface and its partition method, then register the custom class in the producer configuration.

Partitioner
package com.uu.kafkaProject;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

import java.util.Map;

/**
 * Created by IBM on 2020/3/11.
 */
public class MyKafkaPartition implements Partitioner {

    // trivial example: send every record to partition 0
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        return 0;
    }

    public void close() {

    }

    public void configure(Map<String, ?> configs) {

    }
}

Producer

Add the property to the producer configuration:

props.put("partitioner.class", "com.uu.kafkaProject.MyKafkaPartition");

Consumer

This code subscribes to two topics and consumes from both.

package com.uu.kafkaProject;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

/**
 * Created by IBM on 2020/3/11.
 */
public class KafkaConsumerExe {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Kafka broker address; there is no need to list every broker
        props.put("bootstrap.servers", "hadoop102:9092");
        // specify the consumer group
        props.put("group.id", "test");
        // whether offsets are committed automatically
        props.put("enable.auto.commit", "true");
        // interval between automatic offset commits
        props.put("auto.commit.interval.ms", "1000");
        // key deserializer class
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // value deserializer class
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // create the consumer
        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<String, String>(props);

        List<String> strings = Arrays.asList("first", "second");

        kafkaConsumer.subscribe(strings);

        while (true) {
            // poll for records with a 100 ms timeout
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);

            for (ConsumerRecord<String, String> record : records)
                System.out.println("offset: " + record.offset() + ", key: " + record.key() + ", value: " + record.value() + ", topic: " + record.topic());

        }
    }
}
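
The config above commits offsets automatically. If you only want offsets committed after the records have actually been processed, a common variant is to disable auto commit and call commitSync yourself. A minimal sketch of the poll loop with that change, assuming the same setup as above:

        // commit offsets manually instead of relying on auto commit
        // (this property must be set before the consumer is created)
        props.put("enable.auto.commit", "false");

        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("offset: " + record.offset() + ", value: " + record.value());
            }
            // blocks until the offsets returned by the last poll are committed
            kafkaConsumer.commitSync();
        }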

Interceptors

An interceptor essentially builds a new record to send (onSend),
adds its own acknowledgement logic (onAcknowledgement),
and implements a close method.

package com.uu.kafkaProject;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

/**
 * Created by IBM on 2020/3/12.
 */
public class KafkaInterocter implements ProducerInterceptor {

    public ProducerRecord onSend(ProducerRecord record) {

        // the outgoing record can be modified here; this example prepends a timestamp to the value
        ProducerRecord producerRecord = new ProducerRecord(record.topic(), record.partition(), record.timestamp(), record.key(),
                System.currentTimeMillis() + "," + record.value().toString());


        return producerRecord;
    }
    // acknowledgement callback, invoked when the record is acked or the send fails
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {

    }
    // close method, called when the producer is closed
    public void close() {

    }

    public void configure(Map<String, ?> configs) {

    }
}

Usage

Create a list of interceptor class names and add it to the producer config:

		List<String> interceptors = new ArrayList<>();
		interceptors.add("com.uu.kafkaProject.KafkaInterocter"); 
		interceptors.add("com.uu.kafkaProject.KafkaInterocter1"); 
		props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, interceptors);
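
The second class in the list, KafkaInterocter1, is not shown in this post. Purely as an illustration of what it might do (an assumption, not the original author's code), it could be a counting interceptor that tallies successful and failed sends in onAcknowledgement and prints the totals on close. A minimal sketch:

package com.uu.kafkaProject;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

// Hypothetical sketch of KafkaInterocter1: counts acknowledged and failed sends.
public class KafkaInterocter1 implements ProducerInterceptor {

    private long successCount = 0;
    private long errorCount = 0;

    public ProducerRecord onSend(ProducerRecord record) {
        // pass the record through unchanged
        return record;
    }

    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (exception == null) {
            successCount++;
        } else {
            errorCount++;
        }
    }

    public void close() {
        System.out.println("success: " + successCount + ", error: " + errorCount);
    }

    public void configure(Map<String, ?> configs) {

    }
}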

Kafka Streams

To get started with Kafka Streams you need a topology and a Kafka Streams configuration.
The topology connects a source to a sink,
with a processor node in between that implements the processing logic.

Dependency

Make sure the version matches your Kafka version.

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>0.11.0.0</version>
        </dependency>
Stream class
package com.uu.kafkaProject;

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorSupplier;
import org.apache.kafka.streams.processor.TopologyBuilder;

/**
 * Created by IBM on 2020/3/12.
 */
public class KafkaStreamApp {
    public static void main(String[] args) {
        // input topic
        String from = "first";
        // output topic
        String to = "second";

        // stream configuration
        Properties settings = new Properties();
        settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "myKakfaStream");
        settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop102:9092");

        StreamsConfig config = new StreamsConfig(settings);

        TopologyBuilder topologyBuilder = new TopologyBuilder();

        // supplies a new processor (KafakCleaner) instance for each stream task
        ProcessorSupplier processorSupplier = new ProcessorSupplier<byte[], byte[]>() {
            public Processor<byte[], byte[]> get() {
                return new KafakCleaner();
            }
        };


        // add the Kafka source, reading from the input topic
        topologyBuilder.addSource("SOURCE", from);
        // add the processor node, connected to the source
        topologyBuilder.addProcessor("pro", processorSupplier, "SOURCE");

        // add the sink, writing to the output topic, connected to the processor
        topologyBuilder.addSink("sink", to, "pro");

        // create and start the Kafka Streams instance
        KafkaStreams streams = new KafkaStreams(topologyBuilder, config);
        streams.start();


    }
}
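
The main method above starts the stream and never shuts it down. A common addition (an assumption here, not part of the original code) is a JVM shutdown hook that closes the KafkaStreams instance cleanly:

        // close the KafkaStreams instance cleanly when the JVM shuts down
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                streams.close();
            }
        }));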

The processor class that implements the processing logic
package com.uu.kafkaProject;

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

/**
 * Created by IBM on 2020/3/12.
 */
public class KafakCleaner implements Processor<byte[], byte[]> {

    private ProcessorContext context;


    public void init(ProcessorContext context) {
        this.context = context;

    }

    public void process(byte[] key, byte[] value) {
        // byte[].toString() would only print the array reference, so decode the bytes instead;
        // records from the console producer may have a null key
        String s = (key == null ? "null" : new String(key)) + " intercepted";
        String s1 = new String(value) + " intercepted";

        System.out.println(s + "..." + s1);
        // forward the transformed key/value downstream to the sink
        context.forward(s.getBytes(), s1.getBytes());
    }

    public void punctuate(long timestamp) {

    }

    public void close() {

    }
}
