Wrapping Kafka in a Second Layer

Kafka is an excellent distributed publish-subscribe system, and the Kafka Java API makes it easy to publish and subscribe to messages.

//producer
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerApi {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.16.150:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<>(
                      "t1", Integer.toString(i), Integer.toString(i)));
        }
        producer.close();
    }
}
//consumer
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerAOC {
    public static void main(String[] args) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.16.150:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("t1"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n",
                                  record.offset(), record.key(), record.value());
        }
    }
}

However, a company typically has several, or even hundreds or thousands, of systems that need these Kafka APIs, so Kafka is usually not consumed through the raw interfaces shown above. Instead, a unified wrapper is built, with two goals:

1. Centralize the Kafka settings in a configuration file or configuration center.

2. Provide a friendlier, easier-to-use interface.

Below I present one way to build such a wrapper. Once it is in place, Kafka can be used like this:

//producer
public class SpringPublisherDemo {
    @Autowired
    StringPublisher stringPublisher;

    public void emit(int count) throws Exception{
        for (int i=0; i<count; i++) {
            stringPublisher.emit("test", "just a test " + i*11);
            Thread.sleep(200);
        }
    }
}
//consumer
@EventConfigLoader(consumer = StringConsumer.class)
public class SpringConsumerDemo implements EventListener {
    @Override
    public void onEvent(Event event) {
        try {
            List<Record> records = event.getRecords();
            for (Record record : records) {
                System.out.println("receive message <"+record.getKey()+", "+record.getValue()+">");
            }
        } catch (DeserializerException e) {
            // A real listener should at least log the deserialization failure
        }
    }
}

Used this way, the Kafka interface is downright elegant, especially on the consumer side: the user only needs to implement the onEvent callback that handles received messages. The sections below walk through the implementation step by step.

1. Configuration encapsulation

xxx:
  event:
    kafka:
      bootstrap-servers: "localhost:9091,localhost:9092,localhost:9093"
    topic: "test01"
    publisher:
      key-serializer: "com.xxx.center.event.serializer.StringSerializer"
      val-serializer: "com.xxx.center.event.serializer.StringSerializer"
    subscriber:
      auto-commit: "enable"
      key-deserializer: "com.xxx.center.event.serializer.StringDeserializer"
      val-deserializer: "com.xxx.center.event.serializer.StringDeserializer"
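The code that turns this YAML into Kafka client Properties is not shown in this article. As a minimal sketch of the mapping only (in a real Spring application this would be bound via @ConfigurationProperties rather than passed in by hand, and the method name here is hypothetical):

```java
import java.util.Properties;

public class EventConfigMapper {
    // Maps the framework's YAML keys onto the standard Kafka client
    // property names used by KafkaProducer.
    static Properties publisherProperties(String bootstrapServers,
                                          String keySerializer,
                                          String valSerializer) {
        Properties props = new Properties();
        // xxx.event.kafka.bootstrap-servers
        props.put("bootstrap.servers", bootstrapServers);
        // xxx.event.publisher.key-serializer
        props.put("key.serializer", keySerializer);
        // xxx.event.publisher.val-serializer
        props.put("value.serializer", valSerializer);
        return props;
    }

    public static void main(String[] args) {
        Properties props = publisherProperties(
                "localhost:9091,localhost:9092,localhost:9093",
                "com.xxx.center.event.serializer.StringSerializer",
                "com.xxx.center.event.serializer.StringSerializer");
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```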

1.1 Custom annotation @PublisherConfiguration, which describes a Publisher interface

@Inherited
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface PublisherConfiguration {
    String topic() default "";
    String partitioner() default "auto";
    String keySerializer() default "com.xxx.center.event.serializer.StringSerializer";
    String valSerializer() default "com.xxx.center.event.serializer.StringSerializer";
    String config() default "";
    int retries() default 3;
    int batchSize() default 16384;
    int lingerMs() default 1;
    long bufferMemory() default 33554432L;
    int serializerSize() default 512;
}

1.2 Custom annotation @SubscriberConfiguration, which describes a Consumer interface

@Inherited
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface SubscriberConfiguration {
    String topic() default "";
    String group() default "";
    String keyDeserializer() default "com.xxx.center.event.serializer.StringDeserializer";
    String valDeserializer() default "com.xxx.center.event.serializer.StringDeserializer";
    String autoCommit() default "enable";
    String autoOffset() default "latest";
}

1.3 Custom annotation @EventConfigLoader, which describes a Consumer demo class

@Inherited
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Configuration
public @interface EventConfigLoader {
    Class<?> consumer();        // the Consumer interface this listener binds to
}

1.4 Define the Publisher and Consumer interfaces. Although this step is simple, it is very important: these interface definitions essentially fix the basic contract through which publishers and consumers communicate.

public interface Publisher<K, V> {
    Future<Context> emit(Record<K, V> record);
    Future<Context> emit(K key, V value);
}

public interface Consumer<K, V> {
    List<Record<K, V>> poll() throws DeserializerException;
    List<Record<K, V>> poll(Duration timeout) throws DeserializerException;
    void close();
}


@PublisherConfiguration(
        topic = "test002"
)
public interface StringPublisher extends Publisher<String, String> {
}

@SubscriberConfiguration(
        topic = "test002"
)
public interface StringConsumer extends Consumer<String, String> {
}
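The proxy factories introduced below read these annotation values off the interface classes at runtime. A self-contained sketch of that lookup (the nested annotation here is a trimmed-down stand-in for the real @PublisherConfiguration, not the framework's own class):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationLookupDemo {
    // Trimmed-down stand-in for @PublisherConfiguration
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface PubConfig {
        String topic() default "";
    }

    @PubConfig(topic = "test002")
    interface StringPublisher {
    }

    // A factory can read its settings straight off the annotated interface
    static String topicOf(Class<?> iface) {
        PubConfig cfg = iface.getAnnotation(PubConfig.class);
        return cfg == null ? "" : cfg.topic();
    }

    public static void main(String[] args) {
        System.out.println(topicOf(StringPublisher.class)); // prints "test002"
    }
}
```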

2. Use cglib dynamic proxies to build the bean factory classes for StringPublisher and StringConsumer, plus the corresponding delegate implementations

public class PublisherProxyFactory<T> implements FactoryBean<T>, MethodInterceptor {
    private Class<T> interfaceClass;
    private PublisherDelegate publisherDelegate;
    private Object object;

    public PublisherProxyFactory(Class<T> interfaceClass) {
        this.interfaceClass = interfaceClass;
    }

    @Override
    public Object intercept(Object object, Method method, Object[] args, MethodProxy proxy)throws Throwable{
        if("emit".equalsIgnoreCase(method.getName())) {
            return this.publisherDelegate.emit(object, method, args, proxy);
        } else {
            return proxy.invokeSuper(object, args);
        }
    }

    public Object createEventPublisher(Topic topic){
        if (object == null) {
            Enhancer en = new Enhancer();
            en.setSuperclass(interfaceClass);
            en.setCallback(this);

            Properties properties = getPublisherProperties();
            this.publisherDelegate = new PublisherDelegateDefault(properties);
            object = en.create();
        }

        return object;
    }

    public Object createEventPublisher(){
        return this.createEventPublisher(null);
    }

    @Override
    public T getObject() throws Exception {
        return (T) createEventPublisher();
    }

    @Override
    public Class<?> getObjectType() {
        return interfaceClass;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}
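The interception idea can also be illustrated without cglib: since StringPublisher is an interface, a JDK java.lang.reflect.Proxy achieves the same routing. A minimal, self-contained sketch (the Publisher interface and the string it returns are stand-ins, not the framework's real types):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class JdkProxyDemo {
    // Hypothetical stand-in for the framework's Publisher interface
    interface Publisher {
        String emit(String key, String value);
    }

    // Route every emit(...) call to delegate logic -- the same idea as
    // PublisherProxyFactory.intercept, but using a JDK proxy instead of cglib
    static Publisher createPublisher() {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("emit".equals(method.getName())) {
                return "sent <" + args[0] + ", " + args[1] + ">";
            }
            throw new UnsupportedOperationException(method.getName());
        };
        return (Publisher) Proxy.newProxyInstance(
                JdkProxyDemo.class.getClassLoader(),
                new Class<?>[]{Publisher.class},
                handler);
    }

    public static void main(String[] args) {
        System.out.println(createPublisher().emit("k1", "v1")); // prints "sent <k1, v1>"
    }
}
```

cglib is still needed in the framework itself when the proxied type is a class rather than an interface; for pure interfaces, either approach works.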
public class SubscriberProxyFactory<T> implements FactoryBean<T>, MethodInterceptor {
    private Class<T> interfaceClass;
    private SubscriberDelegate subscriberDelegate;
    private Object object;

    public SubscriberProxyFactory(Class<T> interfaceClass) {
        this.interfaceClass = interfaceClass;
    }

    public Object createEventSubscriber() {
        return createEventSubscriber(null);
    }

    public Object createEventSubscriber(Topic topic){
        try {
            if (object == null) {
                Enhancer en = new Enhancer();
                en.setSuperclass(interfaceClass);
                en.setCallback(this);
                object = en.create();

                Properties properties = getConsumerProperties();
                this.subscriberDelegate = new SubscriberDelegateDefault(properties);
            }
        } catch (Exception e) {
            logger.error("Failed to create subscriber delegate: {}", e.getMessage());
        }

        return object;
    }

    @Override
    public T getObject() throws Exception {
        return (T) createEventSubscriber();
    }

    @Override
    public Class<?> getObjectType() {
        return interfaceClass;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }

    @Override
    public Object intercept(Object object, Method method, Object[] args, MethodProxy proxy)throws Throwable{
        if ("run".equals(method.getName())) {
            this.subscriberDelegate.run(args);
            return 1;
        } else if ("poll".equals(method.getName())) {
            return this.subscriberDelegate.poll(args);
        } else {
            return proxy.invokeSuper(object, args);
        }
    }
}
public class PublisherDelegateDefault{
    private Properties properties;

    public PublisherDelegateDefault(Properties properties){
        this.properties = properties;
    }

    public KafkaProducer createKafkaProducer(Object object) throws IOException {
        return new KafkaProducer(properties);
    }

    ......
}
public interface Record<K, V> {
    long getPublishTime();
    K getKey();
    V getValue();
}

public abstract class AbstractRecord<K, V> implements Record<K, V> {
    private int partition;
    private long publishTime;
    private K key;
    private V value;

    public int getPartition() {
        return partition;
    }

    public void setPartition(int partition) {
        this.partition = partition;
    }

    public long getPublishTime() {
        return publishTime;
    }

    public void setPublishTime(long publishTime) {
        this.publishTime = publishTime;
    }

    public K getKey() {
        return key;
    }

    public void setKey(K key) {
        this.key = key;
    }

    public V getValue() {
        return value;
    }

    public void setValue(V value) {
        this.value = value;
    }
}

public class DefaultRecord extends AbstractRecord {
}


public class SubscriberDelegateDefault {
    private Properties properties;
    private KafkaConsumer consumer;

    public SubscriberDelegateDefault(Properties properties) {
        this.properties = properties;
    }

    public void run() {
        consumer = new KafkaConsumer<String, String>(properties);
        consumer.subscribe(Arrays.asList(properties.getProperty("topic")));
    }

    public List<Record> poll() {
        List<Record> list = new ArrayList<>();
        ConsumerRecords<?, ?> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord consumerRecord : records) {
            DefaultRecord record = new DefaultRecord();
            Object key = consumerRecord.key();
            Object value = consumerRecord.value();

            // "props" below is the author's extended config holder (not
            // java.util.Properties), and actualTypeArguments are the <K, V>
            // types resolved from the subscriber interface; neither is
            // shown in this excerpt.
            Object deserializerKey = key == null ? null :
                        KafkaSerializerProxyFactory.deserialize(props.getCustomKeyDeserializer(),
                                (Bytes) key,
                                (Class<?>) actualTypeArguments[0]);
            Object deserializerValue = value == null ? null :
                        KafkaSerializerProxyFactory.deserialize(props.getCustomValDeserializer(),
                                (Bytes) value,
                                (Class<?>) actualTypeArguments[1]);

            record.setKey(deserializerKey);
            record.setValue(deserializerValue);
            record.setPartition(consumerRecord.partition());
            record.setPublishTime(consumerRecord.timestamp());
            list.add(record);
        }
        return list;
    }

    public void close() {
        consumer.close();
    }
}
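The actualTypeArguments referenced in poll() come from reflecting over the generic parameters of the subscriber interface. A self-contained sketch of that extraction (the nested interfaces here only mirror the framework's Consumer/StringConsumer pair):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class GenericTypeDemo {
    interface Consumer<K, V> {
    }

    // Mirrors StringConsumer extends Consumer<String, String>
    interface StringConsumer extends Consumer<String, String> {
    }

    // Pull the actual <K, V> arguments off the first generic superinterface
    static Class<?>[] typeArgumentsOf(Class<?> iface) {
        for (Type t : iface.getGenericInterfaces()) {
            if (t instanceof ParameterizedType) {
                Type[] args = ((ParameterizedType) t).getActualTypeArguments();
                Class<?>[] classes = new Class<?>[args.length];
                for (int i = 0; i < args.length; i++) {
                    classes[i] = (Class<?>) args[i];
                }
                return classes;
            }
        }
        return new Class<?>[0];
    }

    public static void main(String[] args) {
        Class<?>[] actualTypeArguments = typeArgumentsOf(StringConsumer.class);
        // prints "String String"
        System.out.println(actualTypeArguments[0].getSimpleName() + " "
                + actualTypeArguments[1].getSimpleName());
    }
}
```

Note this simple version only handles type arguments that are plain classes; bounded wildcards or nested generics would need more care.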

3. Create a reactor thread-pool model that continuously pulls messages for the consumer

For the publisher, the demo program simply injects the corresponding bean and uses it. The consumer, however, works passively: a background thread must poll for messages periodically. For better performance, this should be a thread pool, and the pool is organized as a reactor model. Since the reactor model is fairly involved, it is not expanded here; only the most important code is shown.

    int subscribe() {
        List<Record> recordList = null;
        String className = consumer.getClass().getSimpleName();
        if (className.contains("$")) {
            className = className.substring(0, className.indexOf("$"));
        }

        logger.debug("Begin to poll records for consumer<{}>.", className);

        try {
            recordList = consumer.poll(Duration.ofSeconds(2));
        }catch (DeserializerException e) {
            logger.error("Exception when poll events: {}", e.getMessage());
            return 0;
        }

        if(recordList.isEmpty()){
            return 0;
        }

        logger.info("Received {} records for consumer<{}>.", recordList.size(), className);

        DefaultContext context = new DefaultContext();
        context.setConsumer(consumer);

        DefaultEvent event = new DefaultEvent();
        event.setRecords(recordList);
        event.setContext(context);

        for (Listener listener : listeners) {
            listener.onEvent(event);    // the business logic runs here
        }

        logger.debug("Finished handling records for consumer<{}>.", className);
        return recordList.size();
    }
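The reactor pool itself is omitted by the author. As a rough, hypothetical illustration only (the Pollable interface and all names here are stand-ins, not the framework's AccepterReactor), a fixed thread pool where each worker repeatedly drives a subscribe()-style loop can be sketched as:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MiniReactor {
    // Hypothetical stand-in for a subscriber whose subscribe() polls once
    // and returns the number of records it handled
    interface Pollable {
        int subscribe();
    }

    // Each consumer gets a worker that keeps calling subscribe(), the same
    // shape as the subscribe() method shown above (bounded here for the demo)
    static int runWorker(Pollable consumer, int iterations) {
        int handled = 0;
        for (int i = 0; i < iterations; i++) {
            handled += consumer.subscribe();
        }
        return handled;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger total = new AtomicInteger();
        // Two consumers polling in parallel; each fake poll "handles" one record
        for (int c = 0; c < 2; c++) {
            pool.execute(() -> total.addAndGet(runWorker(() -> 1, 100)));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(total.get()); // prints "200"
    }
}
```

A production reactor would loop until shutdown rather than for a fixed count, and would add error isolation so one misbehaving consumer cannot stall the others.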

4. Plug a custom bean scanner into the Spring framework

This scanner must scan all Publisher and Consumer interfaces and create beans for them via dynamic proxies.

public class ClassPathEventScanner extends ClassPathBeanDefinitionScanner {

    public ClassPathEventScanner(BeanDefinitionRegistry registry) {
        super(registry, true);
    }

    public void registerFilters(){
        addIncludeFilter(new AnnotationTypeFilter(PublisherConfiguration.class));
        addIncludeFilter(new AnnotationTypeFilter(SubscriberConfiguration.class));
        addIncludeFilter(new AnnotationTypeFilter(SenderConfiguration.class));

        addIncludeFilter(new TypeFilter(){
            @Override
            public boolean match(MetadataReader metadataReader, MetadataReaderFactory metadataReaderFactory) throws IOException {
                return Arrays.stream(metadataReader.getClassMetadata().getInterfaceNames()).anyMatch(s ->
                        Subscriber.class.getName().equals(s) ||
                        Sender.class.getName().equals(s) ||
                        Publisher.class.getName().equals(s));
            }
        });
    }

    @Override
    protected boolean isCandidateComponent(AnnotatedBeanDefinition beanDefinition) {
        return true;
    }

    @Override
    public Set<BeanDefinitionHolder> doScan(String... basePackages) {
        Set<BeanDefinitionHolder> beanDefinitions = super.doScan(basePackages);
        if (beanDefinitions.isEmpty()) {
            LOGGER.info("No Event interface was found in '" + Arrays.toString(basePackages) + "' package. Please check your configuration.");
        } else {
            processBeanDefinitions(beanDefinitions);
        }

        return beanDefinitions;
    }

    private void processBeanDefinitions(Set<BeanDefinitionHolder> beanDefinitions) {
        GenericBeanDefinition definition;
        HashMap<String, BeanDefinitionHolder> elBeanDefinitionHolders = new HashMap<String, BeanDefinitionHolder>();

        // Scan subscribers and publishers first, because each EventListener depends on a subscriber
        for (BeanDefinitionHolder holder : beanDefinitions) {
            definition = (GenericBeanDefinition) holder.getBeanDefinition();
            String beanClassName = definition.getBeanClassName();
            try {
                Class clazz = Class.forName(beanClassName);
                for(Class interfaceClass : clazz.getInterfaces()){
                    if(interfaceClass.equals(Publisher.class)){
                        definition.getConstructorArgumentValues().addGenericArgumentValue(beanClassName);
                        definition.setBeanClass(PublisherProxyFactory.class);
                        definition.setLazyInit(false);
                        continue;
                    }else if(interfaceClass.equals(Subscriber.class)){
                        definition.getConstructorArgumentValues().addGenericArgumentValue(beanClassName);
                        definition.setBeanClass(SubscriberProxyFactory.class);
                        definition.setLazyInit(false);
                        continue;
                    }
                    else if (interfaceClass.equals(Sender.class)){
                        definition.getConstructorArgumentValues().addGenericArgumentValue(beanClassName);
                        definition.setBeanClass(SenderProxyFactory.class);
                        definition.setLazyInit(false);
                        continue;
                    }
                }
            } catch (ClassNotFoundException e) {
                LOGGER.debug("Cannot find class from name: {}", beanClassName, e);
            }
        }
    }
}
public class EventScannerRegistrar implements BeanFactoryAware, ImportBeanDefinitionRegistrar, ResourceLoaderAware {

    private BeanFactory beanFactory;
    private ResourceLoader resourceLoader;

    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
        this.beanFactory = beanFactory;
    }

    public void setResourceLoader(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }


    public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata, BeanDefinitionRegistry registry) {

        AnnotationAttributes annoAttrs = AnnotationAttributes.fromMap(importingClassMetadata.getAnnotationAttributes(EventScan.class.getName()));
        if(annoAttrs==null){
            annoAttrs = AnnotationAttributes.fromMap(importingClassMetadata.getAnnotationAttributes(ComponentScan.class.getName()));
        }

        ClassPathEventScanner scanner = new ClassPathEventScanner(registry);
        if (this.resourceLoader != null) {
            scanner.setResourceLoader(this.resourceLoader);
        }

        List<String> packages = AutoConfigurationPackages.get(this.beanFactory);

        if(annoAttrs!=null){
            for (String pkg : annoAttrs.getStringArray("value")) {
                if (StringUtils.hasText(pkg)) {
                    packages.add(pkg);
                }
            }
            for (String pkg : annoAttrs.getStringArray("basePackages")) {
                if (StringUtils.hasText(pkg)) {
                    packages.add(pkg);
                }
            }
            for (Class<?> clazz : annoAttrs.getClassArray("basePackageClasses")) {
                packages.add(ClassUtils.getPackageName(clazz));
            }
        }

        // Use a distinctive log banner here so the scan is easy to spot in the output
        LOGGER.info("**************************************************");
        LOGGER.info("** Starting scan event publisher and subscriber **");
        LOGGER.info("**************************************************");
        scanner.registerFilters();
        scanner.doScan(StringUtils.toStringArray(packages));

    }
}

EventScannerRegistrar must be registered in spring.factories (it acts as a startup entry point): only when it is loaded by the starter will all the publisher and consumer beans be created in order.
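One plausible shape for that registration is sketched below; the fully-qualified class names follow the com.xxx placeholder used throughout this article and must be adjusted to the real package:

```properties
# src/main/resources/META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.xxx.center.event.EventScannerRegistrar,\
  com.xxx.center.event.EventStarter
```

Alternatively, applications that use the @EventScan annotation shown below get both classes pulled in through its @Import, without a spring.factories entry.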

5. EventStarter, the startup class for the consumer demo classes, which starts the reactor thread pool for the consumers

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import({EventScannerRegistrar.class, EventStarter.class})
public @interface EventScan {
    String[] value() default {};
    String[] basePackages() default {};
    Class<?>[] basePackageClasses() default {};
}

public class EventStarter implements ApplicationContextAware, CommandLineRunner {
    private static ApplicationContext appContext;

    @Override
    public void setApplicationContext(ApplicationContext context) {
        appContext = context;
    }

    @Override
    public void run(String...strings) {
        if (!checkEnable()) {
            logger.info("Event switch is disabled");
            return;
        }

        // Iterate over all injected consumer listeners
        String[] consumerDemoBeanDefinitionNames = appContext.getBeanNamesForAnnotation(EventConfigLoader.class);

        int poolSize = consumerDemoBeanDefinitionNames.length;

        if(poolSize == 0) {
            return;
        }

        AccepterReactor reactor = AccepterReactor.getInstance(poolSize);

        for (String name: consumerDemoBeanDefinitionNames) {
            Object eventListener = appContext.getBean(name);

            Class eventListenerClass = eventListener.getClass();
            EventConfigLoader anno = (EventConfigLoader)eventListenerClass.getAnnotation(EventConfigLoader.class);

            Consumer consumer = (Consumer) appContext.getBean(anno.consumer());
            reactor.put(consumer, consumer);
            reactor.add(consumer, (EventListener)eventListener);
            consumer.run(eventListenerClass.getName());
        }

        reactor.start();        // start the thread pool; from this point the system is live
    }
}

At this point, the basic skeleton of the Kafka wrapper framework is in place. Note that this article omits the configuration-loading code, the serializer and deserializer code, the reactor thread-pool code, and many other implementation details.
