Design Patterns - Class Interaction - Publish/Subscribe

Publish/Subscribe

This is the bread and butter of NIO and message queues. The flow:

1. The subscriber subscribes to an event

2. The publisher publishes an event

3. The subscriber receives the event (see the sketch below)
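
A minimal, self-contained sketch of these three steps (all names here, PubSubDemo, Event, Listener, Publisher, are made up for illustration):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class PubSubDemo {

    record Event(String payload) {}

    interface Listener {
        void onEvent(Event event);
    }

    static class Publisher {
        private final List<Listener> listeners = new CopyOnWriteArrayList<>();

        // 1. A subscriber subscribes to events
        void subscribe(Listener listener) {
            listeners.add(listener);
        }

        // 2. The publisher publishes an event
        void publish(Event event) {
            // 3. Every subscriber receives the event
            listeners.forEach(l -> l.onEvent(event));
        }
    }

    public static void main(String[] args) {
        Publisher publisher = new Publisher();
        publisher.subscribe(e -> System.out.println("received: " + e.payload()));
        publisher.publish(new Event("hello"));
    }
}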

Spring Events

Let's start with Spring events, a textbook publish/subscribe implementation used for notifications between components.

https://docs.spring.io/spring-framework/docs/5.3.27/reference/html/core.html#context-functionality-events

Publisher --(Event)--> Listener. With Spring this becomes:

1. Define an event class: Event extends ApplicationEvent

2. Publish it: @Autowired ApplicationEventPublisher#publishEvent(new Event());

3. Listen for it: annotate a method in a configuration class with @EventListener(classes = {Event.class}) (see the sketch below)
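
A minimal sketch of these three steps (the class names OrderCreatedEvent, OrderService and OrderCreatedListener are invented for illustration; the annotations and types are the standard Spring ones from the documentation linked above):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// 1. Define the event
public class OrderCreatedEvent extends ApplicationEvent {
    public OrderCreatedEvent(Object source) {
        super(source);
    }
}

// 2. Publish the event from any bean
@Component
class OrderService {

    @Autowired
    private ApplicationEventPublisher publisher;

    public void createOrder() {
        publisher.publishEvent(new OrderCreatedEvent(this));
    }
}

// 3. Listen for the event in a component/configuration class
@Component
class OrderCreatedListener {

    @EventListener(classes = {OrderCreatedEvent.class})
    public void onOrderCreated(OrderCreatedEvent event) {
        System.out.println("received: " + event);
    }
}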

Now the implementation.

spring-aop-5.3.22.jar

Purpose: after the target method finishes, publish a custom event and handle it with a listener.

// from org.springframework.aop.framework.ReflectiveMethodInvocation#proceed
/**
 * {@link MethodInterceptor Interceptor} that publishes an
 * {@code ApplicationEvent} to all {@code ApplicationListeners}
 * registered with an {@code ApplicationEventPublisher} after each
 * <i>successful</i> method invocation.
 *
 * <p>Note that this interceptor is only capable of publishing <i>stateless</i>
 * events configured via the
 * {@link #setApplicationEventClass "applicationEventClass"} property.
 *
 * @author Dmitriy Kopylenko
 * @author Juergen Hoeller
 * @author Rick Evans
 * @see #setApplicationEventClass
 * @see org.springframework.context.ApplicationEvent
 * @see org.springframework.context.ApplicationListener
 * @see org.springframework.context.ApplicationEventPublisher
 * @see org.springframework.context.ApplicationContext
 */
public class EventPublicationInterceptor
		implements MethodInterceptor, ApplicationEventPublisherAware, InitializingBean {

    // Publish an event after the target method has executed
	public Object invoke(MethodInvocation invocation) throws Throwable {
        // Invoke the target method
		Object retVal = invocation.proceed();

		Assert.state(this.applicationEventClassConstructor != null, "No ApplicationEvent class set");
        // Build the event; the constructor argument is the target object (invocation.getThis())
		ApplicationEvent event = (ApplicationEvent)
				this.applicationEventClassConstructor.newInstance(invocation.getThis());

		Assert.state(this.applicationEventPublisher != null, "No ApplicationEventPublisher available");
        // Publish the event through the injected ApplicationEventPublisher
		this.applicationEventPublisher.publishEvent(event);

		return retVal;
	}
}

// from getApplicationEventMulticaster().multicastEvent(applicationEvent, eventType);

// org.springframework.context.event.SimpleApplicationEventMulticaster#multicastEvent(org.springframework.context.ApplicationEvent, org.springframework.core.ResolvableType)
// Broadcast the event to all matching listeners
public void multicastEvent(final ApplicationEvent event, @Nullable ResolvableType eventType) {
	ResolvableType type = (eventType != null ? eventType : resolveDefaultEventType(event));
	Executor executor = getTaskExecutor();
    // Fetch the listeners registered for this event type
	for (ApplicationListener<?> listener : getApplicationListeners(event, type)) {
		if (executor != null) {
			executor.execute(() -> invokeListener(listener, event));
		}
		else {
            // Invoke the listener in the calling thread
			invokeListener(listener, event);
		}
	}
}

// Where the listeners are stored
public final Set<ApplicationListener<?>> applicationListeners = new LinkedHashSet<>();
// Listeners declared as beans are fetched from the container by name
ApplicationListener<?> listener = beanFactory.getBean(listenerBeanName,ApplicationListener.class);


// org.springframework.context.event.SimpleApplicationEventMulticaster#doInvokeListener
listener.onApplicationEvent(event);


// org.springframework.context.event.ApplicationListenerMethodAdapter#processEvent
/**
 * Process the specified {@link ApplicationEvent}, checking if the condition
 * matches and handling a non-null result, if any.
 */
public void processEvent(ApplicationEvent event) {
	Object[] args = resolveArguments(event);
	if (shouldHandle(event, args)) {
        // The listener has received the event; invoke its handler method
		Object result = doInvoke(args);
		if (result != null) {
			handleResult(result);
		}
		else {
			logger.trace("No result object given - no result to handle");
		}
	}
}

NIO-Netty

Java's NIO wraps the OS networking primitives into a handful of components, and Netty wraps those components once more.

1. Channel is the data carrier, i.e. Channel#read / Channel#writeAndFlush.

2. A server-side Channel listens for client connections by exposing a listening port: Channel#bind(java.net.SocketAddress, io.netty.channel.ChannelPromise). A client then connects to the server at IP:PORT.

3. A Channel declares which network events it cares about via SelectionKey#interestOps(eventType); eventType is a combination of OP_CONNECT (connect), OP_ACCEPT (accept incoming connections), OP_READ (read), OP_WRITE (write).

4. The Channel is registered with a Selector. The Selector can then block in a while (true) loop and collect events from multiple Channels, so a single thread can select all events, which may come from many clients.

5. When an event arrives, the Channel's pipeline of ChannelHandlers processes it. (A plain-NIO sketch of this loop follows.)
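
To make the Channel/Selector interaction above concrete, here is a minimal plain-JDK-NIO echo server loop (no Netty), roughly the work that NioEventLoop automates below; the port 8080 and the 1 KB buffer are arbitrary choices:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class PlainNioEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        // 2. The server channel listens on a port
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        // 3./4. Register interest in OP_ACCEPT with the Selector
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            // One thread blocks here for events from all registered channels
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // New client connection: register it for OP_READ
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // 5. "Handler": read the bytes and echo them back
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n < 0) {
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf);
                    }
                }
            }
        }
    }
}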

Now the implementation.

netty-all-5.0.0.Alpha2.jar

// io.netty.channel.nio.NioEventLoop#run, inside the event-loop thread
if (hasTasks()) {
    // Select channel events without blocking
    selectNow();
}
// Process the selected channel events
processSelectedKeys();
// Run queued tasks (e.g. channel registration), bounded by the I/O ratio
this.runAllTasks(ioTime * (long)(100 - ioRatio) / (long)ioRatio);


/**
 * Poll all tasks from the task queue and run them via {@link Runnable#run()} method.  This method stops running
 * the tasks in the task queue and returns if it ran longer than {@code timeoutNanos}.
 */
protected boolean runAllTasks(long timeoutNanos) {
    // Move tasks that are due from the scheduled-task queue into the task queue
    fetchFromScheduledTaskQueue();
    Runnable task = pollTask();
    if (task == null) {
        return false;
    }

    final long deadline = ScheduledFutureTask.nanoTime() + timeoutNanos;
    long runTasks = 0;
    long lastExecutionTime;
    for (;;) {
        try {
            // Run the task
            task.run();
        } catch (Throwable t) {
            logger.warn("A task raised an exception.", t);
        }
        runTasks ++;
        // Check timeout every 64 tasks because nanoTime() is relatively expensive.
        // XXX: Hard-coded value - will make it configurable if it is really a problem.
        if ((runTasks & 0x3F) == 0) {
            lastExecutionTime = ScheduledFutureTask.nanoTime();
            if (lastExecutionTime >= deadline) {
                break;
            }
        }
        task = pollTask();
        if (task == null) {
            lastExecutionTime = ScheduledFutureTask.nanoTime();
            break;
        }
    }
    this.lastExecutionTime = lastExecutionTime;
    return true;
}


/**
 * Register the {@link Channel} of the {@link ChannelPromise} and notify
 * the {@link ChannelFuture} once the registration was complete.
 * <p>
 * It's only safe to submit a new task to the {@link EventLoop} from within a
 * {@link ChannelHandler} once the {@link ChannelPromise} succeeded. Otherwise
 * the task may or may not be rejected.
 * </p>
 */
void register(EventLoop eventLoop, ChannelPromise promise){
	eventLoop.execute(new OneTimeTask() {
		@Override
		public void run() {
            // Registration itself is submitted as a task to the event loop
			register0(promise);
		}
	});
}

// After the first registration, if the channel is already active, fire channelActive
if (firstRegistration && isActive()) {
    pipeline.fireChannelActive();
}


// io.netty.channel.nio.NioEventLoop#processSelectedKeysPlain
private void processSelectedKeysPlain(Set<SelectionKey> selectedKeys) {
    Iterator<SelectionKey> i = selectedKeys.iterator();
    for (;;) {
        final SelectionKey k = i.next();
        // The attachment is the Channel (or NioTask) that was registered with this key
        final Object a = k.attachment();
        i.remove();
        // Dispatch the read/write/connect/accept events for this channel
        if (a instanceof AbstractNioChannel) {
            processSelectedKey(k, (AbstractNioChannel) a);
        } else {
            @SuppressWarnings("unchecked")
            NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
            // Invoked when the {@link SelectableChannel} has been selected by the {@link Selector}.
            processSelectedKey(k, task);
        }

        if (!i.hasNext()) {
            break;
        }
    }
}

// io.netty.channel.nio.NioEventLoop#processSelectedKey(java.nio.channels.SelectionKey, io.netty.channel.nio.AbstractNioChannel)
private static void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final NioUnsafe unsafe = ch.unsafe();
    try {
        int readyOps = k.readyOps();
        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            // Read; this drives channelRead events down the pipeline
            unsafe.read();
            if (!ch.isOpen()) {
                // Connection already closed - no need to handle write.
                return;
            }
        }
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
            ch.unsafe().forceFlush();
        }
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
            // See https://github.com/netty/netty/issues/924
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}

// io.netty.channel.nio.AbstractNioByteChannel.NioByteUnsafe#read
public final void read() {
    // The channel configuration
    final ChannelConfig config = config();
    if (!config.isAutoRead() && !isReadPending()) {
        // ChannelConfig.setAutoRead(false) was called in the meantime
        removeReadOp();
        return;
    }
    // The pipeline of handlers that will receive the read events
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final int maxMessagesPerRead = config.getMaxMessagesPerRead();
    RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();

    ByteBuf byteBuf = null;
    int messages = 0;
    boolean close = false;
    try {
        int totalReadAmount = 0;
        boolean readPendingReset = false;
        do {
            // Allocate a buffer for the incoming bytes
            byteBuf = allocHandle.allocate(allocator);
            int writable = byteBuf.writableBytes();
            int localReadAmount = doReadBytes(byteBuf);
            if (localReadAmount <= 0) {
                // Nothing was read: release the buffer; a negative count means the peer closed the connection
                byteBuf.release();
                close = localReadAmount < 0;
                break;
            }
            // Fire the channelRead event down the pipeline
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;

            if (totalReadAmount >= Integer.MAX_VALUE - localReadAmount) {
                // Avoid overflow.
                totalReadAmount = Integer.MAX_VALUE;
                break;
            }

            totalReadAmount += localReadAmount;

            // stop reading
            if (!config.isAutoRead()) {
                break;
            }

            if (localReadAmount < writable) {
                // Read less than what the buffer can hold,
                // which might mean we drained the recv buffer completely.
                break;
            }
        } while (++ messages < maxMessagesPerRead);
        // Fire channelReadComplete once the read loop ends
        pipeline.fireChannelReadComplete();
        allocHandle.record(totalReadAmount);

        if (close) {
            // The peer closed the connection; close this side as well
            closeOnRead(pipeline);
            close = false;
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close);
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!config.isAutoRead() && !isReadPending()) {
            removeReadOp();
        }
    }
}


// io.netty.channel.ChannelHandlerInvokerUtil#invokeChannelReadNow
public static void invokeChannelReadNow(final ChannelHandlerContext ctx, final Object msg) {
    try {
        ((AbstractChannelHandlerContext) ctx).invokedThisChannelRead = true;
        // Invoke the user-defined channelRead handler
        ctx.handler().channelRead(ctx, msg);
    } catch (Throwable t) {
        notifyHandlerException(ctx, t);
    }
}
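
ctx.handler().channelRead(ctx, msg) is where the user-defined handler, the subscriber in this framing, finally receives the event. A minimal sketch of such a handler (the EchoHandler name is made up; the API shown is the stable Netty 4.x ChannelInboundHandlerAdapter form, which differs slightly from the 5.0.0.Alpha2 handler interface quoted above):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Subscriber side: added to a Channel's pipeline, it receives the read events.
public class EchoHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Driven by pipeline.fireChannelRead(byteBuf) in the read loop above
        ByteBuf in = (ByteBuf) msg;
        ctx.write(in); // echo back; write() buffers without flushing
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        // Driven by pipeline.fireChannelReadComplete()
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}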

Message Queue - Redisson

Redis is a data container: writing data is publishing an event, and reading data is receiving one.

When messages pile up, where do they buffer? In queues. Netty likewise buffers its events in queues: PriorityQueue<ScheduledFutureTask<?>> / LinkedBlockingQueue<Runnable>.

Now look at a commonly used buffering container: RDelayedQueue.

1. Two lists: one ([customer_key_queue], the business-defined key) holds data that is ready to take, the other (redisson_delay_queue_[customer_key_queue]) holds data that has been offered.

2. One zset (redisson_delay_queue_timeout_[customer_key_queue]) whose scores are expiration timestamps. Up to 100 expired entries at a time are fetched, pushed into customer_key_queue, and deleted from redisson_delay_queue_[customer_key_queue] and redisson_delay_queue_timeout_[customer_key_queue].

3. pub/sub: when an offered element becomes the new head of the zset, its start time is published on redisson_delay_queue_channel_[customer_key_queue]; a subscribed transfer task then schedules the move of expired entries (see the usage sketch below).
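
A minimal usage sketch of that structure (the queue name "customer_key_queue", the Redis address, and the payload are illustrative; getBlockingQueue/getDelayedQueue/offer/take are the public Redisson API):

import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RBlockingQueue;
import org.redisson.api.RDelayedQueue;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class DelayedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        // Destination queue: [customer_key_queue], the list that take() reads from
        RBlockingQueue<String> destination = redisson.getBlockingQueue("customer_key_queue");
        // Delayed queue: backed by redisson_delay_queue_[...] and the timeout zset
        RDelayedQueue<String> delayed = redisson.getDelayedQueue(destination);

        // Publish: the element becomes visible in the destination queue after 10 seconds
        delayed.offer("order-42", 10, TimeUnit.SECONDS);

        // Subscribe: blocks until an expired element has been transferred over
        String msg = destination.take();
        System.out.println("received: " + msg);

        delayed.destroy();
        redisson.shutdown();
    }
}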

redisson-3.19.3.jar

// Constructor: wires up the task that moves expired messages into the destination queue
protected RedissonDelayedQueue(QueueTransferService queueTransferService, Codec codec, final CommandAsyncExecutor commandExecutor, String name) {
    super(codec, commandExecutor, name);
    // Topic Name
    channelName = prefixName("redisson_delay_queue_channel", getRawName());
    queueName = prefixName("redisson_delay_queue", getRawName());
    timeoutSetName = prefixName("redisson_delay_queue_timeout", getRawName());
    // Create the transfer task
    QueueTransferTask task = new QueueTransferTask(commandExecutor.getConnectionManager()) {
        
        @Override
        protected RFuture<Long> pushTaskAsync() {
            // lua 脚本
            return commandExecutor.evalWriteAsync(getRawName(), LongCodec.INSTANCE, RedisCommands.EVAL_LONG,
                    // Fetch up to 100 entries from the zset whose score is within [0, now], i.e. already expired
                    "local expiredValues = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); "
                  + "if #expiredValues > 0 then "
                        // If there are expired entries, process each one
                      + "for i, v in ipairs(expiredValues) do "
                          + "local randomId, value = struct.unpack('dLc0', v);"
                            // Push the value into the destination list, from which take() reads
                          + "redis.call('rpush', KEYS[1], value);"
                            // Remove one occurrence of the value from the offered-data list
                          + "redis.call('lrem', KEYS[3], 1, v);"
                      + "end; "
                        // Delete the expired entries from the zset
                      + "redis.call('zrem', KEYS[2], unpack(expiredValues));"
                  + "end; "
                    // Get the start time of the zset head, i.e. the next entry to expire
                  + "local v = redis.call('zrange', KEYS[2], 0, 0, 'WITHSCORES'); "
                  + "if v[1] ~= nil then "
                    // A head exists: return its expiration time
                     + "return v[2]; "
                  + "end "
                  + "return nil;",
                  Arrays.asList(getRawName(), timeoutSetName, queueName),
                  System.currentTimeMillis(), 100);
        }
        
        @Override
        protected RTopic getTopic() {
            // The topic this task subscribes to
            return RedissonTopic.createRaw(LongCodec.INSTANCE, commandExecutor, channelName);
        }
    };
    
    // Register the task with the transfer service, which schedules it
    queueTransferService.schedule(queueName, task);
    
    this.queueTransferService = queueTransferService;
}

// org.redisson.QueueTransferTask#start
public void start() {
    // Get the topic created in the RedissonDelayedQueue constructor
    RTopic schedulerTopic = getTopic();
    // On subscribing to the channel, run the expired-message transfer once
    statusListenerId = schedulerTopic.addListener(new BaseStatusListener() {
        @Override
        public void onSubscribe(String channel) {
            
            pushTask();
        }
    });
    
    // When data is offered and its start time is published on the channel, schedule the expired-message transfer for that time
    messageListenerId = schedulerTopic.addListener(Long.class, new MessageListener<Long>() {
        @Override
        public void onMessage(CharSequence channel, Long startTime) {
            // Schedule the transfer task
            scheduleTask(startTime);
        }
    });
}

// org.redisson.QueueTransferTask#pushTask
private void pushTask() {
    // Calls pushTaskAsync() from the RedissonDelayedQueue constructor, which returns the next expiration time
    RFuture<Long> startTimeFuture = pushTaskAsync();
    startTimeFuture.whenComplete((res, e) -> {
        if (e != null) {
            if (e instanceof RedissonShutdownException) {
                return;
            }
            log.error(e.getMessage(), e);
            scheduleTask(System.currentTimeMillis() + 5 * 1000L);
            return;
        }
        
        if (res != null) {
            // Schedule the next run at the returned time; each run moves expired data into the to-consume list and returns the next due time
            scheduleTask(res);
        }
    });
}


/**
 * Inserts element into this queue with 
 * specified transfer delay to destination queue.
 * 
 * @param e the element to add
 * @param delay for transition
 * @param timeUnit for delay
 */
public void offer(V e, long delay, TimeUnit timeUnit) {
    get(offerAsync(e, delay, timeUnit));
}
@Override
public RFuture<Void> offerAsync(V e, long delay, TimeUnit timeUnit) {
    if (delay < 0) {
        throw new IllegalArgumentException("Delay can't be negative");
    }
    
    long delayInMs = timeUnit.toMillis(delay);
    long timeout = System.currentTimeMillis() + delayInMs;
 
    long randomId = ThreadLocalRandom.current().nextLong();
    return commandExecutor.evalWriteNoRetryAsync(getRawName(), codec, RedisCommands.EVAL_VOID,
            "local value = struct.pack('dLc0', tonumber(ARGV[2]), string.len(ARGV[3]), ARGV[3]);" 
            // Store the value in the zset, scored by its expiration timestamp
          + "redis.call('zadd', KEYS[2], ARGV[1], value);"
            // Also append the value to the offered-data list
          + "redis.call('rpush', KEYS[3], value);"
          // if new object added to queue head when publish its startTime 
          // to all scheduler workers 
           // If the new value is now the head of the zset (earliest expiration)
          + "local v = redis.call('zrange', KEYS[2], 0, 0); "
          + "if v[1] == value then "
                // Publish its start time; the MessageListener in QueueTransferTask receives it
             + "redis.call('publish', KEYS[4], ARGV[1]); "
          + "end;",
          Arrays.asList(getRawName(), timeoutSetName, queueName, channelName),
          timeout, randomId, encode(e));
}
