Sending Log4j Data Directly to Flume + Kafka (Method 2)

This post continues from the issues raised in the previous one, Sending Log4j Data Directly to Flume + Kafka (Method 1).

The main idea is to extend Log4j 2's AbstractAppender and use the Flume RPC client to connect to the Flume service dynamically and send the log events over RPC.

  • Flume SDK dependency:
<dependency>
 <groupId>org.apache.flume</groupId>
 <artifactId>flume-ng-sdk</artifactId>
 <version>1.7.0</version>
</dependency>
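The custom appender below is also compiled against the Log4j 2 core API, so the project needs log4j-core on the classpath as well; the version shown here is only an illustrative assumption, use whatever Log4j 2 version your project already depends on:

<dependency>
 <groupId>org.apache.logging.log4j</groupId>
 <artifactId>log4j-core</artifactId>
 <!-- assumed version for illustration -->
 <version>2.8.2</version>
</dependency>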
  • Code that connects dynamically to Flume:
import java.io.Serializable;
import java.util.Properties;

import org.apache.flume.FlumeException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientConfigurationConstants;
import org.apache.flume.api.RpcClientFactory;
import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginElement;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;
import org.apache.logging.log4j.core.layout.PatternLayout;

public class FindAsyncLog4j2Appender extends AbstractAppender {

    // ... fields (hosts, timeout, application, rpcClient), constructor, etc. omitted ...

    /** Builds the Flume RPC client properties for one or more hosts and (re)creates the client. */
    private void connect() {

        Properties props = new Properties();

        if(isSingleSink(hosts)) {
            props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS, "h1");
            props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS_PREFIX + "h1", hosts);
            props.setProperty(RpcClientConfigurationConstants.CONFIG_CONNECT_TIMEOUT, String.valueOf(timeout));
            props.setProperty(RpcClientConfigurationConstants.CONFIG_REQUEST_TIMEOUT, String.valueOf(timeout));
        } else {
            props = getProperties(hosts, timeout);
        }

        try {
            rpcClient = RpcClientFactory.getInstance(props);
            if(!isStarted()){
                start();
            }
        } catch (FlumeException e) {
            String errormsg = "RPC client creation failed! " + e.getMessage();
            LOGGER.error(errormsg);
            throw e;
        }
    }
    /**
     * Create a FindAsyncLog4j2Appender.
     *
     * @param blocking         True if the Appender should wait when the queue is full. The default is true.
     * @param shutdownTimeout  How many milliseconds the Appender should wait to flush outstanding log events
     *                         in the queue on shutdown. The default is zero, which means wait forever.
     * @param size             The size of the event queue. The default is 128.
     * @param name             The name of the Appender.
     * @param hosts            The Flume host(s) to send to, in host:port form (a single host or a comma-separated list).
     * @param timeout          Connect and request timeout for the Flume RPC client, in milliseconds.
     * @param filter           The Filter, or null.
     * @param application      The application name passed to the appender.
     * @param layout           The Layout, or a default PatternLayout when null.
     * @return The FindAsyncLog4j2Appender, or null if a required attribute is missing.
     */
    @PluginFactory
    public static FindAsyncLog4j2Appender createAppender(
            @PluginAttribute(value = "blocking", defaultBoolean = true) boolean blocking,
            @PluginAttribute(value = "shutdownTimeout") long shutdownTimeout,
            @PluginAttribute(value = "bufferSize") int size,
            @PluginAttribute("name") final String name,
            @PluginAttribute("hosts") String hosts,
            @PluginAttribute(value = "timeout") Long timeout,
            @PluginElement("Filter") Filter filter,
            @PluginAttribute("application") String application,
            @PluginElement("Layout") Layout<? extends Serializable> layout) {

        if (name == null) {
            LOGGER.error("No name provided for FindAsyncLog4j2Appender");
            return null;
        }
        if (hosts == null) {
            LOGGER.error("No host provided for FindAsyncLog4j2Appender");
            return null;
        }

        if (application == null) {
            LOGGER.error("No application provided for FindAsyncLog4j2Appender");
            return null;
        }

        if (layout == null) {
            layout = PatternLayout.createDefaultLayout();
        }

        return new FindAsyncLog4j2Appender(name, filter, layout, size, blocking,
                shutdownTimeout, hosts, timeout, application);
    }

   // ... append(LogEvent), stop() and the remaining overrides omitted ...

}
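The getProperties(hosts, timeout) helper used in the multi-host branch is not shown in the original post. A minimal sketch, assuming hosts is a comma-separated list of host:port pairs and that a failover RPC client is wanted (the helper body and the failover choice are assumptions, not the original implementation):

    private static Properties getProperties(String hosts, long timeout) {
        // e.g. hosts = "flume1:41414,flume2:41414"
        String[] hostPorts = hosts.split(",");
        Properties props = new Properties();

        // register each host under an alias h1, h2, ... as the Flume RPC client expects
        StringBuilder aliases = new StringBuilder();
        for (int i = 0; i < hostPorts.length; i++) {
            String alias = "h" + (i + 1);
            aliases.append(i == 0 ? "" : " ").append(alias);
            props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS_PREFIX + alias,
                    hostPorts[i].trim());
        }
        props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS, aliases.toString());

        // fail over between the configured hosts instead of using a single sink
        props.setProperty(RpcClientConfigurationConstants.CONFIG_CLIENT_TYPE,
                RpcClientFactory.ClientType.DEFAULT_FAILOVER.name());

        props.setProperty(RpcClientConfigurationConstants.CONFIG_CONNECT_TIMEOUT, String.valueOf(timeout));
        props.setProperty(RpcClientConfigurationConstants.CONFIG_REQUEST_TIMEOUT, String.valueOf(timeout));
        return props;
    }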

 

  • Registering the appender

The application initializes a FindAsyncLog4j2Appender and adds it to the Log4j 2 configuration via addAppender:

// obtain the current Log4j 2 configuration
// (assumes appenderName, flumeHost, loggerName and layout are already defined)
LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
Configuration config = ctx.getConfiguration();

FindAsyncLog4j2Appender appender = FindAsyncLog4j2Appender
        .createAppender(false, 0, 1024 * 10, appenderName, flumeHost,
                3000L, null, "testApp", layout);
appender.start();
config.addAppender(appender);

// route a dedicated logger to the new appender
AppenderRef ref = AppenderRef.createAppenderRef(appenderName, null, null);
AppenderRef[] refs = new AppenderRef[]{ref};
LoggerConfig loggerConfig = LoggerConfig.createLogger(false, Level.INFO, loggerName,
        "true", refs, null, config, null);
loggerConfig.addAppender(appender, null, null);
config.addLogger(loggerName, loggerConfig);
ctx.updateLoggers();   // apply the configuration change
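Once the logger configuration is registered, the application obtains the logger by name and logs as usual. A minimal usage sketch (the message text is only an example):

Logger logger = LogManager.getLogger(loggerName);
// formatted by the layout, sent through the Flume RPC client, then on to Kafka
logger.info("order created: id=42");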

 

Start the application and check the results with a Kafka consumer on the topic. For the startup steps, see the previous post: Sending Log4j Data Directly to Flume + Kafka (Method 1).

If the Flume service is stopped manually, the application keeps running normally; once Flume is started again, the appender automatically reconnects and resumes sending logs.
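This reconnect behaviour lives in the parts of the appender elided above. A minimal sketch of what append() could look like is shown below; it is an assumption about the omitted code, not the original implementation (it uses org.apache.flume.event.EventBuilder, org.apache.flume.Event, org.apache.flume.EventDeliveryException and org.apache.logging.log4j.core.LogEvent):

    @Override
    public void append(LogEvent event) {
        try {
            if (rpcClient == null || !rpcClient.isActive()) {
                connect();   // (re)establish the connection to Flume
            }
            // wrap the formatted log line in a Flume event and send it over RPC
            Event flumeEvent = EventBuilder.withBody(getLayout().toByteArray(event));
            rpcClient.append(flumeEvent);
        } catch (EventDeliveryException | FlumeException e) {
            LOGGER.warn("Could not deliver event to Flume, will retry on the next append: " + e.getMessage());
            if (rpcClient != null) {
                rpcClient.close();
                rpcClient = null;   // forces a reconnect on the next event
            }
        }
    }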

 

The demo code for this application is available at https://github.com/spring410/springbootlog4jflume

 

 
