Storm Getting Started 02 -- Study Notes [Storm internals, integrating Storm with HDFS and MySQL, using Storm timers, and the workflow and business logic of the log monitoring and alerting project]


Storm Getting Started 02 Study Notes

1. Objectives
  • 1. Understand how Storm tasks are submitted and executed
  • 2. Learn how to integrate Storm with HDFS and MySQL
  • 3. Learn how to use Storm timers (tick tuples)
  • 4. Understand the workflow and business logic of the log monitoring and alerting project
2. Storm internals and task submission


(1) The client submits the topology to the Nimbus master node.
(2) Nimbus receives the task information from the client, saves it to a local directory, and then writes the task assignment information to the ZooKeeper cluster.
(3) ZooKeeper stores the assignment information, including which supervisor nodes should later start worker processes to run the corresponding tasks.
(4) Each supervisor periodically polls ZooKeeper to obtain the task information assigned to it.
(5) The supervisor locates the live Nimbus node, copies the jar and code required by its tasks to its own machine, and then runs them.


  • Storm local directory tree (figure omitted)

  • ZooKeeper local directory tree (figure omitted)

3. Integrating Storm with HDFS


  • Data processing flow:

    (1) A RandomSpout continuously generates order data and sends it to the downstream bolt.
    (2) A CountMoneyBolt keeps a running total of the amount across all orders and then forwards the order data to the next bolt.
    (3) An HdfsBolt writes the incoming order stream to HDFS.
    (4) A Hive external partitioned table is later built over the order data so it can be analyzed offline.

  • Code

    • 1. Add the storm-hdfs dependency

      <dependency>
          <groupId>org.apache.storm</groupId>
          <artifactId>storm-hdfs</artifactId>
          <version>1.1.1</version>
      </dependency>
      
    • 2. RandomOrderSpout

      package cn.itcast.hdfs;
      
      import cn.itcast.realBoard.domain.PaymentInfo;
      import org.apache.storm.spout.SpoutOutputCollector;
      import org.apache.storm.task.TopologyContext;
      import org.apache.storm.topology.OutputFieldsDeclarer;
      import org.apache.storm.topology.base.BaseRichSpout;
      import org.apache.storm.tuple.Fields;
      import org.apache.storm.tuple.Values;
      
      import java.util.Map;
      
      //todo: randomly generate a stream of order data and send it to the downstream bolt for processing
      //      (a sketch of the PaymentInfo class used here is given after the topology code below)
      public class RandomOrderSpout extends BaseRichSpout {
            private SpoutOutputCollector collector;
            private PaymentInfo paymentInfo;
      
          /**
           * Initialization method, called only once
           * @param conf
           * @param context
           * @param collector
           */
          @Override
          public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
                this.collector=collector;
                this.paymentInfo=new PaymentInfo();
          }
      
          @Override
          public void nextTuple() {
               //generate a random order as a JSON string
              String order = paymentInfo.random();
              collector.emit(new Values(order));
      
          }
      
          @Override
          public void declareOutputFields(OutputFieldsDeclarer declarer) {
             declarer.declare(new Fields("order"));
          }
      }
      
      
    • 3. CountMoneyBolt

      package cn.itcast.hdfs;
      
      import cn.itcast.realBoard.domain.PaymentInfo;
      import com.alibaba.fastjson.JSONObject;
      import org.apache.storm.topology.BasicOutputCollector;
      import org.apache.storm.topology.OutputFieldsDeclarer;
      import org.apache.storm.topology.base.BaseBasicBolt;
      import org.apache.storm.tuple.Fields;
      import org.apache.storm.tuple.Tuple;
      import org.apache.storm.tuple.Values;
      
      import java.util.HashMap;
      
      //todo: receive the order data from the spout, keep a running total of the order amount, and forward the order data to the downstream bolt
      public class CountMoneyBolt extends BaseBasicBolt {
          private HashMap<String,Long> map=new HashMap<String,Long>();
      
          @Override
          public void execute(Tuple input, BasicOutputCollector collector) {
              String orderJson = input.getStringByField("order");
              PaymentInfo paymentInfo = JSONObject.parseObject(orderJson, PaymentInfo.class);
              long price = paymentInfo.getPayPrice();
      
              if(!map.containsKey("totalPrice")){
                   map.put("totalPrice",price);
              }else{
                  map.put("totalPrice",price+map.get("totalPrice"));
              }
              System.out.println(map);
      
              collector.emit(new Values(orderJson));
      
          }
      
          @Override
          public void declareOutputFields(OutputFieldsDeclarer declarer) {
              declarer.declare(new Fields("orderJson"));
          }
      }
      
      
    • 4. StormHdfsTopology

      package cn.itcast.hdfs;
      
      import org.apache.storm.Config;
      import org.apache.storm.LocalCluster;
      import org.apache.storm.StormSubmitter;
      import org.apache.storm.generated.AlreadyAliveException;
      import org.apache.storm.generated.AuthorizationException;
      import org.apache.storm.generated.InvalidTopologyException;
      import org.apache.storm.hdfs.bolt.HdfsBolt;
      import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
      import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
      import org.apache.storm.hdfs.bolt.format.FileNameFormat;
      import org.apache.storm.hdfs.bolt.format.RecordFormat;
      import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
      import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
      import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
      import org.apache.storm.hdfs.bolt.sync.SyncPolicy;
      import org.apache.storm.topology.TopologyBuilder;
      
      public class StormHdfsTopology {
          public static void main(String[] args) throws InvalidTopologyException, AuthorizationException, AlreadyAliveException {
              TopologyBuilder topologyBuilder = new TopologyBuilder();
              topologyBuilder.setSpout("randomOrderSpout",new RandomOrderSpout());
              topologyBuilder.setBolt("countMoneyBolt",new CountMoneyBolt()).shuffleGrouping("randomOrderSpout");
      
              // use "|" instead of "," as the field delimiter
              RecordFormat format = new DelimitedRecordFormat()
                      .withFieldDelimiter("|");
      
              // sync the filesystem after every 1,000 tuples (batch write)
              SyncPolicy syncPolicy = new CountSyncPolicy(1000);
      
              // rotate files when they reach 5 MB
              FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, FileSizeRotationPolicy.Units.MB);
      
              // directory on HDFS where the data files will be written
              FileNameFormat fileNameFormat = new DefaultFileNameFormat()
                      .withPath("/storm-data/");
      
              // build the HdfsBolt with the NameNode address and the policies above
              HdfsBolt hdfsBolt = new HdfsBolt()
                      .withFsUrl("hdfs://node1:9000")
                      .withFileNameFormat(fileNameFormat)
                      .withRecordFormat(format)
                      .withRotationPolicy(rotationPolicy)
                      .withSyncPolicy(syncPolicy);
      
      
              topologyBuilder.setBolt("hdfsBolt",hdfsBolt).shuffleGrouping("countMoneyBolt");
      
              Config config = new Config();
              if(args !=null && args.length>0){
                  //submit to the cluster
                  StormSubmitter.submitTopology(args[0],config,topologyBuilder.createTopology());
              }else {
                  //run in local mode
                  LocalCluster localCluster = new LocalCluster();
                  localCluster.submitTopology("storm-hdfs",config,topologyBuilder.createTopology());
      
              }
      
      
          }
      }
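
The spout and bolt above depend on a PaymentInfo domain class (cn.itcast.realBoard.domain.PaymentInfo) that these notes do not include. A minimal sketch, assuming only the members the code above actually uses (random() returning an order serialized as JSON, and a payPrice field); the real course class almost certainly carries more order fields:

    package cn.itcast.realBoard.domain;

    import com.alibaba.fastjson.JSONObject;

    import java.io.Serializable;
    import java.util.Random;
    import java.util.UUID;

    //hypothetical stand-in for the course's PaymentInfo class
    public class PaymentInfo implements Serializable {
        private String orderId;
        private long payPrice;                      //order amount, e.g. in cents

        private transient Random rand = new Random();

        //build a random order and return it serialized as a JSON string (fastjson)
        public String random() {
            this.orderId = UUID.randomUUID().toString();
            this.payPrice = rand.nextInt(10000);
            return JSONObject.toJSONString(this);
        }

        public String getOrderId() { return orderId; }
        public void setOrderId(String orderId) { this.orderId = orderId; }

        public long getPayPrice() { return payPrice; }
        public void setPayPrice(long payPrice) { this.payPrice = payPrice; }
    }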
      
4. Storm's ack mechanism
4.1 What is the ack mechanism?

Storm is a real-time processing framework. When it processes data we want to guarantee that nothing is lost, that every message is fully processed, and that there is a well-defined strategy when processing fails.
The ack mechanism provides exactly this: it tells us whether a message was processed successfully, and when a message fails we can apply a recovery strategy, for example re-emitting the failed message so that it is processed again.

4.2 How the ack mechanism works


Every time a tuple is emitted and every time it is acked, a marker (the tuple's random id) is XORed into a value that the acker keeps in memory, so this value accumulates the marks of every send/receive in the tuple tree.

The acker relies on the XOR operation:
same bits give 0
different bits give 1
0 ⊕ 0 = 0
0 ⊕ 1 = 1
1 ⊕ 0 = 1
1 ⊕ 1 = 0

If the final XOR result is 0, the tuple was processed successfully at every stage; if the result is not 0 (or the timeout expires), the tuple is considered to have failed.
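
A tiny standalone sketch (not Storm's actual Acker code) of why XOR works for tracking a tuple tree: each tuple id is XORed into a running value once when the tuple is emitted (anchored) and once when it is acked, so a fully processed tree always brings the value back to 0:

    public class XorAckDemo {
        public static void main(String[] args) {
            long ackVal = 0L;
            long[] tupleIds = {0x1234L, 0xBEEFL, 0xCAFEL};   //hypothetical tuple ids

            for (long id : tupleIds) {
                ackVal ^= id;                                //tuple emitted (anchored)
            }
            for (long id : tupleIds) {
                ackVal ^= id;                                //tuple acked downstream
            }
            System.out.println(ackVal == 0 ? "tuple tree fully processed" : "still pending / failed");
        }
    }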

4.3 How to enable the ack mechanism
  • 1. Logic in the spout (a complete spout sketch is given after this list)

    //when emitting, specify a messageId so that later we can tell which message succeeded and which failed
    //to keep things simple here, the data itself is used as the messageId
    collector.emit(new Values(line), line);
    
    When a message is fully processed, the spout's ack method is called:
        public void ack(Object msgId) {
            //called after the tuple tree has been fully processed
            System.out.println("message processed successfully: " + msgId);

        }
        
        
    When processing fails, the spout's fail method is called:
        public void fail(Object msgId) {
            //called when the tuple tree fails or times out
            System.out.println("message that failed: " + msgId);

            //re-emit the failed message
            collector.emit(new Values(msgId), msgId);
        }
    
    
    
  • 2. Logic in the bolt

    • BaseBasicBolt

      It automatically calls ack after each tuple is processed successfully, and fail when processing throws an error,

      so you do not need to call ack or fail yourself.

    • BaseRichBolt

      You must call ack yourself on success:
      collector.ack(input)

      and call fail yourself on failure:
      collector.fail(input)

    • Also note: when the spout's fail action is triggered, the failed tuple is not re-sent automatically; the spout has to keep the data itself and re-emit it manually.

  • 3. With the ack mechanism, for every message the spout emits:

    • If the spout receives an ack response from the Acker within the timeout, the tuple is considered to have been successfully processed by the downstream bolts.
    • If no ack response arrives within the timeout, the fail action is triggered and the tuple is considered failed.
    • If the spout receives a fail response from the Acker, the tuple is likewise considered failed and the fail action is triggered.
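
Putting the pieces above together, a minimal sketch of a BaseRichSpout with the ack mechanism enabled; the field name "line", the stand-in data source and the in-memory pending cache are illustrative assumptions, not project code:

    import org.apache.storm.spout.SpoutOutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseRichSpout;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Values;
    import org.apache.storm.utils.Utils;

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public class ReliableLineSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private Map<String, String> pending;                 //messageId -> data, kept for replay on fail

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
            this.pending = new ConcurrentHashMap<String, String>();
        }

        @Override
        public void nextTuple() {
            String line = "some line of data";               //stand-in for a real source
            String msgId = UUID.randomUUID().toString();
            pending.put(msgId, line);
            collector.emit(new Values(line), msgId);         //emitting with a messageId enables acking
            Utils.sleep(1000);                               //throttle the demo source
        }

        @Override
        public void ack(Object msgId) {
            pending.remove(msgId);                           //fully processed, drop it from the cache
        }

        @Override
        public void fail(Object msgId) {
            String line = pending.get(msgId);
            if (line != null) {
                collector.emit(new Values(line), msgId);     //the spout has to replay manually
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("line"));
        }
    }
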
4.4 Disabling the ack mechanism

(1) Do not specify a messageId when emitting from the spout.
(2) Set the number of acker threads to 0:
config.setNumAckers(0);

5. Storm timers (tick tuples) and MySQL integration
  • 1. Add the dependencies

           <dependency>
                <groupId>org.apache.storm</groupId>
                <artifactId>storm-jdbc</artifactId>
                <version>1.1.1</version>
            </dependency>
            <!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
            <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
                <version>5.1.38</version>
            </dependency>
            <dependency>
                <groupId>com.google.collections</groupId>
                <artifactId>google-collections</artifactId>
                <version>1.0</version>
            </dependency>
    
  • 2. Code

    • 1. RandomSpout

      package cn.itcast.tickAndMysql;
      
      import org.apache.storm.spout.SpoutOutputCollector;
      import org.apache.storm.task.TopologyContext;
      import org.apache.storm.topology.OutputFieldsDeclarer;
      import org.apache.storm.topology.base.BaseRichSpout;
      import org.apache.storm.tuple.Fields;
      import org.apache.storm.tuple.Values;
      
      import java.util.Map;
      import java.util.Random;
      
      //todo: randomly generate data and send it downstream
      public class RandomSpout extends BaseRichSpout {
          private SpoutOutputCollector collector;
          private Random random;
          private String[] user;
      
          @Override
          public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
              this.collector=collector;
              this.random=new Random();
              this.user=new String[]{"1 zhangsan 30","2 lisi 40"};
          }
      
          @Override
          public void nextTuple() {
              try {
                  //pick one of the user records at random
                  int index = random.nextInt(user.length);
                  String line = user[index];
                  collector.emit(new Values(line));
                  Thread.sleep(1000);
              } catch (InterruptedException e) {
                  e.printStackTrace();
              }
          }
      
          @Override
          public void declareOutputFields(OutputFieldsDeclarer declarer) {
              declarer.declare(new Fields("line"));
          }
      }
      
      
    • 2. TickBolt

      package cn.itcast.tickAndMysql;
      
      import org.apache.storm.Config;
      import org.apache.storm.Constants;
      import org.apache.storm.topology.BasicOutputCollector;
      import org.apache.storm.topology.OutputFieldsDeclarer;
      import org.apache.storm.topology.base.BaseBasicBolt;
      import org.apache.storm.tuple.Fields;
      import org.apache.storm.tuple.Tuple;
      import org.apache.storm.tuple.Values;
      
      import java.text.SimpleDateFormat;
      import java.util.Date;
      import java.util.Map;
      
      //todo: receive data from the spout, print the system time every 5 seconds, parse the incoming records, and pass the results to the downstream bolt
      public class TickBolt  extends BaseBasicBolt{
      
          @Override
          public Map<String, Object> getComponentConfiguration() {
              //ask Storm to send system-level tick tuples to this bolt
              Config config = new Config();
              //one tick tuple every 5 seconds
              config.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS,5);
      
              return config;
          }
      
          @Override
          public void execute(Tuple input, BasicOutputCollector collector) {
               //check whether the tuple came from the system (a tick tuple) or from the spout
              if(input.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID) && input.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID)){
      
                  //print the system time every 5 seconds
                  Date date = new Date();
                  SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                  String dateFormat = format.format(date);
                  System.out.println(dateFormat);
      
              }else{
                  String user = input.getStringByField("line");
                  String[] split = user.split(" ");
                  String id = split[0];
                  String name = split[1];
                  String age = split[2];
      
                  //the emitted values must match the column types of the MySQL table
                  collector.emit(new Values(id,name,age));
              }
      
          }
      
          @Override
          public void declareOutputFields(OutputFieldsDeclarer declarer) {
      
              //the declared field names must match the column names in the MySQL table
              declarer.declare(new Fields("id","name","age"));
          }
      }
      
      
    • 3. TickMysqlTopology (driver class)

      package cn.itcast.tickAndMysql;
      
      import com.google.common.collect.Maps;
      import org.apache.storm.Config;
      import org.apache.storm.LocalCluster;
      import org.apache.storm.StormSubmitter;
      import org.apache.storm.generated.AlreadyAliveException;
      import org.apache.storm.generated.AuthorizationException;
      import org.apache.storm.generated.InvalidTopologyException;
      import org.apache.storm.generated.StormTopology;
      import org.apache.storm.jdbc.bolt.JdbcInsertBolt;
      import org.apache.storm.jdbc.common.ConnectionProvider;
      import org.apache.storm.jdbc.common.HikariCPConnectionProvider;
      import org.apache.storm.jdbc.mapper.JdbcMapper;
      import org.apache.storm.jdbc.mapper.SimpleJdbcMapper;
      import org.apache.storm.topology.TopologyBuilder;
      
      import java.util.Map;
      
      public class TickMysqlTopology {
          public static void main(String[] args) throws InvalidTopologyException, AuthorizationException, AlreadyAliveException {
              TopologyBuilder topologyBuilder = new TopologyBuilder();
              topologyBuilder.setSpout("randomSpout",new RandomSpout());
              topologyBuilder.setBolt("tickBolt",new TickBolt()).shuffleGrouping("randomSpout");
      
              //build the MySQL (JDBC insert) bolt
              Map hikariConfigMap = Maps.newHashMap();
              hikariConfigMap.put("dataSourceClassName","com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
              hikariConfigMap.put("dataSource.url", "jdbc:mysql://node-1/test");
              hikariConfigMap.put("dataSource.user","root");
              hikariConfigMap.put("dataSource.password","123");
              ConnectionProvider connectionProvider = new HikariCPConnectionProvider(hikariConfigMap);
      
              String tableName = "person";
              JdbcMapper simpleJdbcMapper = new SimpleJdbcMapper(tableName, connectionProvider);
      
              JdbcInsertBolt mysqlBolt = new JdbcInsertBolt(connectionProvider, simpleJdbcMapper)
                      .withTableName("person")
                      .withQueryTimeoutSecs(90);
      //        Or
      //        JdbcInsertBolt userPersistanceBolt = new JdbcInsertBolt(connectionProvider, simpleJdbcMapper)
      //                .withInsertQuery("insert into user values (?,?)")
      //                .withQueryTimeoutSecs(30);
      
              topologyBuilder.setBolt("mysqlBolt",mysqlBolt).shuffleGrouping("tickBolt");
              Config config = new Config();
              StormTopology stormTopology = topologyBuilder.createTopology();
      
              if(args!=null && args.length>0){
                  StormSubmitter.submitTopology(args[0],config,stormTopology);
              }else{
                  LocalCluster localCluster = new LocalCluster();
                  localCluster.submitTopology("storm-mysql",config,stormTopology);
              }
      
          }
      }
      
      

Tip: create the database and the table first, and make sure the table's column types match the values being inserted.
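
For reference, a minimal sketch of creating that table with plain JDBC; the column types are an assumption based on TickBolt emitting id, name and age as strings (the course material may well use INT columns instead), and the connection settings simply mirror the HikariCP configuration above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreatePersonTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://node-1/test", "root", "123");
                 Statement stmt = conn.createStatement()) {
                //column names must match the fields declared by TickBolt: id, name, age
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS person ("
                        + "id VARCHAR(10), name VARCHAR(50), age VARCHAR(10))");
            }
        }
    }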

6. Log monitoring and alerting system


6.1 Developing a custom Flume interceptor
package cn.itcast.flume;

import org.apache.commons.lang.StringUtils;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.List;

//todo: requirement: prepend an appId to every log line as a unique identifier for the source application
public class FlumeInterceptorApp implements Interceptor {
    //the application id configured for this Flume agent
    private String appId;

    public FlumeInterceptorApp(String appId){
        this.appId = appId;
    }

    public Event intercept(Event event){
        String message = null;
        try {
            message = new String(event.getBody(), "utf-8");
        } catch (UnsupportedEncodingException e) {
            message = new String(event.getBody());
        }

          // error java.lang.TypeNotPresentException    ---> 1\001error java.lang.TypeNotPresentException 
        if (StringUtils.isNotBlank(message)) {
            message =  this.appId + "\001" + message;
            event.setBody(message.getBytes());
            return event;
        }

        return event;
    }

    public List<Event> intercept(List<Event> list){
        List<Event> resultList = new ArrayList<Event>();
        for (Event event : list) {
            Event r = intercept(event);
            if (r != null) {
                resultList.add(r);
            }
        }
        return resultList;
    }

    public void close()
    {
    }

    public void initialize(){

    }

    public static class AppInterceptorBuilder implements Interceptor.Builder{

        private String appId;

        public Interceptor build() {
            return new FlumeInterceptorApp(this.appId);
        }

        public void configure(Context context)
        {
            this.appId = context.getString("appId", "default");
            System.out.println("appId:" + this.appId);
        }

        /**
         a1.sources = r1
         a1.channels = c1
         a1.sinks = k1

         a1.sources.r1.type = exec
         a1.sources.r1.command = tail -F /export/data/flume/click_log/error.log
         a1.sources.r1.channels = c1
         a1.sources.r1.interceptors = i1
         a1.sources.r1.interceptors.i1.type = cn.itcast.realtime.flume.AppInterceptor$AppInterceptorBuilder
         a1.sources.r1.interceptors.i1.appId = 1

         a1.channels.c1.type=memory
         a1.channels.c1.capacity=10000
         a1.channels.c1.transactionCapacity=100

         a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
         a1.sinks.k1.topic = log_monitor
         a1.sinks.k1.brokerList = kafka01:9092
         a1.sinks.k1.requiredAcks = 1
         a1.sinks.k1.batchSize = 20
         a1.sinks.k1.channel = c1


         */
    }
}

6.2 Flume configuration and startup
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /export/data/flume/click_log/error.log
a1.sources.r1.channels = c1
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = cn.itcast.flume.FlumeInterceptorApp$AppInterceptorBuilder
a1.sources.r1.interceptors.i1.appId = 1

a1.channels.c1.type=memory
a1.channels.c1.capacity=10000
a1.channels.c1.transactionCapacity=100

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = log_monitor
a1.sinks.k1.brokerList = node-1:9092,node-2:9092,node-3:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1
6.3 Code
  • 1. LogMonitorTopology

    package cn.itcast.logMonitor;
    
    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.generated.AlreadyAliveException;
    import org.apache.storm.generated.AuthorizationException;
    import org.apache.storm.generated.InvalidTopologyException;
    import org.apache.storm.generated.StormTopology;
    import org.apache.storm.kafka.spout.KafkaSpout;
    import org.apache.storm.kafka.spout.KafkaSpoutConfig;
    import org.apache.storm.topology.TopologyBuilder;
    
    public class LogMonitorTopology {
        public static void main(String[] args) throws InvalidTopologyException, AuthorizationException, AlreadyAliveException {
            TopologyBuilder topologyBuilder = new TopologyBuilder();
    
            KafkaSpoutConfig.Builder<String, String> builder = KafkaSpoutConfig.builder("node-1:9092,node-2:9092,node-3:9092", "log_monitor");
            builder.setGroupId("logMonitor");
            builder.setFirstPollOffsetStrategy(KafkaSpoutConfig.FirstPollOffsetStrategy.UNCOMMITTED_LATEST);
            KafkaSpoutConfig<String, String> kafkaSpoutConfig = builder.build();
    
            KafkaSpout<String, String> kafkaSpout = new KafkaSpout<String, String>(kafkaSpoutConfig);
            //wire up the kafkaSpout
            topologyBuilder.setSpout("kafkaSpout",kafkaSpout);
            topologyBuilder.setBolt("monitorMysqlBolt",new MonitorMysqlBolt()).shuffleGrouping("kafkaSpout");
            topologyBuilder.setBolt("processDataBolt",new ProcessDataBolt()).shuffleGrouping("monitorMysqlBolt");
            topologyBuilder.setBolt("notifyPeopleBolt",new NotifyPeopleBolt()).shuffleGrouping("processDataBolt");
            topologyBuilder.setBolt("saveDataBolt",new SaveDataBolt()).shuffleGrouping("notifyPeopleBolt");
    
            Config config = new Config();
            StormTopology stormTopology = topologyBuilder.createTopology();
    
            if(args!=null && args.length>0){
                StormSubmitter.submitTopology(args[0],config,stormTopology);
            }else{
                LocalCluster localCluster = new LocalCluster();
                localCluster.submitTopology("logMonitor",config,stormTopology);
            }
    
        }
    }
    
    
  • 2. MonitorMysqlBolt

    package cn.itcast.logMonitor;
    
    import cn.itcast.logMonitor.utils.CommonUtils;
    import org.apache.storm.Config;
    import org.apache.storm.Constants;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;
    
    import java.util.Map;
    
    //todo: receive data from the kafkaSpout and periodically sync the alert rules from the MySQL tables
    //      (a sketch of the CommonUtils helper used by these bolts is given after the last bolt in this section)
    public class MonitorMysqlBolt extends BaseBasicBolt{
           private CommonUtils commonUtils;
        /**
         * Initialization method
         * @param stormConf
         * @param context
         */
        @Override
        public void prepare(Map stormConf, TopologyContext context) {
            this.commonUtils=new CommonUtils();
        }
    
        @Override
        public Map<String, Object> getComponentConfiguration() {
            Config config = new Config();
            config.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS,5);
            return config;
        }
    
        public void execute(Tuple input, BasicOutputCollector collector) {
            if(input.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID) && input.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID)){
                //tick tuple sent by the system: sync the MySQL tables
                //sync the application info (load the app table into an in-memory map)
                commonUtils.monitorApp();
                //sync the alert rules configured for each application
                commonUtils.monitorRule();
                //sync the user (contact) information
                commonUtils.monitorUser();
    
            }else{
                //data from the kafkaSpout: field 4 is the message value (the default fields are topic, partition, offset, key, value)
                String logs = input.getString(4);
                //send the log line downstream
                collector.emit(new Values(logs));
    
            }
    
        }
    
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
           declarer.declare(new Fields("logs"));
        }
    }
    
    
  • 3. ProcessDataBolt

    package cn.itcast.logMonitor;
    
    import cn.itcast.logMonitor.utils.CommonUtils;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;
    
    //todo: receive data from upstream, parse it, match it against the rules, and forward only the records that match a rule
    public class ProcessDataBolt extends BaseBasicBolt{
        public void execute(Tuple input, BasicOutputCollector collector) {
            //incoming line looks like: 1\001error: java.lang.NegativeArraySizeException
            String logs = input.getStringByField("logs");
            String[] split = logs.split("\001");
            String appId = split[0];
            String error = split[1];
    
        //match the error line against the cached rule set for this appId
            String rules = CommonUtils.checkRules(appId, error);
    
            if(!"".equals(rules)) {
            //send the error line and the matched rule downstream
                collector.emit(new Values(logs, rules));
    
            }
        }
    
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
           declarer.declare(new Fields("errorLogs","rules"));
        }
    }
    
    
  • 4. NotifyPeopleBolt

    package cn.itcast.logMonitor;
    
    import cn.itcast.logMonitor.utils.CommonUtils;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;
    
    //todo: find the owners of the application and alert them by SMS or email
    public class NotifyPeopleBolt  extends BaseBasicBolt{
        public void execute(Tuple input, BasicOutputCollector collector) {
            String errorLogs = input.getStringByField("errorLogs");
            String rules = input.getStringByField("rules");
    
        //find the application owners and send them a notification
            CommonUtils.notifyPeople(rules,errorLogs);
    
        //pass the data on so the error information can be persisted
            collector.emit(new Values(rules,errorLogs));
    
        }
    
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
           declarer.declare(new Fields("appRules","errorLogs"));
        }
    }
    
    
  • 5. SaveDataBolt

    package cn.itcast.logMonitor;
    
    import cn.itcast.logMonitor.utils.CommonUtils;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Tuple;
    
    //todo: receive data from upstream and persist the error information
    public class SaveDataBolt  extends BaseBasicBolt{
        public void execute(Tuple input, BasicOutputCollector collector) {
            String appRules = input.getStringByField("appRules");
            String errorLogs = input.getStringByField("errorLogs");
    
        //save the error information into a MySQL table
            CommonUtils.insertToDb(appRules,errorLogs);
        }
    
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
    
        }
    }
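
The bolts in this section delegate all database and notification work to a CommonUtils helper (cn.itcast.logMonitor.utils.CommonUtils) that is not included in these notes. A bare skeleton, assuming only the method signatures the bolts call; the real implementation would cache the app/rule/user tables from MySQL, match error lines against the cached rules, send SMS/email alerts and write the triggered alerts back to MySQL:

    package cn.itcast.logMonitor.utils;

    //hypothetical skeleton of the helper used by the log-monitor bolts
    public class CommonUtils {

        //periodically reload the application list from MySQL into an in-memory map
        public void monitorApp() { /* load the app table */ }

        //periodically reload the per-application alert rules from MySQL
        public void monitorRule() { /* load the rule table */ }

        //periodically reload the user (contact) information from MySQL
        public void monitorUser() { /* load the user table */ }

        //match an error line against the cached rules of the given appId;
        //return the matched rule (e.g. as JSON), or "" when nothing matches
        public static String checkRules(String appId, String error) {
            return "";
        }

        //look up the owners of the application and notify them by SMS or email
        public static void notifyPeople(String rules, String errorLogs) { }

        //persist the triggered rule and the error log into a MySQL record table
        public static void insertToDb(String appRules, String errorLogs) { }
    }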
    
    

More details on the KafkaSpout (storm-kafka-client):
https://github.com/apache/storm/blob/caeaf255b7c20009d36c39bc2999c205082c63aa/docs/storm-kafka-client.md
