Storm ack fault-tolerance mechanism: a worked example

References:
http://blog.csdn.net/suifeng3051/article/details/41682441
https://www.cnblogs.com/intsmaze/p/5918087.html
http://www.aboutyun.com/thread-9526-1-1.html


Rewriting the wordcount program to implement the spout ack/fail mechanism:

/**
 *  Adapted from:
 *  https://www.cnblogs.com/intsmaze/p/5918087.html
 *  http://www.aboutyun.com/thread-9526-1-1.html
 */
public class ReliableSpout extends BaseRichSpout {
    // key: messageId, value: the emitted line, kept until it is acked
    private HashMap<String, String> waitAck = new HashMap<String, String>();

    public static final String FILE_PATH = "D:\\1.log";
    private SpoutOutputCollector collector;
    private BufferedReader bufferedReader;

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        try {
            this.bufferedReader = new BufferedReader(new FileReader(new File(FILE_PATH)));
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public void nextTuple() {
        try {
            String line = bufferedReader.readLine();
            if (StringUtils.isNotBlank(line)) {
                List<Object> arrayList = new ArrayList<Object>();
                arrayList.add(line);
//                collector.emit(arrayList); // unanchored emit: no messageId, no ack/fail tracking
                // Emitting with a messageId enables the ack/fail mechanism;
                // it requires at least one acker task: config.setNumAckers(1);
                String messageId = UUID.randomUUID().toString().replaceAll("-", "");
                // Cache the line under its messageId so fail() can replay it;
                // ack() removes it once the tuple tree has fully succeeded
                waitAck.put(messageId, line);
                collector.emit(arrayList, messageId);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        // A spout may declare multiple output streams, i.e. several distinct kinds of data.
        outputFieldsDeclarer.declare(new Fields("juzi"));
    }

    public void ack(Object msgId) {
        System.out.println("Message processed successfully: " + msgId);
        System.out.println("Removing it from the cache...");
        waitAck.remove(msgId);
    }

    public void fail(Object msgId) {
        System.out.println("Message processing failed: " + msgId);
        System.out.println("Re-emitting the failed message...");
        // Replay under the same msgId. Note that if the ack/fail mechanism is
        // not enabled, entries are never removed from the spout's map.
        collector.emit(new Values(waitAck.get(msgId)), msgId);
    }
}
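The heart of the spout above is the `waitAck` bookkeeping: cache the line under its `messageId` when emitting, drop it on `ack()`, and read it back for replay on `fail()`. The lifecycle can be exercised without Storm at all; the following is a minimal plain-Java sketch (the class and method names are illustrative, not a Storm API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative stand-in for the spout's waitAck bookkeeping: cache each
// message under its messageId on emit, drop it on ack, read it back on fail.
public class WaitAckCache {
    private final Map<String, String> waitAck = new HashMap<>();

    // Called when the spout emits a tuple with a messageId.
    public String emit(String line) {
        String messageId = UUID.randomUUID().toString().replaceAll("-", "");
        waitAck.put(messageId, line);   // keep the data until ack() arrives
        return messageId;
    }

    // Storm calls ack(msgId) once the whole tuple tree has succeeded.
    public void ack(String messageId) {
        waitAck.remove(messageId);      // safe to forget: it will never be replayed
    }

    // Storm calls fail(msgId) on failure or timeout; return the cached
    // data so the caller can re-emit it under the same messageId.
    public String fail(String messageId) {
        return waitAck.get(messageId);
    }

    public int pending() {
        return waitAck.size();
    }
}
```

The invariant worth testing is that an entry survives any number of `fail()` calls but disappears exactly once on `ack()`; a spout that forgets the `put` on emit will replay `null` on failure.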

When emitting with a messageId, numAckers must be at least 1:

public class StormTopologyDriver {

    public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException {
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        topologyBuilder.setSpout("mySpout", new ReliableSpout(), 2);
        topologyBuilder.setBolt("bolt1", new MySplitBolt(), 4).shuffleGrouping("mySpout");
        topologyBuilder.setBolt("bolt2", new MyWordCountAndPrintBolt(), 2).shuffleGrouping("bolt1");

        Config config = new Config();
        config.setNumWorkers(2);
        StormTopology stormTopology = topologyBuilder.createTopology();

        config.setDebug(false);

        config.setNumAckers(2);
        LocalCluster localCluster = new LocalCluster();
        localCluster.submitTopology("wordcount", config, stormTopology);
    }
}
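The reason a couple of acker tasks (`config.setNumAckers(2)`) can track every in-flight tuple cheaply is Storm's XOR trick: each spout tuple tree gets one 64-bit "ack val", every tuple created in the tree XORs its random id into it, and every ack XORs the same id again, so the value returns to 0 exactly when all tuples are acked. A self-contained sketch of that bookkeeping (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Illustrative model of Storm's acker: one 64-bit XOR accumulator per
// spout tuple tree. The tree is complete exactly when the value is 0.
public class AckerSketch {
    private final Map<String, Long> ackVals = new HashMap<>();
    private final Random random = new Random();

    // A new tuple (root or anchored child) joins the tree: XOR its id in.
    public long tupleCreated(String rootMessageId) {
        long tupleId = random.nextLong();
        ackVals.merge(rootMessageId, tupleId, (a, b) -> a ^ b);
        return tupleId;
    }

    // A bolt acks the tuple: XOR the same id again, cancelling it out.
    public void tupleAcked(String rootMessageId, long tupleId) {
        ackVals.merge(rootMessageId, tupleId, (a, b) -> a ^ b);
    }

    // Zero means every created tuple has also been acked.
    public boolean treeComplete(String rootMessageId) {
        return ackVals.getOrDefault(rootMessageId, 0L) == 0L;
    }
}
```

This is why the acker needs only about 20 bytes per spout tuple regardless of how large the tuple tree grows; a collision (the accumulator hitting 0 early) requires two random 64-bit ids to cancel, which is negligible in practice.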

Rewriting the wordcount program to implement the bolt ack/fail mechanism:

/**
 * Splits each sentence into (word, 1) pairs.
 */
public class MySplitBolt extends BaseRichBolt {
    private OutputCollector collector;

    // Initialization; called exactly once
    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        String juzi = tuple.getStringByField("juzi");
        String[] words = juzi.split(" ");
        for (String word : words) {
            // Anchor each output tuple to the input tuple so downstream
            // failures propagate back to the spout's fail()
            collector.emit(tuple, new Values(word, 1));
        }
        // Ack the input tuple after all anchored children have been emitted
        this.collector.ack(tuple);
    }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "num"));
    }
}
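The `collector.emit(tuple, new Values(word, 1))` overload is what makes this bolt reliable: by passing the input tuple as the anchor, each word tuple becomes a child in the spout tuple's tree, so a fail or timeout anywhere downstream triggers the spout's `fail()` for the root messageId, while the unanchored overload would silently break the chain. The propagation can be modelled in plain Java (illustrative names, not a Storm API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of anchoring: every child tuple is registered under
// its root messageId, so one failed child fails the whole root message.
public class TupleTreeSketch {
    private final Map<String, Set<Long>> pending = new HashMap<>();
    private final Set<String> failed = new HashSet<>();
    private long nextId = 1;

    // emit(anchor, values): the new tuple inherits the anchor's root.
    public long emitAnchored(String rootMessageId) {
        long id = nextId++;
        pending.computeIfAbsent(rootMessageId, k -> new HashSet<>()).add(id);
        return id;
    }

    // A bolt acked one child tuple.
    public void ack(String rootMessageId, long tupleId) {
        Set<Long> kids = pending.get(rootMessageId);
        if (kids != null) {
            kids.remove(tupleId);
        }
    }

    // One failed child marks the whole root message as failed.
    public void fail(String rootMessageId) {
        failed.add(rootMessageId);
        pending.remove(rootMessageId);
    }

    // What the spout eventually observes for this messageId.
    public String spoutOutcome(String rootMessageId) {
        if (failed.contains(rootMessageId)) return "fail";
        Set<Long> kids = pending.get(rootMessageId);
        return (kids == null || kids.isEmpty()) ? "ack" : "pending";
    }
}
```

In real Storm the "pending forever" case is also covered: a tree that is not completed within topology.message.timeout.secs (30 seconds by default) is failed automatically, which is what drives the replay in `ReliableSpout.fail()`.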