Source Code Analysis of the Task Scheduling Strategy in Storm 1.0

This article takes a close look at task scheduling in Storm 1.0: how the DefaultScheduler works, how a topology is submitted step by step, and how tasks end up distributed across the cluster. With DefaultScheduler, Storm matches executors to workers based on the cluster state, the available slots, and the topology's executor requirements, so as to make good use of resources. In particular scenarios, however, such as repeatedly submitting single-worker topologies, the allocation can become unbalanced.


I. Task Scheduling Strategies

When we submit a topology to a Storm cluster, how are its tasks assigned? Answering that requires understanding Storm's task scheduling strategy. This article focuses on the default strategy, DefaultScheduler. As of Storm 1.1.0 four strategies are supported: DefaultScheduler, IsolationScheduler, MultitenantScheduler, and ResourceAwareScheduler.
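All four strategies plug into nimbus through the same pluggable scheduler interface, org.apache.storm.scheduler.IScheduler (which implementation nimbus uses is selected with the storm.scheduler config option). As a reminder, its shape in the 1.x API is roughly the following simplified sketch:

package org.apache.storm.scheduler;

import java.util.Map;

// Simplified shape of the pluggable scheduler interface in Storm 1.x.
// nimbus instantiates the class named by the "storm.scheduler" config and
// calls schedule() periodically with a snapshot of the cluster state.
public interface IScheduler {
    // Called once with the storm configuration when nimbus sets up the scheduler.
    void prepare(Map conf);

    // Called on every scheduling round: look at the topologies that still need
    // assignment and claim slots on the Cluster for their executors.
    void schedule(Topologies topologies, Cluster cluster);
}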

II. The Topology Submission Process

Before looking at the default strategy, let's first go over the overall flow of submitting a topology to the cluster.

The main steps are:

1. In non-local mode, the client calls nimbus's Thrift interface to upload the code to nimbus and trigger the submission.
2. nimbus assigns the tasks and writes the assignment information to ZooKeeper.
3. Each supervisor periodically fetches the assignment information; if the topology's code is missing it downloads it from nimbus, and it synchronizes its workers according to the assignment.
4. Each worker starts several executor threads according to the tasks assigned to it and instantiates the spouts, bolts, ackers and other components; once all connections (the network connections the worker uses to talk to other machines) are up, the Storm cluster enters its working state.
5. Unless you explicitly kill the topology, the spouts, bolts and other components keep running.

Now let's walk through the source code of the topology submission process.

The submission call in the main method:

StormSubmitter.submitTopology("one-work",config,builder.createTopology());
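For context, a complete driver typically looks something like the sketch below; MySpout, MyBolt and the parallelism hints are hypothetical placeholders, chosen so that the totals match the 2-worker / 6-executor example used later in this article.

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // Hypothetical components: 2 spout executors + 4 bolt executors = 6 executors
        builder.setSpout("spout", new MySpout(), 2);
        builder.setBolt("bolt", new MyBolt(), 4).shuffleGrouping("spout");

        Config config = new Config();
        // Ask for 2 worker processes (slots) for this topology
        config.setNumWorkers(2);

        // Upload the jar to nimbus and submit the topology
        StormSubmitter.submitTopology("one-work", config, builder.createTopology());
    }
}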

StormSubmitter.submitTopology then calls the following method:

   

public static void submitTopologyAs(String name, Map stormConf, StormTopology topology, SubmitOptions opts, ProgressListener progressListener, String asUser)
            throws AlreadyAliveException, InvalidTopologyException, AuthorizationException, IllegalArgumentException {

        // The configuration must be JSON-serializable
        if(!Utils.isValidConf(stormConf)) {
            throw new IllegalArgumentException("Storm conf is not valid. Must be json-serializable");
        }
        stormConf = new HashMap(stormConf);
        // Merge the command-line options into stormConf
        stormConf.putAll(Utils.readCommandLineOpts());
        // Load defaults.yaml first, then storm.yaml
        Map conf = Utils.readStormConfig();
        conf.putAll(stormConf);
        // Set up the ZooKeeper-related authentication/ACLs
        stormConf.putAll(prepareZookeeperAuthentication(conf));
        validateConfs(conf, topology);
        Map<String,String> passedCreds = new HashMap<>();
        if (opts != null) {
            Credentials tmpCreds = opts.get_creds();
            if (tmpCreds != null) {
                passedCreds = tmpCreds.get_creds();
            }
        }
        Map<String,String> fullCreds = populateCredentials(conf, passedCreds);
        if (!fullCreds.isEmpty()) {
            if (opts == null) {
                opts = new SubmitOptions(TopologyInitialStatus.ACTIVE);
            }
            opts.set_creds(new Credentials(fullCreds));
        }
        try {
            // Local mode
            if(localNimbus!=null) {
                LOG.info("Submitting topology " + name + " in local mode");
                if(opts!=null) {
                    localNimbus.submitTopologyWithOpts(name, stormConf, topology, opts);
                } else {
                    // this is for backwards compatibility
                    localNimbus.submitTopology(name, stormConf, topology);
                }
                LOG.info("Finished submitting topology: " +  name);
            // The focus here: submitting the topology to a real cluster (distributed mode)
            } else {
                // Serialize the configuration to a JSON string
                String serConf = JSONValue.toJSONString(stormConf);
                // Check whether a topology with this name already exists on the cluster
                if(topologyNameExists(conf, name, asUser)) {
                    throw new RuntimeException("Topology with name `" + name + "` already exists on cluster");
                }
                // Upload the jar to nimbus. At this point the topology is not running yet; the jar has merely been handed to nimbus, waiting for the subsequent scheduling
                String jar = submitJarAs(conf, System.getProperty("storm.jar"), progressListener, asUser);
                try (
                    // Obtain a Nimbus client
                    NimbusClient client = NimbusClient.getConfiguredClientAs(conf, asUser)) {
                    LOG.info("Submitting topology " + name + " in distributed mode with conf " + serConf);
                    // Call submitTopologyWithOpts to formally submit the topology to nimbus. "Submitting the topology" really means sending its configuration to the Thrift server over Thrift (the jar has already been uploaded to nimbus) and waiting for nimbus to process it. The topology is not running yet; it only comes alive once recv_submitTopology returns successfully
                    if (opts != null) {
                        client.getClient().submitTopologyWithOpts(name, jar, serConf, topology, opts);
                    } else {
                        // this is for backwards compatibility
                        client.getClient().submitTopology(name, jar, serConf, topology);
                    }
                    LOG.info("Finished submitting topology: " + name);
                } catch (InvalidTopologyException e) {
                    LOG.warn("Topology submission exception: " + e.get_msg());
                    throw e;
                } catch (AlreadyAliveException e) {
                    LOG.warn("Topology already alive exception", e);
                    throw e;
                }
            }
        } catch(TException e) {
            throw new RuntimeException(e);
        }
        invokeSubmitterHook(name, asUser, conf, topology);
    }

Within submitTopologyAs, the jar is uploaded by calling:

  
public static String submitJarAs(Map conf, String localJar, ProgressListener listener, String asUser) {
        if (localJar == null) {
            throw new RuntimeException("Must submit topologies using the 'storm' client script so that StormSubmitter knows which jar to upload.");
        }
        // Obtain a Nimbus client (try-with-resources, so it is closed automatically)
        try (NimbusClient client = NimbusClient.getConfiguredClientAs(conf, asUser)) {
            // Ask nimbus for the location where the topology jar should be stored
            String uploadLocation = client.getClient().beginFileUpload();
            LOG.info("Uploading topology jar " + localJar + " to assigned location: " + uploadLocation);
            BufferFileInputStream is = new BufferFileInputStream(localJar, THRIFT_CHUNK_SIZE_BYTES);
            long totalSize = new File(localJar).length();
            if (listener != null) {
                listener.onStart(localJar, uploadLocation, totalSize);
            }
            long bytesUploaded = 0;
            while(true) {
                byte[] toSubmit = is.read();
                bytesUploaded += toSubmit.length;
                if (listener != null) {
                    listener.onProgress(localJar, uploadLocation, bytesUploaded, totalSize);
                }
                if(toSubmit.length==0) break;
                // Upload the jar chunk by chunk
                client.getClient().uploadChunk(uploadLocation, ByteBuffer.wrap(toSubmit));
            }
            // Tell nimbus the jar upload is finished
            client.getClient().finishFileUpload(uploadLocation);
            if (listener != null) {
                listener.onCompleted(localJar, uploadLocation, totalSize);
            }
            LOG.info("Successfully uploaded topology jar to assigned location: " + uploadLocation);
            // Return the location where the jar was stored
            return uploadLocation;
        } catch(Exception e) {
            throw new RuntimeException(e);
        }
    }
The submission then goes through the generated Thrift client's submitTopology:
   
public void submitTopology(String name, String uploadedJarLocation, String jsonConf, StormTopology topology) throws AlreadyAliveException, InvalidTopologyException, AuthorizationException, org.apache.thrift.TException
    {
      // Send the topology information to nimbus
      send_submitTopology(name, uploadedJarLocation, jsonConf, topology);
      // Receive the returned result
      recv_submitTopology();
    }
which in turn calls:
   
   
 public void send_submitTopology(String name, String uploadedJarLocation, String jsonConf, StormTopology topology) throws org.apache.thrift.TException{
      submitTopology_args args = new submitTopology_args();
      args.set_name(name);
      args.set_uploadedJarLocation(uploadedJarLocation);
      args.set_jsonConf(jsonConf);
      args.set_topology(topology);
      sendBase("submitTopology", args);
    }
and then:
  
   
public void recv_submitTopology() throws AlreadyAliveException, InvalidTopologyException, AuthorizationException, org.apache.thrift.TException
    {
      submitTopology_result result = new submitTopology_result();
      receiveBase(result, "submitTopology");
      if (result.e != null) {
        throw result.e;
      }
      if (result.ite != null) {
        throw result.ite;
      }
      if (result.aze != null) {
        throw result.aze;
      }
      return;
}
 

III. Task Assignment

At this point the topology has been submitted to nimbus; the next step is task assignment. Storm ships with the four scheduling strategies listed above, DefaultScheduler being the default.

The DefaultScheduler strategy essentially works in a few steps (a simplified sketch follows this list):

1. Get the topologies in the cluster that still need task assignment.

2. Get the available slots across the whole cluster.

3. Get the executors of the current topology that need to be assigned.

4. Compute the slots in the cluster that can be freed.

5. Combine the freeable slots with the currently free slots.

6. Perform the assignment for the topology.
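The real DefaultScheduler in Storm 1.0 is written in Clojure; it delegates the even spreading to EvenScheduler and additionally computes the "bad" slots that can be reclaimed (steps 4 and 5). As a rough illustration of steps 1-3 and 6 only, here is a simplified, hypothetical Java sketch built on the public scheduler API; it ignores slot reclamation, slot sorting and several corner cases:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.storm.scheduler.Cluster;
import org.apache.storm.scheduler.ExecutorDetails;
import org.apache.storm.scheduler.IScheduler;
import org.apache.storm.scheduler.Topologies;
import org.apache.storm.scheduler.TopologyDetails;
import org.apache.storm.scheduler.WorkerSlot;

// Hypothetical, simplified sketch of the even-assignment idea (steps 1-3 and 6).
public class NaiveEvenScheduler implements IScheduler {

    @Override
    public void prepare(Map conf) {
        // nothing to prepare in this sketch
    }

    @Override
    public void schedule(Topologies topologies, Cluster cluster) {
        for (TopologyDetails topology : topologies.getTopologies()) {
            // Step 1: only look at topologies that still need assignment
            if (!cluster.needsScheduling(topology)) {
                continue;
            }
            // Step 2: the free slots across the whole cluster
            List<WorkerSlot> freeSlots = cluster.getAvailableSlots();
            if (freeSlots.isEmpty()) {
                continue;
            }
            // Step 3: the executors of this topology that are not assigned yet
            Set<ExecutorDetails> executors =
                    cluster.getNeedsSchedulingExecutorToComponents(topology).keySet();

            // Use at most as many slots as the topology asked for workers
            int workerCount = Math.min(topology.getNumWorkers(), freeSlots.size());
            if (workerCount <= 0 || executors.isEmpty()) {
                continue;
            }
            // Step 6: split the executors into consecutive groups of roughly equal
            // size, one group per chosen slot (the real scheduler first sorts the
            // executors by task id)
            List<ExecutorDetails> ordered = new ArrayList<>(executors);
            int perWorker = (int) Math.ceil((double) ordered.size() / workerCount);
            for (int s = 0; s < workerCount; s++) {
                int from = s * perWorker;
                int to = Math.min(from + perWorker, ordered.size());
                if (from < to) {
                    cluster.assign(freeSlots.get(s), topology.getId(), ordered.subList(from, to));
                }
            }
        }
    }
}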

Let's illustrate this with an example.

Suppose the initial state of the cluster is: 2 supervisors, each with 4 available ports. Calling the two supervisors A and B, the available slots in the whole cluster are:

A-6700, A-6701, A-6702, A-6703, B-6700, B-6701, B-6702, B-6703.

Step 1: I submit a topology to the cluster and give it 2 workers and 6 executor threads; with the default of one task per executor, that is 6 tasks. When the topology is submitted, the cluster first sorts the available slots listed above, then computes the executor-to-task mapping in the form [start-task-id end-task-id], i.e. [1,1] [2,2] [3,3] [4,4] [5,5] [6,6], and distributes these executors over the 2 workers, so each worker runs 3 threads (a 3/3 split).

In summary, the resulting assignment is:

    [1,1],[2,2],[3,3] --->worker1

    [4,4],[5,5],[6,6] --->worker2 

Just as important: to make good use of resources, Storm sorts the available slots so that consecutive workers land on different supervisors and then picks them in order, so worker1 maps to A-6700 and worker2 maps to B-6700.
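How the slot sort spreads consecutive workers across different supervisors can be pictured with the small, hypothetical sketch below; it is an illustration only, not Storm's actual sort-slots code, which also has to account for ports that are already in use:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical demo: interleave the free ports of supervisors A and B, so that
// the first two slots picked sit on different machines (A-6700, then B-6700).
public class SlotSortDemo {
    public static void main(String[] args) {
        Map<String, List<String>> freePorts = new LinkedHashMap<>();
        freePorts.put("A", Arrays.asList("6700", "6701", "6702", "6703"));
        freePorts.put("B", Arrays.asList("6700", "6701", "6702", "6703"));

        List<String> sorted = new ArrayList<>();
        boolean remaining = true;
        for (int i = 0; remaining; i++) {
            remaining = false;
            for (Map.Entry<String, List<String>> entry : freePorts.entrySet()) {
                if (i < entry.getValue().size()) {
                    sorted.add(entry.getKey() + "-" + entry.getValue().get(i));
                    remaining = true;
                }
            }
        }
        // Prints: [A-6700, B-6700, A-6701, B-6701, A-6702, B-6702, A-6703, B-6703]
        System.out.println(sorted);
    }
}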

Now let's look at the Storm cluster's log files.

First, submit the topology.

Then check nimbus.log:

 

2017-04-09 22:00:12.502 o.a.s.d.common [INFO] Started statistics report plugin...
2017-04-09 22:00:12.575 o.a.s.d.nimbus [INFO] Starting nimbus server for storm version '1.0.0'
2017-04-09 22:03:13.661 o.a.s.d.nimbus [INFO] Uploading file from client to /bigdata/storm/datas/nimbus/inbox/stormjar-f16a2908-869a-418d-a589-ff6c7968724f.jar
2017-04-09 22:03:16.163 o.a.s.d.nimbus [INFO] Finished uploading file from client: /bigdata/storm/datas/nimbus/inbox/stormjar-f16a2908-869a-418d-a589-ff6c7968724f.jar
2017-04-09 22:03:16.328 o.a.s.d.nimbus [INFO] Received topology submission for testTopologySubmit with conf {"topology.max.task.parallelism" nil, "topology.submitter.principal" "", "topology.acker.executors" nil, "topology.eventlogger.executors" 0, "topology.workers" 2, "topology.debug" false, "storm.zookeeper.superACL" nil, "topology.users" (), "topology.submitter.user" "root", "topology.kryo.register" nil, "topology.kryo.decorators" (), "storm.id" "testTopologySubmit-1-1491800596", "topology.name" "testTopologySubmit"}
2017-04-09 22:03:16.335 o.a.s.d.nimbus [INFO] uploadedJar /bigdata/storm/datas/nimbus/inbox/stormjar-f16a2908-869a-418d-a589-ff6c7968724f.jar

nimbus then collects the available slots in the cluster (log screenshot omitted).

From the supervisor logs you can see that the topology was assigned port 6700 on both slave1 and slave2:

slave1 (the .132 machine): supervisor log screenshot omitted

slave2 (the .134 machine): supervisor log screenshot omitted
Step 2: The cluster now has A-6701, A-6702, A-6703, B-6701, B-6702, B-6703 left. If I now submit a new topology that needs only 1 worker, it will be assigned A-6701. If every subsequent topology I submit also needs only one worker, machine A's ports will eventually all be taken while machine B still has 3 free ports. So Storm's default scheduling is not entirely fair: A ends up fully loaded while B still has 3 ports available.
