OpenStack nova-scheduler Module Analysis

                          Figure 1-1: Overall module relationships in openstack nova-scheduler

Based on the figure above, the modules are described from top to bottom:

1. rpcapi.py: inherits from nova.openstack.common.rpc.proxy.RpcProxy. It is called by other modules, which is why it stands as a separate module.

2. driver.py: the scheduler base class, defining the methods used by scheduler operations.

3. chance.py: mainly contains the class ChanceScheduler, one implementation of the scheduler; its core logic is in _schedule.

Code:

def _schedule(self, context, topic, request_spec, filter_properties):
    """Picks a host that is up at random."""

    elevated = context.elevated()
    hosts = self.hosts_up(elevated, topic)
    if not hosts:
        msg = _("Is the appropriate service running?")
        raise exception.NoValidHost(reason=msg)

    hosts = self._filter_hosts(request_spec, hosts, filter_properties)
    if not hosts:
        msg = _("Could not find another compute")
        raise exception.NoValidHost(reason=msg)

    #### As seen here, the host is picked at random from the
    #### hosts that passed the filters.
    return hosts[int(random.random() * len(hosts))]
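
The random pick above can be reproduced with a minimal standalone sketch (the host list and function name here are hypothetical, not part of nova):

```python
import random

def pick_host_at_random(hosts):
    """Mimic ChanceScheduler's selection: every filtered host is equally likely."""
    if not hosts:
        raise RuntimeError("No valid host found")
    # hosts[int(random.random() * len(hosts))] is equivalent to random.choice(hosts)
    return hosts[int(random.random() * len(hosts))]
```

Note that `int(random.random() * len(hosts))` always yields an index in `[0, len(hosts) - 1]`, so it is just a hand-rolled `random.choice`.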


4. filter_scheduler.py: mainly contains the class FilterScheduler, the other major implementation of the scheduler. Its core logic:

Code:

def _schedule(self, context, topic, request_spec, filter_properties,
              instance_uuids=None):
    """Returns a list of hosts that meet the required specs,
    ordered by their fitness.
    """
    elevated = context.elevated()
    if topic != "compute":
        msg = _("Scheduler only understands Compute nodes (for now)")
        raise NotImplementedError(msg)

    instance_properties = request_spec['instance_properties']
    instance_type = request_spec.get("instance_type", None)

    cost_functions = self.get_cost_functions()
    config_options = self._get_configuration_options()

    # check retry policy.  Rather ugly use of instance_uuids[0]...
    # but if we've exceeded max retries... then we really only
    # have a single instance.
    properties = instance_properties.copy()
    if instance_uuids:
        properties['uuid'] = instance_uuids[0]
    self._populate_retry(filter_properties, properties)

    filter_properties.update({'context': context,
                              'request_spec': request_spec,
                              'config_options': config_options,
                              'instance_type': instance_type})

    self.populate_filter_properties(request_spec,
                                    filter_properties)

    # Find our local list of acceptable hosts by repeatedly
    # filtering and weighing our options. Each time we choose a
    # host, we virtually consume resources on it so subsequent
    # selections can adjust accordingly.

    # unfiltered_hosts_dict is {host : ZoneManager.HostInfo()}
    #### This is where host_manager is invoked, to fetch the
    #### current state of every host from the database.
    unfiltered_hosts_dict = self.host_manager.get_all_host_states(
            elevated, topic)

    # Note: remember, we are using an iterator here. So only
    # traverse this list once. This can bite you if the hosts
    # are being scanned in a filter or weighing function.
    hosts = unfiltered_hosts_dict.itervalues()

    selected_hosts = []
    if instance_uuids:
        num_instances = len(instance_uuids)
    else:
        num_instances = request_spec.get('num_instances', 1)
    for num in xrange(num_instances):
        # Filter local hosts based on requirements ...
        #### host_manager applies the good_filters from the Filters
        #### module to select hosts that satisfy the resource request.
        hosts = self.host_manager.filter_hosts(hosts,
                filter_properties)
        if not hosts:
            # Can't get any more locally.
            break

        LOG.debug(_("Filtered %(hosts)s") % locals())

        # weighted_host = WeightedHost() ... the best
        # host for the job.
        # TODO(comstud): filter_properties will also be used for
        # weighing and I plan fold weighing into the host manager
        # in a future patch.  I'll address the naming of this
        # variable at that time.
        #### This is where least_cost is invoked, for its host
        #### weighting algorithm, described later.
        weighted_host = least_cost.weighted_sum(cost_functions,
                hosts, filter_properties)
        LOG.debug(_("Weighted %(weighted_host)s") % locals())
        selected_hosts.append(weighted_host)

        # Now consume the resources so the filter/weights
        # will change for the next instance.
        weighted_host.host_state.consume_from_instance(
                instance_properties)

    #### Sort the hosts by weight and return the list.
    selected_hosts.sort(key=operator.attrgetter('weight'))
    return selected_hosts
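
The idea behind least_cost.weighted_sum is to score each host as a sum of weighted cost-function results and keep the cheapest. A minimal sketch of that idea (the `(weight, fn)` pair format and the cost function are hypothetical, not the actual least_cost API, which works on host-state objects and returns a WeightedHost):

```python
def weighted_sum(cost_functions, hosts):
    """Score each host as sum(weight * cost_fn(host)); the lowest total wins."""
    best_host, best_score = None, None
    for host in hosts:
        score = sum(weight * fn(host) for weight, fn in cost_functions)
        if best_score is None or score < best_score:
            best_host, best_score = host, score
    return best_host
```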


Summary: the scheduler logic consists of the following steps:

      a. Extract the resource request from request_spec and record it in filter_properties.

      b. In host_manager, apply the good_filters, with filter_properties as the criteria, to select the hosts that qualify.

      c. Use the least_cost weighting algorithm to sort the host list.
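
The three steps above can be sketched as a minimal filter-then-weigh loop (host dicts, the RAM-only filter, and the cost rule are all hypothetical simplifications of the real filters and cost functions):

```python
def schedule(hosts, required_ram, num_instances):
    """Toy version of FilterScheduler._schedule: filter, weigh, consume, repeat."""
    selected = []
    for _ in range(num_instances):
        # a/b. filter: keep only hosts that still satisfy the resource request
        candidates = [h for h in hosts if h["free_ram"] >= required_ram]
        if not candidates:
            break
        # c. weigh: hypothetical cost rule -- pick the tightest fit (least free RAM)
        best = min(candidates, key=lambda h: h["free_ram"])
        selected.append(best["name"])
        # consume resources so the next iteration sees the updated host state
        best["free_ram"] -= required_ram
    return selected
```

The resource consumption at the end of each iteration mirrors consume_from_instance in the real code: it is what lets one _schedule call place several instances without double-booking a host.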

5. multi.py: based on the compute_*_driver configuration, it maps each scheduler action to the corresponding scheduler class.

    For example: schedule_run_instance  --> FilterScheduler

                 schedule_prep_resize   --> FilterScheduler

                 schedule_create_volume --> ChanceScheduler
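
The dispatch in multi.py amounts to a mapping from action names to driver instances; a minimal sketch of that pattern (class and method names here are illustrative, not nova's actual MultiScheduler code):

```python
class MultiScheduler:
    """Route each scheduler action to the driver configured for it."""

    def __init__(self, compute_driver, volume_driver):
        self._drivers = {
            "schedule_run_instance": compute_driver,    # e.g. FilterScheduler
            "schedule_prep_resize": compute_driver,     # e.g. FilterScheduler
            "schedule_create_volume": volume_driver,    # e.g. ChanceScheduler
        }

    def dispatch(self, action, *args, **kwargs):
        # look up the driver for this action and forward the call to it
        return getattr(self._drivers[action], action)(*args, **kwargs)
```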

6. manager.py: wraps multi.py and serves as the unified interface to the outside world.


Note: because the scheduler sits in the middle of the nova instance boot, resize, and change-volume flows, it needs both an RPC publisher and a consumer.


Repost: please credit http://blog.csdn.net/soft_lawrency/article/details/8509939


Good Luck

Lawrency Meng

