Neutron Security Group Code Implementation (Part 1)

I have recently been studying the logic behind Neutron security groups, so this post walks through Neutron's concrete code path for them. Below is the code involved when Neutron creates a security group; broadly speaking, the creation flow is the same for every service resource.
  1. Creating a security group
#neutron --debug security-group-create sg-debug-can-delete
curl -g -i -X POST http://10.x.x.x:9696/v2.0/security-groups -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-neutronclient" -H "X-Auth-Token: {SHA256}3655d2b6e755fbbb194d185fda9d339c642ab064dba48da834e1b2660e140123" -d '{"security_group": {"name": "sg-debug-can-delete"}}'
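The same request can also be issued from Python via openstacksdk. This is only a sketch: the cloud name 'mycloud' is an assumption about a local clouds.yaml entry, not something taken from this environment.

# create the same security group through openstacksdk (hypothetical cloud name)
import openstack

conn = openstack.connect(cloud='mycloud')
sg = conn.network.create_security_group(name='sg-debug-can-delete')
print(sg.id)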
  2. neutron/api/v2/base.py

The security group create request is handled by the API controller's create function. self._notifier.info first initializes the NOTIFIER defined in rpc.py of the neutron-lib library, then sends an info-level oslo.messaging notification, security_group.create.start. create then calls _create, which prepares the request body, resolves the action, initializes the policy engine, and looks up obj_creator, which ultimately resolves to the create_security_group function of the plugin neutron.plugins.ml2.plugin.Ml2Plugin. create_security_group comes from the mixin class SecurityGroupDbMixin, so do_create ends up executing SecurityGroupDbMixin.create_security_group, which creates the security group object in the database, sends the notifications, and returns the security group info in the HTTP response (a sketch of a listener for these notifications follows the code below).

    def create(self, request, body=None, **kwargs):
        self._notifier.info(request.context,
                            self._resource + '.create.start',
                            body)
        return self._create(request, body, **kwargs)
    @db_api.retry_db_errors
    def _create(self, request, body, **kwargs):
        """Creates a new instance of the requested entity."""
        parent_id = kwargs.get(self._parent_id_name)
        body = Controller.prepare_request_body(request.context,
                                               body, True,
                                               self._resource, self._attr_info,
                                               allow_bulk=self._allow_bulk)
        action = self._plugin_handlers[self.CREATE]
        # Check authz
        if self._collection in body:
            # Have to account for bulk create
            items = body[self._collection]
        else:
            items = [body]
        # Ensure policy engine is initialized
        policy.init()
        # Store requested resource amounts grouping them by tenant
        # This won't work with multiple resources. However because of the
        # current structure of this controller there will hardly be more than
        # one resource for which reservations are being made
        request_deltas = collections.defaultdict(int)
        for item in items:
            self._validate_network_tenant_ownership(request,
                                                    item[self._resource])
            # For ext resources policy check, we support two types, such as
            # parent_id is in request body, another type is parent_id is in
            # request url, which we can get from kwargs.
            self._set_parent_id_into_ext_resources_request(
                request, item[self._resource], parent_id)
            policy.enforce(request.context,
                           action,
                           item[self._resource],
                           pluralized=self._collection)
            if 'tenant_id' not in item[self._resource]:
                # no tenant_id - no quota check
                continue
            tenant_id = item[self._resource]['tenant_id']
            request_deltas[tenant_id] += 1
        # Quota enforcement
        reservations = []
        try:
            for (tenant, delta) in request_deltas.items():
                reservation = quota.QUOTAS.make_reservation(
                    request.context,
                    tenant,
                    {self._resource: delta},
                    self._plugin)
                reservations.append(reservation)
        except exceptions.QuotaResourceUnknown as e:
            # We don't want to quota this resource
            LOG.debug(e)

        def notify(create_result):
            # Ensure usage trackers for all resources affected by this API
            # operation are marked as dirty
            with db_api.CONTEXT_WRITER.using(request.context):
                # Commit the reservation(s)
                for reservation in reservations:
                    quota.QUOTAS.commit_reservation(
                        request.context, reservation.reservation_id)
                resource_registry.set_resources_dirty(request.context)

            notifier_method = self._resource + '.create.end'
            self._notifier.info(request.context,
                                notifier_method,
                                create_result)
            registry.publish(self._resource, events.BEFORE_RESPONSE, self,
                             payload=events.APIEventPayload(
                                 request.context, notifier_method, action,
                                 request_body=body,
                                 states=({}, create_result,),
                                 collection_name=self._collection))
            return create_result

        def do_create(body, bulk=False, emulated=False):
            kwargs = {self._parent_id_name: parent_id} if parent_id else {}
            if bulk and not emulated:
                obj_creator = getattr(self._plugin, "%s_bulk" % action)
            else:
                # here we fetch the create_security_group attribute of the ML2 plugin
                obj_creator = getattr(self._plugin, action)
            try:
                if emulated:
                    return self._emulate_bulk_create(obj_creator, request,
                                                     body, parent_id)
                else:
                    if self._collection in body:
                        # This is weird but fixing it requires changes to the
                        # plugin interface
                        kwargs.update({self._collection: body})
                    else:
                        kwargs.update({self._resource: body})
                    # here the plugin's create_security_group is actually invoked
                    return obj_creator(request.context, **kwargs)
            except Exception:
                # In case of failure the plugin will always raise an
                # exception. Cancel the reservation
                with excutils.save_and_reraise_exception():
                    for reservation in reservations:
                        quota.QUOTAS.cancel_reservation(
                            request.context, reservation.reservation_id)
		
        if self._collection in body and self._native_bulk:
            # plugin does atomic bulk create operations
            objs = do_create(body, bulk=True)
            # Use first element of list to discriminate attributes which
            # should be removed because of authZ policies
            fields_to_strip = self._exclude_attributes_by_policy(
                request.context, objs[0])
            return notify({self._collection: [self._filter_attributes(
                obj, fields_to_strip=fields_to_strip)
                for obj in objs]})
        else:
            if self._collection in body:
                # Emulate atomic bulk behavior
                objs = do_create(body, bulk=True, emulated=True)
                return notify({self._collection: objs})
            else:
                # this path creates the DB record, notifies, and returns the info over HTTP
                obj = do_create(body)
                return notify({self._resource: self._view(request.context,
                                                          obj)})
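As a side note on the security_group.create.start / security_group.create.end notifications emitted above, the sketch below shows how an external oslo.messaging notification listener could consume them. It assumes the deployment uses the default 'notifications' topic and that transport_url is configured as in neutron.conf; it is an illustration, not Neutron code.

# minimal oslo.messaging listener for the controller's create notifications
import oslo_messaging
from oslo_config import cfg


class SecGroupCreateEndpoint(object):
    # only match the two notifications sent around security group creation
    filter_rule = oslo_messaging.NotificationFilter(
        event_type=r'security_group\.create\.(start|end)')

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type, payload)
        return oslo_messaging.NotificationResult.HANDLED


# assumes transport_url is set in the loaded config, as in neutron.conf
transport = oslo_messaging.get_notification_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [SecGroupCreateEndpoint()], executor='threading')
listener.start()
listener.wait()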
  3. neutron/db/securitygroups_db.py

The security group and its default egress rules are created in the database. After creation, the AFTER_CREATE event is published via registry.notify, which ends up invoking the callback neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.handle_event (a sketch of subscribing to this event follows the code below).

    @db_api.retry_if_session_inactive()
    def create_security_group(self, context, security_group, default_sg=False):
        """Create security group.

        If default_sg is true that means we are a default security group for
        a given tenant if it does not exist.
        """
        s = security_group['security_group']
        kwargs = {
            'context': context,
            'security_group': s,
            'is_default': default_sg,
        }
        # notify subscribed callbacks before creating the security group
        self._registry_notify(resources.SECURITY_GROUP, events.BEFORE_CREATE,
                              exc_cls=ext_sg.SecurityGroupConflict,
                              payload=events.DBEventPayload(
                                  context, metadata={'is_default': default_sg},
                                  request_body=security_group,
                                  desired_state=s))

        tenant_id = s['tenant_id']

        with db_api.CONTEXT_WRITER.using(context):
            sg = sg_obj.SecurityGroup(
                context, id=s.get('id') or uuidutils.generate_uuid(),
                description=s['description'], project_id=tenant_id,
                name=s['name'], is_default=default_sg)
            sg.create()

            for ethertype in ext_sg.sg_supported_ethertypes:
                egress_rule = sg_obj.SecurityGroupRule(
                    context, id=uuidutils.generate_uuid(),
                    project_id=tenant_id, security_group_id=sg.id,
                    direction='egress', ethertype=ethertype)
                egress_rule.create()
                sg.rules.append(egress_rule)
            sg.obj_reset_changes(['rules'])

            # fetch sg from db to load the sg rules with sg model.
            sg = sg_obj.SecurityGroup.get_object(context, id=sg.id)
            secgroup_dict = self._make_security_group_dict(sg)
            kwargs['security_group'] = secgroup_dict
            self._registry_notify(resources.SECURITY_GROUP,
                                  events.PRECOMMIT_CREATE,
                                  exc_cls=ext_sg.SecurityGroupConflict,
                                  **kwargs)

        registry.notify(resources.SECURITY_GROUP, events.AFTER_CREATE, self,
                        **kwargs)
        return secgroup_dict
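Any component that wants to react to this event only needs to subscribe to the callback registry. A minimal sketch (the function name and print are illustrative; the 'security_group' kwarg is the dict built by _make_security_group_dict above):

from neutron_lib.callbacks import events, registry, resources


def log_new_security_group(resource, event, trigger, **kwargs):
    # kwargs carries the payload passed to registry.notify above
    sg = kwargs.get('security_group')
    print('security group created:', sg['id'], sg['name'])


# run once at service startup, e.g. from a plugin or driver initializer
registry.subscribe(log_new_security_group,
                   resources.SECURITY_GROUP, events.AFTER_CREATE)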
  4. neutron_lib/callbacks/manager.py

Both the registry.publish of BEFORE_RESPONSE in the controller and the registry.notify of AFTER_CREATE in the DB layer land in CallbacksManager.notify, which runs self._notify_loop and finally executes each subscribed callback. Only BEFORE_* and PRECOMMIT_* events turn callback errors into CallbackFailure; for AFTER_* events the errors are just logged (a small sketch of these failure semantics follows the code below).

    @db_utils.reraise_as_retryrequest
    def notify(self, resource, event, trigger, **kwargs):
        """Notify all subscribed callback(s).

        Dispatch the resource's event to the subscribed callbacks.

        :param resource: The resource for the event.
        :param event: The event.
        :param trigger: The trigger. A reference to the sender of the event.
        :param kwargs: (deprecated) Unstructured key/value pairs to invoke
            the callback with. Using event objects with publish() is preferred.
        :raises CallbackFailure: CallbackFailure is raised if the underlying
            callback has errors.
        """
        errors = self._notify_loop(resource, event, trigger, **kwargs)
        if errors:
            if event.startswith(events.BEFORE):
                abort_event = event.replace(
                    events.BEFORE, events.ABORT)
                self._notify_loop(resource, abort_event, trigger, **kwargs)

                raise exceptions.CallbackFailure(errors=errors)

            if event.startswith(events.PRECOMMIT):
                raise exceptions.CallbackFailure(errors=errors)

    def clear(self):
        """Brings the manager to a clean slate."""
        self._callbacks = collections.defaultdict(dict)
        self._index = collections.defaultdict(dict)

    def _notify_loop(self, resource, event, trigger, **kwargs):
        """The notification loop."""
        errors = []
        # NOTE(yamahata): Since callback may unsubscribe it,
        # convert iterator to list to avoid runtime error.
        callbacks = list(itertools.chain(
            *[pri_callbacks.items() for (priority, pri_callbacks)
              in self._callbacks[resource].get(event, [])]))
        LOG.debug("Notify callbacks %s for %s, %s",
                  [c[0] for c in callbacks], resource, event)
        # TODO(armax): consider using a GreenPile
        for callback_id, callback in callbacks:
            try:
                callback(resource, event, trigger, **kwargs)
            except Exception as e:
                abortable_event = (
                    event.startswith(events.BEFORE) or
                    event.startswith(events.PRECOMMIT)
                )
                if not abortable_event:
                    LOG.exception("Error during notification for "
                                  "%(callback)s %(resource)s, %(event)s",
                                  {'callback': callback_id,
                                   'resource': resource, 'event': event})
                else:
                    LOG.debug("Callback %(callback)s raised %(error)s",
                              {'callback': callback_id, 'error': e})
                errors.append(exceptions.NotificationError(callback_id, e))
        return errors
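The practical effect of the BEFORE_*/PRECOMMIT_* special-casing can be seen with a small sketch; failing_callback is illustrative, and note that notify() is deprecated in newer neutron-lib releases in favor of publish():

from neutron_lib.callbacks import events, exceptions, registry, resources


def failing_callback(resource, event, trigger, **kwargs):
    raise RuntimeError('boom')


registry.subscribe(failing_callback, resources.SECURITY_GROUP,
                   events.BEFORE_CREATE)
registry.subscribe(failing_callback, resources.SECURITY_GROUP,
                   events.AFTER_CREATE)

try:
    # BEFORE_* errors abort the operation: the manager fires the matching
    # ABORT_* event and raises CallbackFailure
    registry.notify(resources.SECURITY_GROUP, events.BEFORE_CREATE, None)
except exceptions.CallbackFailure as e:
    print('aborted:', e)

# AFTER_* errors are only logged inside _notify_loop; notify() returns normally
registry.notify(resources.SECURITY_GROUP, events.AFTER_CREATE, None)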
    
  5. neutron/plugins/ml2/ovo_rpc.py

When _ObjectChangeHandler is initialized, it subscribes to the AFTER_CREATE, AFTER_UPDATE and AFTER_DELETE events; handle_event then records the change and submits a worker to a thread pool to dispatch the events (a stripped-down sketch of this coalesce-and-dispatch pattern follows the code below).

    def handle_event(self, resource, event, trigger,
                     context, *args, **kwargs):
        """Callback handler for resource change that pushes change to RPC.

        We always retrieve the latest state and ignore what was in the
        payload to ensure that we don't get any stale data.
        """
        if self._is_session_semantic_violated(context, resource, event):
            return
        resource_id = self._extract_resource_id(kwargs)
        # we preserve the context so we can trace a receive on the agent back
        # to the server-side event that triggered it
        self._resources_to_push[resource_id] = context.to_dict()
        # spawn worker so we don't block main AFTER_UPDATE thread
        self.fts.append(self._worker_pool.submit(self.dispatch_events))

    @lockutils.synchronized('event-dispatch')
    def dispatch_events(self):
        # this is guarded by a lock to ensure we don't get too many concurrent
        # dispatchers hitting the database simultaneously.
        to_dispatch, self._resources_to_push = self._resources_to_push, {}
        # TODO(kevinbenton): now that we are batching these, convert to a
        # single get_objects call for all of them
        for resource_id, context_dict in to_dispatch.items():
            context = n_ctx.Context.from_dict(context_dict)
            # attempt to get regardless of event type so concurrent delete
            # after create/update is the same code-path as a delete event
            with db_api.get_context_manager().independent.reader.using(
                    context):
                obj = self._obj_class.get_object(context, id=resource_id)
            # CREATE events are always treated as UPDATE events to ensure
            # listeners are written to handle out-of-order messages
            if obj is None:
                rpc_event = rpc_events.DELETED
                # construct a fake object with the right ID so we can
                # have a payload for the delete message.
                obj = self._obj_class(id=resource_id)
            else:
                rpc_event = rpc_events.UPDATED
            self._resource_push_api.push(context, [obj], rpc_event)
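The pattern used here (record the latest context per resource ID, then drain and dispatch the whole batch from a lock-guarded worker) can be boiled down to the following self-contained sketch; all names are illustrative and this is not Neutron code:

import threading
from concurrent import futures


class ChangeCoalescer(object):
    """Coalesce per-resource changes and dispatch them from a worker pool."""

    def __init__(self, push_fn, max_workers=1):
        self._push = push_fn          # e.g. an RPC push API
        self._pending = {}            # resource_id -> context
        self._lock = threading.Lock()
        self._pool = futures.ThreadPoolExecutor(max_workers=max_workers)

    def handle_event(self, resource_id, context):
        # keep only the latest context per ID, then schedule a drain; repeated
        # events for the same resource collapse into a single dispatch
        self._pending[resource_id] = context
        self._pool.submit(self._dispatch)

    def _dispatch(self):
        # the lock plays the role of @lockutils.synchronized above
        with self._lock:
            to_dispatch, self._pending = self._pending, {}
            for resource_id, context in to_dispatch.items():
                self._push(context, resource_id)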
  6. neutron/api/rpc/handlers/resources_rpc.py

Finally the change is broadcast to the agents via a fanout cast; a plain oslo.messaging sketch of this cast follows the log excerpt below.

class ResourcesPushRpcApi(object):
    def push(self, context, resource_list, event_type):
        """Push an event and list of resources to agents, batched per type.

        When a list of different resource types is passed to this method,
        the push will be sent as separate individual list pushes, one per
        resource type.
        """

        resources_by_type = self._classify_resources_by_type(resource_list)
        LOG.debug(
            "Pushing event %s for resources: %s", event_type,
            {t: ["ID=%s,revision_number=%s" % (
                     getattr(obj, 'id', None),
                     getattr(obj, 'revision_number', None))
                 for obj in resources_by_type[t]]
             for t in resources_by_type})
        for resource_type, type_resources in resources_by_type.items():
            self._push(context, resource_type, type_resources, event_type)

    def _push(self, context, resource_type, resource_list, event_type):
        """Push an event and list of resources of the same type to agents."""
        _validate_resource_type(resource_type)

        for version in version_manager.get_resource_versions(resource_type):
            cctxt = self._prepare_object_fanout_context(
                resource_list[0], version, rpc_version='1.1')

            dehydrated_resources = [
                resource.obj_to_primitive(target_version=version)
                for resource in resource_list]

            cctxt.cast(context, 'push',
                       resource_list=dehydrated_resources,
                       event_type=event_type)

Log output:

Pushing event updated for resources: {'SecurityGroup': ['ID=47991532-81e1-4454-a010-7d1d45c07db1,revision_number=1']} push /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py:243

CAST unique_id: 0d594a5224974f5aa28788fb340e3c90 FANOUT topic 'neutron-vo-SecurityGroup-1.1' _send /var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:617


Exchange neutron-vo-SecurityGroup-1.1_fanout(fanout) with routing key None
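In terms of plain oslo.messaging, the fanout cast shown in the log above amounts to roughly the following. This is a sketch only: Neutron goes through its own RPC client wrappers and OVO serialization, and the transport configuration is assumed to come from neutron.conf.

import oslo_messaging
from oslo_config import cfg

# topic format matches the log: neutron-vo-<resource_type>-<version>
topic = 'neutron-vo-%s-%s' % ('SecurityGroup', '1.1')

transport = oslo_messaging.get_rpc_transport(cfg.CONF)
target = oslo_messaging.Target(topic=topic, fanout=True, version='1.1')
client = oslo_messaging.RPCClient(transport, target)

# cast() is fire-and-forget: every agent consuming the fanout queue for this
# topic receives a 'push' with the dehydrated (primitive) objects
client.cast({}, 'push', resource_list=[], event_type='updated')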
