Neutron Impressions 6: The LBaaS Service

In the OpenStack Grizzly release, the Quantum component introduced a new network service: LoadBalancer (LBaaS), whose architecture follows the Service Insertion framework. LoadBalancer provides tenants with load balancing of traffic to a group of virtual machines. Its basic implementation: the neutron-lbaas-agent generates an HAProxy configuration file and then starts HAProxy.

Neutron LBaaS Service Architecture

 

LBaaS consists mainly of the following modules, as shown in the figure below:

  • Loadbalancer extension: handles the RESTful API
  • LoadBalancerPlugin: manages the workflow of LBaaS requests/responses; most DB-related work is implemented in the class loadbalancer_db.LoadBalancerPluginDb
  • Scheduler: loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler, responsible for assigning the appropriate agent to a vip
  • lbaas-agent: receives messages from the plugin and forwards requests to the device_driver (HaproxyNSDriver) for execution
  • HaproxyNSDriver: the device driver that implements load balancing; it generates the HAProxy configuration file and then starts HAProxy
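As a rough illustration of the last step in this chain, the sketch below renders a minimal haproxy configuration from pool/vip/member data, which is essentially what HaproxyNSDriver does before starting haproxy. The field names and config layout are simplified assumptions for illustration, not Neutron's actual template.

```python
# Minimal sketch of the config-rendering step performed by a device driver
# such as HaproxyNSDriver. Field names and layout are simplified assumptions;
# the real driver renders a much richer template and manages network namespaces.

def render_haproxy_cfg(vip, pool, members):
    lines = [
        "global",
        "    daemon",
        "defaults",
        "    mode tcp",
        "    timeout connect 5000ms",
        "frontend %s" % vip["name"],
        "    bind %s:%d" % (vip["address"], vip["port"]),
        "    default_backend %s" % pool["name"],
        "backend %s" % pool["name"],
        "    balance %s" % pool.get("lb_method", "roundrobin"),
    ]
    for i, m in enumerate(members):
        # one "server" line per pool member
        lines.append("    server member_%d %s:%d" % (i, m["address"], m["port"]))
    return "\n".join(lines)

vip = {"name": "web_vip", "address": "10.0.0.100", "port": 80}
pool = {"name": "web_pool", "lb_method": "roundrobin"}
members = [{"address": "10.0.0.8", "port": 80}, {"address": "10.0.0.9", "port": 80}]
print(render_haproxy_cfg(vip, pool, members))
```

The real driver writes this file into a per-pool directory inside a network namespace and then spawns the haproxy process against it.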

 

The LBaaS Data Model

As shown in the figure above, the data model consists of four main objects: Pool, VIP, Member, and HealthMonitor.

  • At the core is the Pool (I am inclined to name it "loadbalancer"), which represents a load balancer.
  • A load balancer owns one VIP, i.e. a virtual IP. "Virtual" here is relative to the Members behind it: the VIP is not fixed to any single Member. When a user accesses the VIP, the service is provided sometimes by one member and sometimes by another.
  • A Member is a backend server that provides the actual service.
  • A HealthMonitor monitors and checks the connectivity of the backend servers. When a server is detected as unusable, the load balancer stops using it to serve users. One pool can be associated with multiple health monitors. There are four monitor types: PING, TCP, HTTP, and HTTPS; each type probes members using the corresponding protocol.
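The relationships among the four objects can be sketched as plain Python classes: one Pool owns one VIP, many Members, and many HealthMonitors. The attribute names below are simplified assumptions, not the actual Neutron database schema.

```python
# Sketch of the LBaaS data model relationships: one Pool owns one VIP,
# many Members, and many HealthMonitors. Attribute names are simplified
# assumptions, not the real Neutron database schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Member:
    address: str          # backend server IP
    port: int
    admin_state_up: bool = True

@dataclass
class HealthMonitor:
    type: str             # one of PING, TCP, HTTP, HTTPS
    delay: int = 5        # seconds between probes
    max_retries: int = 3

@dataclass
class VIP:
    address: str          # the virtual IP users connect to
    port: int

@dataclass
class Pool:               # the author would prefer to call this "loadbalancer"
    name: str
    vip: Optional[VIP] = None
    members: List[Member] = field(default_factory=list)
    monitors: List[HealthMonitor] = field(default_factory=list)

pool = Pool(name="web_pool")
pool.vip = VIP("10.0.0.100", 80)
pool.members.append(Member("10.0.0.8", 80))
pool.monitors.append(HealthMonitor("HTTP"))
```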

Besides these four objects, two features are also important: Session Persistence and Connection Limits.

  • Session Persistence determines how connections or requests belonging to the same session are forwarded. Three types are currently supported:

 

    • SOURCE_IP: connection requests coming from the same IP are handled by the same member;
    • HTTP_COOKIE: the loadbalancer generates a cookie on a client's first connection, and subsequent requests carrying that cookie are handled by the same member;
    • APP_COOKIE: a cookie generated by the backend application server determines which member handles the request.

 

  • Connection Limits: this feature is mainly used to mitigate DDoS attacks.
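Seen from the haproxy side, each of the three session-persistence types above corresponds to a different stickiness directive in the backend configuration. The mapping below is a simplified sketch of that idea; the exact directives emitted by Neutron's real haproxy template differ, so treat these as assumptions.

```python
# Sketch: mapping LBaaS session-persistence types onto haproxy backend
# directives. The directives the real Neutron haproxy template emits
# differ in detail; this only illustrates the three modes.

def persistence_directives(persistence):
    ptype = persistence["type"]
    if ptype == "SOURCE_IP":
        # stick clients to a member based on source address
        return ["stick-table type ip size 10k", "stick on src"]
    if ptype == "HTTP_COOKIE":
        # the loadbalancer inserts its own cookie on the first response
        return ["cookie SRV insert indirect nocache"]
    if ptype == "APP_COOKIE":
        # follow a cookie generated by the backend application itself
        return ["appsession %s len 56 timeout 3h" % persistence["cookie_name"]]
    raise ValueError("unknown persistence type: %s" % ptype)
```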

 

Deploying LBaaS

1. In DevStack, simply add the option ENABLED_SERVICES+=,q-lbaas;

2. RDO deployment: packstack --allinone --neutron-lbaas-hosts=192.168.1.10 (for detailed steps, see: http://openstack.redhat.com/LBaaS)

3. LBaaS can also be deployed with OpenStack Heat; see http://blog.csdn.net/lin_victor/article/details/23060467 for details.

For LBaaS to be configured properly, various configuration files must have the following changes.

The service_provider parameter should be set in /usr/share/neutron/neutron-dist.conf:

service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

The service_plugin should be set in /etc/neutron/neutron.conf:

service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

The interface_driver and device_driver should be set in /etc/neutron/lbaas_agent.ini. Since the load balancer will be haproxy, set the device_driver accordingly:

device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

The interface_driver will depend on the core L2 plugin being used.

For OpenVSwitch:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

For linuxbridge:

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

If the above configuration files were changed manually, restart the neutron-server service and neutron-lbaas-agent service.

 

Using LBaaS

The basic usage steps are:

 

  • The tenant creates a pool, which initially has 0 members;
  • The tenant creates one or more members inside that pool;
  • The tenant creates one or more health monitors;
  • The tenant associates the health monitors with the pool;
  • The tenant creates a vip using the pool.
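At the API level, each of the steps above is a POST with a small JSON body. The helpers below sketch those bodies in order; the field names follow the LBaaS v1 API as I recall it and should be treated as illustrative assumptions rather than a definitive reference.

```python
# Sketch: the JSON request bodies for each step of the basic LBaaS
# workflow (create pool -> members -> health monitor -> associate -> vip).
# Field names follow the LBaaS v1 API but are reproduced from memory;
# treat them as illustrative.

def pool_body(name, subnet_id):
    return {"pool": {"name": name, "protocol": "HTTP",
                     "lb_method": "ROUND_ROBIN", "subnet_id": subnet_id}}

def member_body(pool_id, address, port):
    return {"member": {"pool_id": pool_id, "address": address,
                       "protocol_port": port}}

def monitor_body(mtype="HTTP", delay=5, timeout=3, max_retries=3):
    return {"health_monitor": {"type": mtype, "delay": delay,
                               "timeout": timeout, "max_retries": max_retries}}

def vip_body(name, pool_id, subnet_id, port=80):
    return {"vip": {"name": name, "pool_id": pool_id,
                    "subnet_id": subnet_id, "protocol": "HTTP",
                    "protocol_port": port}}
```

In practice these bodies would be sent through python-neutronclient or directly against the /v2.0/lb/... endpoints.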

 

The UnitedStack blog has compiled a detailed walkthrough of these steps: https://www.ustack.com/2013/10/08/neutron_loadbalance/

References

https://wiki.openstack.org/wiki/Neutron_LBaaS_Arch

https://wiki.openstack.org/wiki/Neutron/LBaaS/Architecture/Scheduler

http://openstack.redhat.com/LBaaS

https://www.ustack.com/2013/10/08/neutron_loadbalance/

http://blog.csdn.net/matt_mao/article/details/12982963

http://blog.csdn.net/lynn_kong/article/details/8528512



Service Insertion

Service Insertion is the framework in Neutron for implementing L4/L7 services. Neutron previously had only a single-level plugin structure for implementing the various L2 technologies (such as LinuxBridge and OVS; a deployment has two parts: a NeutronPlugin that talks to the database, plus an L2 agent that does the actual work), while L3 routing and DHCP were implemented by separate agents (l3-agent, dhcp-agent). L4-L7 services, however, have these requirements:

(1) Services such as FW, VPN, and DNAT need to run on the network node where the l3-agent resides, the so-called Routed/Embedded mode; see: https://wiki.openstack.org/wiki/Quantum/ServiceInsertion

(2) A service such as LBaaS does not need to run on the network node, but the port prepared for it on the network node must be reachable from the node actually running haproxy, and this port must also be reachable from the gateway; this is the so-called Floating/In-Path mode.

(3) There is also an Out-of-Path mode, which may be useful when implementing monitoring such as sFlow: the service also runs in a standalone way, but traffic is first sent to the Router entity, then redirected to the Advanced Service, and finally sent back to the router with specific configuration. In particular, this model can be reduced to the first one if a standalone service is regarded as a special case of a router capable of providing only one specific service. This mode requires specific changes at the routing entity and may not be implemented in the Grizzly release.


Neutron therefore implemented a second plugin layer called "services", so there are now two plugin layers: multiple services can be started on top of the NeutronPlugin. A service plugin continues to talk to the database while also sharing the existing NeutronPlugin's information, such as port information. Meanwhile, a service such as FWaaS requires its ServiceAgent to run on the network node where the l3-agent resides, whereas LBaaS's haproxy does not need to be installed alongside the l3-agent; the l3-agent should, however, create a port in a dedicated namespace that is reachable from the host running haproxy.
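The two-layer structure boils down to loading one core plugin plus a set of service plugins from dotted-path strings in the config, roughly the way neutron.manager does with importutils. The sketch below imitates that loading; the stdlib classes used in the demo are pure stand-ins for real Neutron plugin classes.

```python
# Sketch of the two-layer plugin structure: a core plugin plus service
# plugins, each loaded from a dotted-path string the way neutron.manager
# does with importutils. The stdlib classes in the demo are stand-ins
# for real Neutron plugin classes.
import importlib

def import_class(dotted_path):
    # e.g. "neutron.services.loadbalancer.plugin.LoadBalancerPlugin"
    module_name, cls_name = dotted_path.rsplit(".", 1)
    return getattr(importlib.import_module(module_name), cls_name)

def load_plugins(core_plugin_path, service_plugin_paths):
    # CORE maps to the single core plugin; each service plugin is keyed
    # by its service type (approximated here by the class name).
    plugins = {"CORE": import_class(core_plugin_path)()}
    for path in service_plugin_paths:
        cls = import_class(path)
        plugins[getattr(cls, "service_type", cls.__name__)] = cls()
    return plugins

# Stand-in demo with stdlib classes instead of Neutron plugin classes:
plugins = load_plugins("collections.Counter", ["collections.OrderedDict"])
```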

Service Type Concept

Just like the Quantum plugins allow for using several technologies for implementing the basic logical topologies, advanced services will use a similar mechanism. However, for advanced services, multiple different implementations of the same kind of service might co-exist in the same deployment. There are a number of reasons for this, most importantly the ability to give tenants a choice among solutions. The Service Type concept tries to address the need for multiple, co-existing service providers.

A Service Type definition might be regarded as a list of services (and their providers) which can be offered to tenants. Each advanced service, regardless of its insertion mode, should be either directly or indirectly associated with a single service type.

The association between a service and a service type can happen in two ways, according to the insertion mode of the service.

  • Routed Insertion mode: the advanced service will be associated with a Quantum logical router, which in turn is associated with a service_type resource;

In order to ensure backward compatibility a default service type must be specified. This implies that all the services which will be inserted on a router will share the same service type.

  • Floating Insertion mode: The service type should be explicitly specified on the advanced service being created; if not, the default service type will be used.

When an advanced service is created at the API layer one of the following two should be specified:

  1. service_type_id # floating or out-of-path insertion
  2. router_id # routed or in-path insertion

It should not be allowed to specify both parameters.
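The mutual exclusivity of service_type_id and router_id is easy to express as a small validation step; the sketch below is a hypothetical helper, not actual Neutron API code.

```python
# Sketch: validating the insertion-mode parameters of an advanced-service
# create request. Exactly one of service_type_id (floating/out-of-path)
# or router_id (routed/in-path) may be given; if neither is present the
# default service type is assumed. Hypothetical helper, not Neutron code.

DEFAULT_SERVICE_TYPE_ID = "default"

def resolve_insertion(service_type_id=None, router_id=None):
    if service_type_id and router_id:
        raise ValueError("specify either service_type_id or router_id, not both")
    if router_id:
        return ("routed", router_id)
    # floating insertion, falling back to the default service type
    return ("floating", service_type_id or DEFAULT_SERVICE_TYPE_ID)
```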

The logical model for service insertion, augmented with the service type concept, is depicted in the following diagram:

 

The following uses LBaaS as an example (https://wiki.openstack.org/wiki/Quantum/LBaaS) to show how the Service Insertion framework is implemented in the code.

1. In the NeutronPlugin configuration file /etc/neutron/neutron.conf, configure the core plugin and the service plugins:

service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

2. In the init method of $neutron/neutron/manager.py, the service plugins are loaded:

self.service_plugins = {constants.CORE: self.plugin}

self._load_service_plugins()

3. The Agent-to-Driver mapping is the second plugin layer mentioned above; for example, the haproxy driver is just one implementation used by the LbaasAgent:

$neutron/neutron/services/loadbalancer/driver/haproxy/agent.py calls

$neutron/neutron/services/loadbalancer/driver/haproxy/agent_manager.py to load the driver corresponding to haproxy, as can be seen from the structure of the lbaas code:

Service Chain

The previous section described implementing L4/L7 services through the second, "service" plugin layer, but a single tenant may need multiple L4/L7 services at the same time, such as LB and FW, in a specific order. ServiceChain exists to do exactly that.

A tenant can request the creation of multiple ordered Services. ServiceTypes define how a service behaves when inserted into the tenant network:

  • L3: such a service has an IP and routes traffic; it runs on a router, or does not run on a router but has l3-forwarding capability, e.g. LBaaS.

  • L2: such a service switches traffic and is capable of l2-switching and MAC address learning, e.g. an L2-Firewall service (for example, using OVS flow tables instead of iptables).

  • Bump-in-the-wire: an embedded service that neither routes nor switches traffic; it only has ingress and egress ports, and the service runs in front of the ingress port, e.g. a firewall performing filtering and auditing.

  • Tap: such a service only consumes traffic at specific points in the service chain, e.g. a monitoring service.
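The four ServiceTypes above can be pictured with a toy ordered chain in which Tap services observe traffic but stay out of the forwarding path. This is purely an illustrative sketch, not Neutron code.

```python
# Sketch: a service chain as an ordered list of (name, service_type)
# entries. Tap services consume a copy of the traffic at their point in
# the chain but are not part of the forwarding path itself.
# Purely illustrative, not Neutron code.

VALID_TYPES = {"L3", "L2", "BUMP_IN_THE_WIRE", "TAP"}

class ServiceChain:
    def __init__(self):
        self.entries = []

    def append(self, name, service_type):
        if service_type not in VALID_TYPES:
            raise ValueError("unknown service type: %s" % service_type)
        self.entries.append((name, service_type))

    def forwarding_path(self):
        # only non-Tap services sit in the actual forwarding path
        return [n for n, t in self.entries if t != "TAP"]

chain = ServiceChain()
chain.append("fw", "BUMP_IN_THE_WIRE")
chain.append("lb", "L3")
chain.append("monitor", "TAP")
```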

Each service has one or two ports. A service instance can be operated on in these ways:

  • The service is provided by a hardware device (such as an LB or firewall appliance); neutron still needs to provide a port through which to ask the device to serve.

  • The service is provided by a VM; neutron also needs to provide a port to reach this VM, which may be dedicated or shared.

  • The tenant already has an existing service implementation; the two steps above may need to be combined to make use of it.

Neutron LbaaS的应用场景及实现要点

Neutron LBaaS supports the following usage scenarios:



  • The VIP can be placed on a router

  • The VIP can also be placed off the router

So in the implementation shown in the figure below, pay special attention to the firewall rules that allow the sgdefault namespace to reach the sgweb namespace; that is, the port used by the VIP should be associated with the gateway port, and also with the TAP used by the VM providing LB.


The firewall flow, as I analyze it, should be as follows:

1) On the compute node hosting the VMs that make up the LB service, the VM should get its own nova-compute-haproxy-instance chain under the nova-compute-local firewall rules:

-A nova-compute-local -d 10.0.0.8/32 -j nova-compute-haproxy-instance

-A nova-compute-haproxy-instance -s 10.0.0.0/24 -j ACCEPT

-A nova-compute-haproxy-instance -s 10.0.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT

-A nova-compute-haproxy-instance -j nova-compute-sg-fallback

-A nova-compute-sg-fallback -j DROP

At the same time, it should also have a default route so that the vip's port (located in the sgweb namespace) can reach its gateway:

route add default gw 10.0.0.1

2) The following firewall rule on the L3-agent ensures that the vip on the l3-agent can reach the other VMs in the LB pool:

-A nova-network-POSTROUTING -s 10.0.0.0/8 -d 10.0.0.0/8 -m conntrack ! --ctstate DNAT -j ACCEPT

References:

http://blog.csdn.net/quqi99/article/details/9898139

https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining

https://wiki.openstack.org/wiki/Neutron/ServiceInsertion

 

This article is adapted, with deletions and edits, from http://blog.csdn.net/quqi99/article/details/9898139.

