A General Optimization Framework for OpenFlow Rule Allocation and Endpoint Policy Enforcement (Part 3)

Preface: This post is based on a not especially well-known paper that I used as a reference for my graduation project, titled "OFFICER: A General Optimization Framework for OpenFlow Rule Allocation and Endpoint Policy Enforcement". Interested readers can find and download it by searching for the title on Google. For convenience, I go through the original text section by section; today is the third part.


III. RULE ALLOCATION UNDER MEMORY CONSTRAINTS
Considering the network as a black box offers flexibility but may lead to the creation of a potentially very large set of forwarding rules to be installed in the network [1], [7], [10]. With current switch technologies, this large volume of rules poses a memory scaling problem. Such a problem can be approached in two different ways: either the memory capacity of switches is not known, and the problem is then to minimize the overall memory usage to reduce the cost; or the memory capacity is known, and the problem becomes that of finding an allocation matrix that satisfies the high-level objectives of the operator and the endpoint policy as much as possible.



In Sec. III-A, we show how to use our model to address the memory minimization problem, while in Sec. III-B we use it to maximize traffic satisfaction in the case of constrained switch memory. Unfortunately, finding the optimal solution in all circumstances is NP-hard, so we propose a computationally tractable heuristic in Sec. III-C and evaluate different allocation schemes over representative topologies in Sec. IV.



A. Minimizing memory usage
A first application of our model is to minimize the overall amount of memory used in the network to store forwarding rules. This objective is shared by Palette [10] and OneBigSwitch [1], with the additional possibility in our case of relaxing the routing policy and viewing the network as a black box. To do so, one has to define the objective function so as to count the number of assigned entries in the allocation matrix, as detailed in Eq. (9).
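Based on this description, objective (9) can be sketched as follows. This is a reconstruction from the surrounding text, with a_{f,l} denoting the allocation matrix entry for flow f on link l and L the set of links; the paper's exact notation may differ:

```latex
% Sketch of objective (9): count the assigned entries of the allocation matrix
\min \sum_{f \in F} \sum_{l \in L} a_{f,l}
```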




Constraint (10), derived from constraint (8), is added to prevent packets from always being diverted to the controller (which would trivially minimize memory usage).




Parameters Cs (for any s ∈ S) and M, used by constraints (5) and (6), should be set to ∞. However, if for technical or economic reasons the individual memory of a switch cannot exceed a given value, then Cs must be set accordingly.



B. Maximizing traffic satisfaction
When the topology and switch memory are fixed in advance, the problem transforms into finding a rule allocation that satisfies the endpoint policy for the maximum percentage of traffic. The definition given in Sec. III-A is sufficient to this end. It must, however, be complemented with a new objective function that models the reward of respecting the endpoint policy, where a flow whose endpoint policy is not satisfied is assumed to bring no reward. A possible objective function for this problem is:




where wf,l ∈ R+ is the normalized gain obtained from flow f ∈ F if it is forwarded on link l ∈ E(f). In other words, wf,l rewards the choice of a particular egress link. In the typical case where the goal is to maximize the volume of traffic leaving the network via an egress point satisfying the endpoint policy, we have, for any f ∈ F and any l ∈ E(f): wf,l = pf.
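Matching this description, objective (11) can be sketched as follows. This is a reconstruction using the symbols defined above; the paper's exact formulation may differ:

```latex
% Sketch of objective (11): reward traffic leaving via a preferred egress link
\max \sum_{f \in F} \sum_{l \in E(f)} w_{f,l} \, a_{f,l}
```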



Theorem 1. The rule allocation problem defined to maximize traffic satisfaction is NP-hard.

Proof. Let us consider an instance of the problem defined with the objective function (11), with the topology consisting of one OpenFlow switch, one ingress link, and one egress link e for all flows. Then, let us assume that the switch memory is larger than the number of flows and thus the limitation only comes from the available bandwidth at the egress link e. The problem then becomes how to allocate rules so as to maximize the gain from the traffic exiting the network at egress link e (the rest of the traffic is forwarded to the controller over the default path). For this instance, we can simplify the problem as follows:


This is exactly the 0-1 Knapsack problem, which is known to be NP-hard. Consequently, the rule allocation problem defined with objective function (11), from which this instance derives, is NP-hard.
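To make the reduction concrete, here is a standard dynamic-programming solver for the 0-1 knapsack instance the proof arrives at. The mapping is our illustration, not code from the paper: item value = the gain of delivering a flow at egress link e, item cost = the flow's bandwidth demand, capacity = the available bandwidth at e.

```python
def knapsack_01(gains, demands, capacity):
    """Maximize the total gain of flows delivered at the egress link
    without exceeding its bandwidth capacity (classic 0-1 knapsack DP)."""
    # dp[c] = best achievable gain using at most c units of bandwidth
    dp = [0] * (capacity + 1)
    for gain, demand in zip(gains, demands):
        # iterate capacities downward so each flow is selected at most once
        for c in range(capacity, demand - 1, -1):
            dp[c] = max(dp[c], dp[c - demand] + gain)
    return dp[capacity]
```

For example, with gains [60, 100, 120], demands [10, 20, 30], and capacity 50, the optimum is 220 (selecting the second and third flows). The DP runs in O(n · capacity) time, i.e. pseudo-polynomial, which is consistent with the problem being NP-hard.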



C. Heuristic
Finding a rule allocation that maximizes the value of the traffic correctly forwarded in the network when switch memory is predefined is not tractable (see Theorem 1). Therefore, an optimal solution can only be computed for small networks with a small number of flows. Consequently, we propose in this section a heuristic to find nearly optimal rule allocations in tractable time. The general idea of the heuristic is described in Sec. III-C1, and the exact algorithm and the study of its complexity are given in Sec. III-C2.



1) Deflection technique: The number of paths between any pair of nodes increases exponentially with the size of the network. It is therefore impractical to try them all. To reduce the space to explore, we leverage the existence of the default path. Our idea is to forward packets of a flow on the shortest path between the egress point of the flow and one of the nodes on the default path. Consequently, packets of a flow are first forwarded according to the default action and follow the default path without consuming any specific memory entry; they are then deflected from the default path (thereby consuming memory entries) to eventually reach an egress point. That way, we keep the number of paths to try tractable while keeping enough choices to benefit from path diversity in the network. The decision to use the shortest path between default paths and egress points is motivated by the fact that the shorter a path is, the fewer memory entries have to be installed, leaving room for other flows to be installed as well.



To implement this concept, for every flow, the switches on the default path are ranked and the algorithm tries each of them (starting from the best ranked) until an allocation respecting all the constraints is found. If such an allocation exists, a forwarding rule for the flow is installed on each switch of the shortest path from the selected switch on the default path to the egress point. The rank associated with each switch on a default path is computed according to a user-defined strategy. Three possible strategies are:



• Closest first (CF): as close as possible to the ingress link of the flow.

• Farthest first (FF): as close as possible to the controller.

• Closest to edge first (CE): as close as possible to the egress link.




In CF (resp. FF), the weight of a switch on the path is the number of hops between the ingress link (resp. the controller) and the switch. With CE, on the contrary, the weight of a switch is the number of hops separating it from the egress point. The deflection technique and the three strategies are summarized in Fig. 2.
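The three ranking strategies can be sketched as follows. The function name and the hops_to_egress argument are illustrative assumptions, not interfaces from the paper:

```python
def rank_switches(default_path, strategy, hops_to_egress=None):
    """Order the switches of a default path for deflection attempts.

    default_path lists switch ids from the flow's ingress toward the
    controller; hops_to_egress maps a switch to its hop count from the
    flow's egress point (needed for CE only). Lower weight = tried first.
    """
    indexed = list(enumerate(default_path))
    if strategy == "CF":      # closest to the ingress link first
        key = lambda pair: pair[0]
    elif strategy == "FF":    # closest to the controller first
        key = lambda pair: -pair[0]
    elif strategy == "CE":    # closest to the egress point first
        key = lambda pair: hops_to_egress[pair[1]]
    else:
        raise ValueError("unknown strategy: " + strategy)
    return [s for _, s in sorted(indexed, key=key)]
```

For a default path ["s1", "s2", "s3"], CF tries s1 first, FF tries s3 first, and CE follows the per-switch distances to the egress point.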



2) Greedy algorithm: Algorithm 1 gives the pseudo-code of our heuristic, called OFFICER, built around the deflection technique described in Sec. III-C1. The algorithm is based on the objective function in (11), which aims at maximizing the overall weight of flows eventually leaving the network at their preferred egress point. The algorithm is greedy in the sense that it tries to install the flows with the highest weight first and fills the remaining resources with less valuable flows. The rationale is that the flows with the highest weight account for most of the total reward of the network according to Eq. (11).


Line 2 orders the flows and their associated egress points according to their weights, so that the greedy placement starts with the most valuable flow-egress option. Line 4 determines the sequence of switches along the default path that the algorithm follows to greedily decide at which switch the flow is deflected from the default path to eventually reach the selected egress point.



The canAllocate(A, f, e, s) function determines whether or not flow f can be deflected to egress point e at switch s according to the memory, link, and routing constraints. Thanks to constraint (8), the canAllocate function ensures that a flow is not delivered to several egress points. Finally, the allocate(A, f, e, s) function installs rules on the switches towards the egress point by setting af,l = 1 for all l on the shortest path from the deflection point to the egress point. If there are several possible shortest paths, the allocate function selects the one with the minimum average load over its links.
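The overall greedy placement can be sketched as below. The callables rank_switches_fn, can_allocate, and allocate are hypothetical stand-ins for the paper's primitives (Algorithm 1's ranking step and the canAllocate/allocate functions); this is a sketch of the control flow, not the paper's code:

```python
def officer_greedy(flows, rank_switches_fn, can_allocate, allocate):
    """Greedy OFFICER-style placement sketch.

    flows: iterable of (weight, flow, egress) options.
    rank_switches_fn(flow, egress): ranked switches of the flow's default path.
    can_allocate(flow, egress, s): memory/link/routing feasibility check.
    allocate(flow, egress, s): install rules along the shortest path from s.
    """
    # Line 2 of Algorithm 1: most valuable flow-egress options first.
    for weight, flow, egress in sorted(flows, key=lambda t: -t[0]):
        # Line 4: walk the ranked switches of the flow's default path.
        for s in rank_switches_fn(flow, egress):
            if can_allocate(flow, egress, s):
                allocate(flow, egress, s)  # deflect the flow at switch s
                break                      # flow placed; move to next flow
        # If no switch worked, the flow stays on the default path
        # and is handled by the controller.
```

With a single switch that has memory for one rule, the higher-weight flow is placed and the other falls back to the controller, matching the greedy intent.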


When the number of flows is very large with respect to the number of switches and links, which is the common case, the asymptotic time complexity of the greedy algorithm is dominated by Line 2 and is hence O(|F| · log(|F|)). Unfortunately, even with this polynomial-time heuristic, computing an allocation matrix may be challenging, since its size is the direct product of the number of flows and the number of links. For example, in data-center networks both the number of links and the number of flows can be very large [11]. With thousands of servers, if flows are defined by their TCP/IP 4-tuple, the matrix can contain tens of millions of entries. A way to reduce the size of the allocation matrix is to ignore the small flows that, even if they are numerous, do not account for a large amount of traffic and can hence be handled by the controller.

