Manage application runtime environments in IBM PureApplication System

In IBM PureApplication System W1500 v1.0 and W1700 v1.0 (hereafter PureApplication System), deployers install applications (known as pattern instances) into runtime environments that administrators define using cloud groups and environment profiles. As an administrator setting up PureApplication System, what cloud groups and environment profiles should you consider and create?

For example, the application development lifecycle typically requires a separate runtime environment for each stage of development, such as DEV, TEST, and PROD. These environments should be separated so that activity in each one does not interfere with the others. Likewise, they may need to be subdivided further, such as sub-environments for multiple applications deployed to TEST or PROD, or to distinguish different geographies or release levels. How do you create such separation using PureApplication System cloud groups and environment profiles?

This article is the third in a series of three that explain the hardware and software foundation PureApplication System provides for hosting application runtime environments:

Each article builds on its predecessors to fully explain this foundation.

To determine the cloud groups and environment profiles you need, you first need to understand what these product features are for, which means first understanding some of the goals of cloud computing and how hardware resources are virtualized. We will then discuss the features in PureApplication System for managing virtualized resources, and finally strategies for making use of those features. We will also consider what principles can be derived from these strategies.

Cloud concepts

To understand PureApplication System, let's first review some fundamental concepts of cloud computing.

One goal of cloud computing is to optimize resource utilization, which can also be viewed as increasing application density. Resource utilization is the share of the available resources actually in use; the higher it is, the closer the resources are to their target limits. When resources are used less than their target limits, the unused capacity, and the cost associated with it, is wasted. Optimal resource utilization keeps usage as close to the target limits as possible. To optimize utilization, a system needs to manage each application's resource usage and predict that usage.

To better manage applications in a cloud, a cloud computing system runs each application as a workload. A cloud computing system is the set of computer hardware and software that runs a cloud, specifically a cloud environment capable of running applications as workloads. A workload is a running program packaged as a virtual application. A virtualized application is installed independently of any particular set of hardware, so it can run anywhere on a virtualized platform. A virtualized platform is a set of hardware configured with one or more hypervisors to run its software as virtual machines. The platform can also virtualize access to storage and network resources. A hypervisor (also known as a virtualization manager or platform virtualizer) is a specialized operating system that only runs virtual machines. A virtual machine (VM) is software that emulates a physical computer by running an operating system the way a computer would, but the computer is virtual in that the VM's hypervisor separates the operating system from the underlying hardware. This approach enables a hypervisor to run multiple operating system instances (each as a virtual machine) on a single set of hardware (such as one physical computer), and to manage each one's access to the hardware resources. It also makes each VM highly portable, so that it can run anywhere on the virtualized platform.

Navigating the IBM cloud, Part 1: A primer on cloud technologies introduced platform virtualization using hypervisors. It showed the virtualization stack in Figure 1, with virtual machines running in a hypervisor that runs either directly on the physical hardware (Type 1) or in a host operating system on that hardware (Type 2). Each VM needs its own unique IP address so that it can participate in the network as an independent computer. The hypervisor itself needs another IP address so that it can be managed remotely.

Figure 1. Types of hypervisors

Resource utilization

A cloud computing system provides its workloads with access to virtualized computing resources as needed. The system manages the resources each workload is allowed to use, to ensure that all of the workloads can run successfully.

To manage workloads' access to virtualized resources, the system employs several competing approaches:

  • Isolation : Workloads can be isolated from each other, so that a problem in one workload (such as a runaway process consuming all of the CPU or memory) does not affect the others.
  • Sharing : Workloads should draw from a common pool of resources, so that one can use more resources when the others do not need them.
  • Allocation : Each workload, or set of related workloads, should receive a limited set of the shared resources, ensuring it gets the minimum it needs to run successfully but capping its growth so that it does not consume more than its fair share.

These competing approaches must be balanced. If all workloads were completely isolated, that would be the old computing model in which each application is deployed on separate hardware. If workloads shared all resources, whichever grabbed the most resources first would starve the rest. Allocation strikes a balance by grouping workloads, allowing sharing within a group by letting each group draw from a common pool of resources, while ensuring that each group receives a limited amount of those resources. This helps ensure that each workload gets its fair share.

The aim of this combination of approaches is to optimize resource utilization. Isolating resources, or dedicating them to a workload, lowers utilization by forcing capacity to sit idle when the workload does not need it. Conversely, sharing resources alleviates this problem and increases utilization by allowing one workload to use a resource when another does not need it.

Let's explore these approaches in more depth to see how they help optimize resource utilization.

Resource isolation

Resource isolation creates figurative walls between sets of resources so that they operate independently. That way, a problem in one walled-off area does not affect the others. The consequence is that resources cannot be shared between the walled-off areas.

The two main aspects of isolation in cloud computing are:

  • Computational isolation : Groups of CPU and memory capacity are separated from each other. When two workloads execute with computational isolation, one workload's consumption of these resources does not affect the other. The isolation can be either physical or virtual.
    • Physical computational isolation : Also known as dedicated resources, this means each set of resources is composed of separate chips on the board.
    • Virtual computational isolation : Also known as virtualized resources, this means a virtualization layer creates seemingly independent sets of resources that may actually share the same chips. The isolation provided is only as good as the virtualizer (typically a hypervisor) that implements it.
  • Network isolation : Communication flows between computational resources over separate connections. The isolation can be either physical or logical.
    • Physical network isolation : This means separate network connections run on parallel sets of networking equipment, such as network interface cards (NICs), cables, and switches. That way, one network's signals never travel on another network's hardware.
    • Logical network isolation : This means separate network connections run on the same networking equipment, so they share the same bandwidth, but their packets are routed through the shared hardware in separate broadcast domains. A broadcast domain is typically implemented as a virtual local area network (VLAN).

Resource sharing

Resource sharing (also known as resource pooling) enables multiple workloads to access their resources from a shared pool. A workload does not care which resource it gets from the pool; they are all equivalent and interchangeable. When a workload needs a lot of resources, it takes multiple items from the pool. At other times, it consumes fewer items from the pool. When it finishes using a resource, the workload releases it back into the pool.

The advantage of sharing is that the system can allocate the resources dynamically, shifting them over time from workloads that need less to workloads that need more. The consequence of sharing is that a misbehaving workload can consume too much of the pool, starving the other workloads.

Resource allocation

Resource allocation, also known as logical isolation, is a way to set boundaries on workloads by putting lower and upper limits on resource sharing. Allocation ensures that a workload gets at least the minimum resources it needs and consumes no more than its fair share. The system can set allocation limits on any shared resource: CPU, memory, storage, bandwidth, even software licenses. For example, perhaps two workloads sharing a pool each specify a CPU usage of 5-10 CPUs. With this setting, each workload is guaranteed at least 5 CPUs, but as it grows it can have at most 10 CPUs.

Allocation balances isolation and sharing, finding a happy medium between each workload getting its own dedicated resources with no sharing at all, and uncontrolled sharing in which any workload can grab all of the shared resources. Allocation enables sharing, but within limits.
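As a sketch, the 5-10 CPU example above can be expressed as a simple clamping rule. The function name and default bounds here are illustrative, not a product API:

```python
# Minimal sketch of allocation with a floor and a ceiling, following the
# 5-10 CPU example in the text. Names and defaults are illustrative.
def grant_cpus(requested: int, floor: int = 5, ceiling: int = 10) -> int:
    """Grant a workload its request, clamped to its allocation bounds."""
    return max(floor, min(requested, ceiling))

print(grant_cpus(2))   # 5  -> guaranteed minimum even under light demand
print(grant_cpus(7))   # 7  -> a request within bounds is honored
print(grant_cpus(14))  # 10 -> growth is capped at the fair-share ceiling
```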

Optimizing utilization

Resource sharing helps increase utilization, but allocation is still needed to ensure fair access to the shared resources. This raises the question of how to set the allocations. The safest way to allocate a shared resource is to guarantee that the workloads' total never exceeds the resource's target limit. Then, even if every workload claims its full allocation simultaneously, each receives the resources it requests and the resource is fully utilized. However, most workloads use their full allocation only some of the time. If allocations are based on average demand, workloads starve for resources whenever they exceed their average, while capacity sits idle whenever demand is below average. If allocations are based on peak demand, workloads always get the resources their load requires, but even more capacity goes unused even more of the time. Therefore, allocating only the available resource capacity to workloads produces only partial resource utilization, not the full utilization desired.
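The trade-off between average-based and peak-based allocation can be seen with a small worked example; the demand numbers below are invented for illustration:

```python
# Hypothetical sketch: compare two static allocation policies for one
# workload whose demand varies over time. All numbers are invented.
demand = [2, 3, 8, 4, 2, 9, 3, 2]    # CPUs needed per time slice
peak_alloc = max(demand)             # guarantees demand is always met
avg_alloc = sum(demand) / len(demand)

# With a peak-based allocation, capacity sits idle whenever demand < peak.
idle = sum(peak_alloc - d for d in demand)

# With an average-based allocation, the workload starves at its peaks.
shortfall = sum(max(0, d - avg_alloc) for d in demand)

print(f"peak allocation: {peak_alloc} CPUs, {idle} CPU-slices idle")
print(f"average allocation: {avg_alloc} CPUs, {shortfall} CPU-slices short")
```

Either policy wastes something: peak-based allocation wastes capacity, average-based allocation starves the workload at peak load.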

Figure 2, taken from Navigating the IBM cloud, Part 1: A primer on cloud technologies, shows what resource allocation based on peak demand looks like. Because resource utilization at any particular time is at most equal to the system capacity, and is usually much less, much of the resource capacity goes unused, which lowers resource utilization.

Figure 2. Resource utilization

Resource allocation can be exploited to increase utilization. Resource over allocation is a technique in which a system deliberately commits more capacity than it actually possesses. The bet is that enough workloads will underutilize their allocations that the system really does have enough capacity to meet actual demand. This technique is often used to allocate physical assets with a high opportunity cost: airlines sell more seats than the plane holds; hotels allow more rooms to be booked than the building contains; banks store less cash in the vault than the sum of their customers' account balances. Over allocation is not necessarily a bad practice; rather, it is a bet that the actual demand from a diverse set of consumers at any particular time will be less than what those consumers are individually entitled to expect.

Resource over allocation is not actually a problem, only a potential problem. It creates the opportunity for resource contention, which is an actual problem. Resource contention occurs when demand for a shared resource exceeds the system capacity for that resource. Too many workloads expect too much of the resources they were promised, collectively exceeding the system's available capacity. In the real world, this manifests as an airline overbooking a flight, hotel guests having to share rooms or be accommodated at another property, and a financial institution facing a run on the bank. Contention for resources is a major reason nations go to war. Hopefully, cloud computing can help keep workloads and their stakeholders from going to war with each other.

Figure 3 shows what over allocation, and specifically resource contention, looks like. The system has a certain resource capacity, but it has promised its workloads a greater capacity, resulting in over allocated capacity. As long as the workloads need less than the total available capacity, everything is fine. But when the workloads demand too much of what the system has allocated to them all, the total demand becomes greater than the resources available, and resource contention occurs. The system must resolve it so that actual usage stays beneath capacity.

Figure 3. Resource contention

When resource contention occurs, the system must resolve it. By default, hardware tends to resolve resource contention by crashing: a single process, or the entire operating system, stops working. Simplistic efforts to avoid crashes are often not much better. For example, an operating system may sacrifice the single process consuming the most resources, but that may be the process supporting the most users, and therefore the most important one! Intelligence is needed to resolve resource contention satisfactorily. Live human intervention is usually unsatisfactory because contention needs to be resolved immediately; it cannot wait for a committee process to determine corrective action, much less wait for that action to be carried out. The intelligence that resolves resource contention must be automated so that it can be applied quickly.

Resource contention management is automated intelligence that resolves resource contention. The management must pick winners and losers among the workloads contending for the over allocated resources, making its decisions based on a combination of workload prioritization and resource rationing.

  • Prioritization : This technique awards resources to workloads by importance. A workload's priority can be predetermined by status, or the system can assign priorities dynamically, such as on a first-come, first-served basis. However the priorities are assigned and enforced, the system satisfies the requests of the highest priority workload first, and keeps doing so for workloads of decreasing importance until it exhausts the shared resources, at which point it denies the requests of the remaining lower priority workloads. The consequence of prioritization is that the high priority workloads do not suffer at all, but the low priority workloads suffer significantly.
  • Rationing : This technique awards each workload only a portion of the resources it requests. Typically, the portion is the ratio of the resources available to the total resources requested. Thus, a bigger workload receives a larger absolute share of the resources, but also suffers a larger absolute shortage. The consequence of rationing is that all workloads get some portion of what they ask for, but all of them are starved of the constrained resources.
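A minimal sketch of these two techniques, with invented workloads and numbers:

```python
# Hypothetical sketch of the two contention-management techniques described
# above: strict prioritization versus proportional rationing.
def prioritize(requests, capacity):
    """Serve requests in descending priority until capacity runs out.
    requests: list of (name, priority, amount) tuples."""
    granted = {}
    for name, _priority, amount in sorted(requests, key=lambda r: -r[1]):
        give = min(amount, capacity)
        granted[name] = give
        capacity -= give
    return granted

def ration(requests, capacity):
    """Give every workload the same fraction of its request."""
    total = sum(amount for _, _, amount in requests)
    fraction = min(1.0, capacity / total)
    return {name: amount * fraction for name, _, amount in requests}

reqs = [("prod", 3, 8), ("test", 2, 6), ("dev", 1, 6)]
print(prioritize(reqs, 10))  # prod gets all 8, test gets 2, dev gets 0
print(ration(reqs, 10))      # everyone gets half of what it asked for
```

Under prioritization the low-priority workload is denied entirely; under rationing every workload runs, but all run short.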

To summarize, cloud computing employs techniques to optimize resource utilization. Resource over allocation is a technique that can increase resource utilization, but it creates the opportunity for resource contention. When resource contention occurs, the system must resolve it using resource contention management, through a combination of prioritizing workloads and rationing resources. Over allocation usually increases utilization, but it also increases the likelihood and severity of resource contention.

PureApplication System environment features

To understand how best to use PureApplication System to isolate workloads while sharing resources, it helps to first understand the main features the product offers for sharing resources as a cloud.

Five types of system resources in PureApplication System are relevant to deploying and running pattern instances. They define the runtime environments available for deploying patterns into, and control the resources available for running the pattern instances. They are:

  • Compute node : This is a set of computer hardware containing CPUs and memory, with access to storage and networking.
  • IP group : This is a set of IP addresses, the ID of the VLAN they will use for communication, and settings for how to connect to the network where the VLAN resides.
  • Cloud group : This is a collection of one or more compute nodes and one or more IP groups. It is essentially a logical computer. It isolates resources physically.
  • Environment profile : This is a policy for deploying patterns into cloud groups. It creates logical isolation of resources by allocating them.
  • User group : This is a list of users with the same role, one that can use environment profiles to deploy patterns.

An environment profile associates user groups with cloud groups so that the users in those user groups can deploy patterns to those cloud groups, as shown in Figure 4. An environment profile grants access to user groups, to specify who can use the profile to deploy patterns. A profile can grant access to multiple user groups, and a user group can be granted access to multiple profiles. An environment profile also specifies the cloud groups it can deploy to. Multiple environment profiles can deploy to the same cloud group, and a single profile can deploy to multiple cloud groups.

Figure 4. Relationships between PureApplication System resources

Figure 5 shows what some typical instances of these system resources might look like.

Figure 5. Typical PureApplication System resource instances

For a given combination of user group, environment profile, and cloud group:

  • The user group specifies which users can use the environment profile to deploy patterns.
  • The environment profile specifies that those users can deploy patterns to the cloud group.
  • The cloud group specifies the hardware (specifically compute nodes and IP groups) the deployed patterns will run on.

Settings in the environment profile govern the resources in the cloud group that are allocated to a pattern instance at deployment time. These become settings in the pattern instance that control how the cloud group runs and manages the instance. This helps coordinate and control how multiple pattern instances are deployed to and run in the same cloud group. Instances deployed through different environment profiles can have their settings set differently, to use different resources with different limits. The Environment profile section has more details.

The PureApplication System administration console is divided into two main tabs:

  • System Console : This is for the administrator roles. It provides access to the operational artifacts managed by hardware and cloud administrators, that is, users who administer PureApplication System.
  • Workload Console : This is for the deployer roles. It provides access to the development artifacts managed by workload administrators, such as users who develop and deploy patterns on PureApplication System.

Cloud groups are administered in the System Console, whereas environment profiles are administered in the Workload Console. A pattern deployer should think of environment profiles, not cloud groups, as deployment targets. However, environment profiles as a way to hide cloud groups is a leaky abstraction: when deploying a pattern, you select an environment profile, but then for each virtual part, you need to select a cloud group, or at least accept the default.

Compute nodes

A compute node is essentially a very compact computer, not unlike a blade server. As a computer, a compute node could run an operating system. In PureApplication System, however, each compute node runs a hypervisor rather than a traditional operating system.

Earlier, Figure 1 showed how a hypervisor enables multiple virtual machines to share a set of underlying hardware. In PureApplication System, as Figure 6 shows, the physical hardware is a compute node, a Type 1 hypervisor runs directly on the compute node, and each VM runs a middleware server in an operating system. Technically, a VM can run any program that can be installed in the OS. To be deployed onto PureApplication System, it would need to be developed as a virtual appliance, as discussed in Navigating the IBM cloud, Part 1: A primer on cloud technologies. In practice, however, PureApplication System primarily uses its VMs to run middleware servers.

Figure 6. Hypervisor stack in a compute node

Here is a brief summary of the hardware in a compute node. If you are curious about more hardware details, see A tour of the hardware in IBM PureApplication System.

A compute node includes:

  • CPU : An Intel® compute node contains 16 physical (32 logical) cores.
  • Memory : An Intel compute node contains 256 GB of RAM.
  • Storage : A compute node contains an 8 Gb SAN adapter.
  • Networking : A compute node contains a 10 Gb Ethernet adapter.

It has access to resources shared by all of the compute nodes:

  • Storage : PureApplication System includes two IBM Storwize V7000 storage units with 6.4 TB of SSD and 48 TB of HDD storage, accessed as a SAN.
  • Networking : PureApplication System contains two BLADE Network Technologies (BNT) 64-port Ethernet switches, which together form the hub of its internal networking hardware and enable it to connect to the enterprise network.

IP groups

An IP group is a set of IP addresses, expressed as a list or a range. Because a network does not work properly with duplicate IP addresses, each address in a group is unique, and an address can belong to only one group. The group also has a setting for the ID of the VLAN the addresses will use to communicate, and settings for how to connect to the external network where the VLAN resides. All of the addresses in the set must belong to the same subnet on the external network, as indicated by the netmask.

An IP group serves three functions:

  • Dynamic IP address sharing : An IP group is a shared pool of addresses that the system draws from when deploying the VMs that compose each pattern instance.
  • IP address pool isolation : Two teams can use different groups, so that if one team deploys too many applications and uses up all of its addresses, the other team can still deploy applications because it draws from a separate pool.
  • Logical network isolation : The VLAN specified by the IP group's VLAN ID enables the group's applications to communicate on an isolated logical network.

A virtual local area network (VLAN) logically isolates its network traffic from that of other VLANs on the same network, acting as a separate broadcast domain. This means that the network traffic of applications using two IP groups with different VLAN IDs travels on (seemingly) independent networks. This is helpful, for example, for deploying a pattern's HTTP servers on a different network from its WebSphere Application Server custom nodes, so that a network firewall can be placed between the two tiers. It is also helpful for isolating the network traffic of unrelated applications, such as development versus production, or the Finance department versus Human Resources.

PureApplication System does not define VLANs; the network administrators define them on the network. But it does make extensive use of them. PureApplication System uses VLANs in two different ways: as management VLANs or as application VLANs.

Management VLANs are used internally by the system.

  • Communication : These VLANs enable the system's internal processes to communicate with each other.
  • IP addresses : The system uses its own internal IP addresses, so the network administrators do not need to provide any. They do still need to reserve each VLAN ID on the network, so that the network does not create another VLAN with the same ID.
  • Scope : Traffic on these VLANs flows only internally within the system. It should never flow externally onto the enterprise network.

Application VLANs are used by the business applications deployed on the system.

  • Communication : The VLANs enable the applications to communicate within themselves, with each other, and with resources on the network.
  • IP addresses : The network administrators must provide not only a VLAN ID for each VLAN, but also the pool of IP addresses the applications will use to connect to the network. The administrators should reserve the VLAN IDs and IP addresses so that the network does not use these values elsewhere.
  • Scope : Traffic on each of these VLANs flows internally within the system and externally on any parts of the enterprise network also configured to use the VLAN.

PureApplication System itself requires three management VLANs, and each cloud group requires another management VLAN. Each IP group requires an application VLAN. While all of the IP groups could share the same VLAN, typically separate VLANs are used for different groups of applications. The appropriate combination of IP groups, application VLANs, and sets of IP addresses depends on how your enterprise architects and network administrators want to isolate the applications' network traffic.

The assignment of an IP group's IP addresses follows the typical lifecycle of a shared resource composed of discrete items:

  • The system assigns an IP address to a VM when its pattern is deployed (that is, when the VM is created as part of the pattern instance being created). Much like a DHCP server, the IP group provides an IP address when the deployment process requests one.
  • The address remains assigned to the VM even when the pattern instance is stopped or stored.
  • When the pattern instance is deleted, the address returns to the IP group.
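This lifecycle can be modeled with a small sketch; it is an illustrative model, not a real PureApplication System API:

```python
# Illustrative model of the IP-group lifecycle described above. Addresses
# move from the free pool to a VM at deployment, stay assigned while the
# instance is stopped or stored, and return only when it is deleted.
class IPGroup:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.assigned = {}            # VM name -> address

    def deploy(self, vm):
        if not self.free:
            raise RuntimeError("IP group exhausted")
        self.assigned[vm] = self.free.pop(0)
        return self.assigned[vm]

    def delete(self, vm):
        self.free.append(self.assigned.pop(vm))

group = IPGroup(["192.168.1.10", "192.168.1.11"])
addr = group.deploy("vm1")   # address allocated at deployment time
# Stopping or storing the instance would keep the address assigned.
group.delete("vm1")          # deleting the instance returns the address
```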

To define a group, the enterprise's network administrator specifies all of the group's settings: the set of IP addresses, the VLAN ID, and the other settings specifying how to connect to the enterprise network, such as the gateway, subnet, and DNS hosts. The PureApplication System administrator does not get to select these settings; he defines the IP group using the settings the network administrator gives him.

For a given set of network settings from the network administrator, the PureApplication System administrator can choose how many IP groups to set up. What the network administrator specifies is:

  • A set of network settings: gateway, subnet, DNS, and so on
  • The ID of a VLAN on that network
  • A set of IP addresses on that VLAN

In turn, the PureApplication System administrator captures these settings as one or more IP groups. Each IP group will have the same network settings and VLAN ID; the difference is that the set of IP addresses can be split across multiple IP groups. An IP address can belong to only one IP group, so the number of possible IP groups ranges from just one up to the number of addresses in the set. For example, if a set contains ten IP addresses, you could create one of these options:

  • One IP group with all ten addresses
  • Two IP groups with five addresses each
  • Ten IP groups with one address each

Again, each of these groups has the same VLAN ID and other network settings. The only difference is the set of IP addresses.

Cloud groups

A cloud group is a virtualized platform for running workloads that acts like a logical computer. It accomplishes two main goals:

  1. System segmentation : It divides a PureApplication System into one or more logical computers. The groups run isolated from each other.
  2. Compute node aggregation : It combines one or more compute nodes, plus at least one IP group, into a single logical computer whose capacity can be greater than that of a single node.

Figure 4 showed the relationships between cloud groups, compute nodes, and IP groups: a cloud group contains compute nodes and IP groups.

Besides containing a set of compute nodes and IP groups, a cloud group has three main properties:

  • Name : This is what you call the cloud group; for example: DEV, TEST, or PROD.
  • Type : Think of this setting as the cloud group's resource over allocation policy. It defines how resources (specifically CPU) are allocated to virtual machines (VMs) during pattern deployment. It affects the CPU capacity available to a VM, and to all of the VMs deployed via an environment profile, which becomes especially significant when user load is high on multiple applications.
  • Management VLAN ID : This is the ID of the VLAN the cloud group uses to enable internal communication among its VMs. It must not already be in use on the network. The VLAN does not need any IP addresses because the VMs are assigned addresses from the IP groups.

A cloud group's type setting has one of two possible values: dedicated or average.

  • Dedicated : This is no CPU over allocation. It is intended for applications whose normal state is high user load, such as production applications. For a cloud group with this policy:
    • 1 virtual CPU = 1 physical CPU
    • The 16 cores on an Intel compute node are allocated as 16 CPUs.
    • The VMs of the patterns deployed to this cloud group are expected to make full use of the CPU capacity assigned to them, which is typical of applications that frequently experience high user load.
    • Fewer patterns can be deployed to a cloud group with this setting, but they avoid the resource contention caused by the CPU over allocation approach.
  • Average : This is 4-to-1 CPU over allocation. It is intended for applications whose normal state is low user load, such as development applications. For a cloud group with this policy:
    • 4 virtual CPUs = 1 physical CPU
    • The 16 cores on an Intel compute node are allocated as 64 CPUs.
    • VMs of the patterns deployed to this cloud group are expected to under utilize the CPU capacity assigned to them, which is typical of applications that need to be available, but are not used heavily.
    • You can deploy more patterns to a cloud group with this setting. However, their CPU over allocation means that when the applications are used heavily, they encounter resource contention that leads to degraded performance.

A cloud group acts like one big logical computer that is a virtualized platform. Each virtual machine that runs in a cloud group executes in one of the cloud group's compute nodes and runs with an IP address from one of the IP groups.

A particular compute node can belong to only one cloud group (at most). Typically, a cloud group contains at least two compute nodes so that the group can keep running even if one of the nodes fails. This limits the number of cloud groups that a single PureApplication System can support. For example, a small Intel configuration has six compute nodes, which means that it can support a maximum of six cloud groups, and probably supports just three cloud groups (assuming two compute nodes per group). A particular IP group can belong to only one cloud group (at most), so a lack of IP groups can limit the number of cloud groups that can be created. Each cloud group also requires its own management VLAN, so a lack of VLANs assigned by the network administrators can limit the number of cloud groups that can be created.

Cloud groups create isolated runtime environments, such that workloads running in one group are not affected by workloads running in another group.

Environment profile

An environment profile defines policies for how patterns are deployed to one or more cloud groups and how their instances run in the cloud group. To deploy a pattern, a user selects a profile for performing the deployment, which in turn specifies the cloud groups the deployer can deploy patterns to. The deployer should think of the environment profile as the target he is deploying a pattern to. The fact that the profile deploys the pattern instance into a cloud group is a system-level detail that a workload-level user, like a deployer, does not need to be aware of.

The profile specifies several configuration settings:

  • Access granted to : This is who is allowed to use the profile to deploy patterns.
  • Deploy to cloud groups : This is the list of cloud groups this profile can deploy patterns to, and for each cloud group, which IP groups the deployer can use.
  • IP addresses provided by : This is how the deployment process assigns IP addresses from the group.
    • IP Groups : Automatic; addresses are selected by the deployment process.
    • Pattern Deployer : Manual; addresses are selected by the deployer during the deployment process.
  • Environment limits : This enforces limits on the resources available to pattern instances that are deployed through this profile.
    • Computational resources : These are resources such as CPU, memory, and storage.
    • Licenses : This is PVUs per product
  • Deployment priority : This is used to prioritize pattern instances dealing with resource contention (deployment, runtime resources, and failover). See the PureApplication System virtual machine behavior section. The possible priority levels are Platinum, Golden, Silver, and Bronze.
  • Virtual machine name format : This is a naming convention used when creating virtual machines.
  • Environment : This is a list of environment roles, which is a label for the users' convenience that is used in the pattern deployment properties dialog to filter profiles (see the Type field in Figure 7 ). The possible roles are Production, Test, and so on.

Multiple profiles can deploy to the same cloud group, and a single profile can deploy to multiple cloud groups (see Figure 4 and Figure 5 ). Not all deployers have access to all of the system's environment profiles (see User group below). To deploy to a particular cloud group, a deployer needs access to an environment profile that can deploy to that cloud group. That profile will in turn set policies about how those patterns are deployed, such as which IP groups are made available for use and setting the resource limits.

Different profiles enable users deploying potentially the same pattern to the same cloud group to do so with different limits and different settings applied to the pattern instances. When an environment profile deploys a pattern instance, it allocates a portion of the cloud group's shared resources to the instance, creating logical isolation of the instance. Profiles can place limitations on separate teams deploying to the same cloud group to logically isolate their pattern instances and prevent them from using too many resources, such as using the same IP group (and therefore, all of the IP addresses), or too much of the underlying CPU capacity. A profile also sets properties of a pattern instance that are enforced when the cloud group runs the instance.

When two user groups share an environment profile (that is, both are assigned to it), this means that both have the same pattern deployment policies and their pattern instances share the same resource allocations. For two user groups to have different policies or separate allocations of resources, each group needs to be assigned to its own environment profile. Two groups sharing the same profile means that both groups' deployments count against the same limits. Two groups using different profiles means that each group's deployments count against its own limits.

User group

A user group represents a role: a set of types of tasks to be performed and the permissions needed to perform those types of tasks. A user group has two main properties:

  • Group members : A list of users who perform this role.
  • Permissions : Capabilities needed by users in this role to perform their tasks.

One of the main functions of a user group is to specify which users can use a particular environment profile to deploy a pattern.

PureApplication System virtual machine behavior

An application is deployed to PureApplication System as a pattern instance composed of virtual machines for running the middleware the application runs in. The behavior of these virtual machines can be tuned with the settings in the environment profile used to deploy the pattern and the settings in the cloud group the pattern instance runs in.

The influence of these environment profile and cloud group settings on the behavior of the virtual machines is seen in two respects:

  • Prioritization : This is the importance of each virtual machine, relative to that of all the other virtual machines in the same cloud group.
  • Resource requirements : This is the amount of resources a virtual machine consumes from the pool allocated by the profile and provided by the cloud group. It depends on what the virtual machine says it requires and how the cloud group accounts for those requirements.

The following sections describe how prioritization and resource requirements influence the behavior of the virtual machines.

Prioritization

The individual virtual machines in deployed patterns include prioritization settings. These settings become relevant during times of resource contention, which is when a cloud group's virtual machines require more resources than the cloud group has available. Resource contention can occur in these situations:

  • Deploying multiple pattern instances concurrently.
  • Assigning runtime resources that are over allocated and overloaded.
  • Moving VMs from one compute node to another.

When these situations occur, the system gives preference first to the higher priority VMs.

PureApplication System prioritizes a VM based on two settings:

  • Profile deployment priority : This is the deployment priority specified in the environment profile used to deploy the pattern. This value is set by the administrator who configures the environment profile. Possible values are Platinum, Golden, Silver, and Bronze.
  • Deployer deployment priority : This is the priority specified in the pattern deployment properties dialog used to deploy the pattern, including selecting the environment profile, as shown in Figure 7. This value is set by the deployer who deploys the pattern. Possible values are High, Medium, and Low.

All VMs in the same pattern instance have the same priority because they are all deployed together.

Figure 7. Priority setting in the pattern deployment properties dialog

The combination of settings, in order of priority, is shown in Table 1.

Table 1. Priority weighting of workloads
Priority Weight
Platinum-High 16
Golden-High 12
Silver-High 8
Platinum-Med 8
Golden-Med 6
Bronze-High 4
Silver-Med 4
Platinum-Low 4
Golden-Low 3
Bronze-Med 2
Silver-Low 2
Bronze-Low 1

The system's internal processes run with a weight of 20, so they take priority over all user workloads.
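The weights in Table 1 follow a simple pattern: each is the product of a factor for the profile's deployment priority and a factor for the deployer's priority. A sketch, with factor values inferred from the table rather than taken from product documentation:

```python
# The Table 1 weights can be reproduced by multiplying a profile factor by
# a deployer factor. These factor values are inferred from the table; they
# are not documented product constants.
PROFILE = {"Platinum": 4, "Golden": 3, "Silver": 2, "Bronze": 1}
DEPLOYER = {"High": 4, "Med": 2, "Low": 1}

def weight(profile: str, deployer: str) -> int:
    """Combined priority weight of a VM, per Table 1."""
    return PROFILE[profile] * DEPLOYER[deployer]

print(weight("Platinum", "High"))  # 16, the highest user workload weight
print(weight("Golden", "Med"))     # 6
print(weight("Bronze", "Low"))     # 1, the lowest
```

This also makes the ties in the table easy to see: Silver-High, Platinum-Med, and the 4-weight cluster are distinct combinations that land on equal weights.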

This prioritization becomes especially relevant during failover. For example, if a compute node fails, the system recovers those VMs by restarting them on other compute nodes in the same cloud group. The VMs are moved and restarted in priority order, which means that the system recovers the VMs of the higher priority pattern instances faster so those VMs experience shorter downtimes than the lower priority ones. Also, if the target compute nodes do not have enough capacity for all of the failed VMs, then the lower priority VMs are not restarted. An administrator has to resolve this situation manually, typically by stopping some pattern instances to make resources available and restarting the failed ones.

Resource requirements

The individual virtual machines in deployed patterns include resource requirements settings. These specify the resources that the VM requires to run properly:

  • CPU count : This is the number of virtual CPUs assigned to this VM.
  • Virtual memory (MB) : This is the amount of virtual memory assigned to this VM.

The accounting for the CPU count depends on the type setting of the cloud group the VM runs in. For example, if the VM requires four virtual CPUs:

  • A dedicated cloud group assigns the VM four physical CPUs.
  • An average cloud group assigns the VM one physical CPU.
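This accounting amounts to dividing the virtual CPU count by the cloud group's over allocation ratio. A minimal sketch, with illustrative names:

```python
# Sketch of the CPU accounting described above: a dedicated cloud group
# maps virtual CPUs to physical CPUs 1:1, while an average cloud group
# over allocates at 4:1. Names are illustrative, not a product API.
RATIO = {"dedicated": 1, "average": 4}

def physical_cpus(virtual_cpus: int, cloud_group_type: str) -> float:
    """Physical CPU capacity backing a VM's virtual CPU request."""
    return virtual_cpus / RATIO[cloud_group_type]

print(physical_cpus(4, "dedicated"))  # 4.0 physical CPUs
print(physical_cpus(4, "average"))    # 1.0 physical CPU
```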

The numbers for these resource requirement settings should be higher for a VM that is expected to support a significant user load. If a VM's numbers are lower and it experiences high user load (and assuming the pattern instance does not load balance to other VMs), then this VM's users will experience degraded performance because it does not have enough resources to serve all of the requests with the customary response times.

Why not simply assign an overabundance of resources to all of your pattern's VMs? Because that curtails the number of pattern instances you can deploy and run successfully. Think of a VM as taking up space. The bigger each VM is, the fewer of them that fit. When you deploy a pattern, its resource totals are subtracted from the environment limits set by the environment profile. Once the limits for a profile reach zero, you cannot use that profile to deploy any more patterns until some of its pattern instances are stored or deleted. If you deploy patterns via separate environment profiles and over allocate the resources in the cloud group, VMs with overly generous resource settings make the over allocation even greater.
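The limit accounting described here can be sketched as follows; this is a hypothetical model, and the real console tracks more resource types than CPU and memory:

```python
# Hypothetical sketch of environment-limit accounting: each deployment
# subtracts its VM resource totals from the profile's limits, and a
# deployment is refused once a limit would be exceeded.
class EnvironmentProfile:
    def __init__(self, cpu_limit: int, memory_limit_mb: int):
        self.cpu = cpu_limit
        self.memory = memory_limit_mb

    def deploy(self, cpu: int, memory_mb: int) -> bool:
        if cpu > self.cpu or memory_mb > self.memory:
            return False          # limit reached; store or delete instances first
        self.cpu -= cpu
        self.memory -= memory_mb
        return True

profile = EnvironmentProfile(cpu_limit=16, memory_limit_mb=32768)
print(profile.deploy(8, 16384))   # True
print(profile.deploy(8, 16384))   # True  -> the limits are now exhausted
print(profile.deploy(1, 1024))    # False -> no capacity left in this profile
```

Oversized VMs consume these limits faster, which is exactly why generous resource settings curtail how many pattern instances a profile can deploy.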

When the cloud group's resources are over allocated and the VMs all try to use their allocations, the potential problem turns into an actual problem as over allocation becomes resource contention. The cloud group's resource contention management resolves the contention using prioritization and rationing. The lower priority instances either do not start, or if they are already running, receive less than all of the resources they require.

Therefore, when setting the resource requirements for the VMs in a pattern, you need to find the sweet spot between two opposing constraints:

  • Optimize application performance : Assign the pattern's VMs at least enough resources so that it runs adequately.
  • Optimize resource utilization : Assign the patterns' VMs at most the resources they need to run properly under expected load. This maximizes the number of pattern instances that you can deploy with an environment profile and can run in a cloud group.

Too few resources and your application's performance will suffer. Too many resources and you cannot deploy as many applications and resource utilization will be lower.

PureApplication System environment strategy

Let's consider some approaches on how to use these PureApplication System resource types to fulfill these cloud computing concepts in some common scenarios.

For all of these scenarios, remember the purpose of the two main PureApplication System resource types:

  • Cloud groups represent different physical deployment environments that pattern instances can run in. Cloud groups are physically isolated: You can stop a cloud group or it can fail without affecting the others. If a workload in a cloud group were to somehow go crazy and consume all of the resources, that affects the other workloads in that cloud group but it does not affect the workloads in the other cloud groups.
  • Environment profiles represent different logical deployment environments. They are targets for deployment that define policies for a set of deployers. Two user groups that have the same policies and share the same resource allocations should share an environment profile (that is, both are assigned to it). Two user groups which should have different policies or get separate resource allocations each need their own environment profile.

Scenario: Development lifecycle environments

One common approach for defining environments is to separate stages in the application development lifecycle. Lifecycle stages, and their corresponding runtime environments, typically include:

  • DEV : This is used for developing business applications.
  • TEST : This is used for testing applications.
  • PROD : This is used for running applications for use by business or end users.

Each environment typically runs on independent sets of hardware. Part of the motivation is to prevent problems that occur in the development environment from affecting the test environment and test problems from affecting production.

In PureApplication System, to create these three runtime environments and isolate them from each other, a good practice is to create three cloud groups: Dev, Test, and Prod. On a small configuration Intel system (which is one with six compute nodes), the configuration might be:

  • A Dev cloud group with one compute node (which, for an Intel compute node, gives developers 16 physical cores): Set its type to "average" since development applications are frequently unused and receive low user load.
  • A Test cloud group with two compute nodes : They should reside in two different chassis and different sides of the rack. Set its type to "dedicated" to mimic production.
  • A Prod cloud group with three compute nodes : They should be distributed across the three chassis using both sides. Set its type to "dedicated" since production applications are expected to be used heavily.

How hardware is arranged in the PureApplication System rack, such as compute nodes being housed in chassis and Intel compute nodes being stacked in two columns that are powered separately, is a detailed topic. For an overview of the system's hardware details, see A tour of the hardware in IBM PureApplication System .

Each of these three cloud groups also needs at least one IP group with an otherwise unused VLAN ID to keep their network traffic separated.

As for how many environment profiles to create, there are no hardware limitations so the sky's the limit, though some practical guidelines indicate what is needed. Typically, the set of users who can deploy patterns for PROD is smaller than the set of deployers for TEST, which is smaller than the number for DEV, as shown in Figure 8.

Figure 8. Relative number of deployers per environment

Likewise, the number of profiles needed for each runtime environment tends to decrease from DEV through PROD.

First, it is helpful to define a user group for workload "super users":

  • Workload Administrators : This is a group of workload super users who administrate the full set of workloads within the system (typically users with the Workload resources administration security role) and should be able to deploy patterns using any profile. Assign this group to every environment profile.

Here are some helpful profiles. Each profile has a corresponding user group.

  • Production application environment profiles : These deploy applications into the PROD environment. Set their priority to golden and optionally set the environment setting to "Production".
    • You can create separate profiles for each production application, or per department or line-of-business deploying production applications, which enables the settings to control who can deploy the patterns for each application, and to allocate resources differently for different applications or groups of applications.
    • Then again, it may be better to have one set of users responsible for deploying all applications into production, in which case they all use the same profile. One consequence of deploying all applications with the same profile is that all of the pattern instances share the profile's allocation of resources. If different sets of applications should draw from different allocations of resources, create separate profiles and assign the single production deployment user group to all of those profiles.
  • Test application environment profiles : These deploy applications into the TEST environment. Set their priority to silver and optionally set the environment setting to "Test". Create one profile per team deploying one or more applications to be tested. Using a separate profile for each team and assigning each profile separate resources, such as a separate IP group, helps keep that team's applications isolated within the cloud group, and prevents one team from using up too much of the cloud group's resources so that not enough remains available for the other teams.
  • Development application environment profiles : These deploy applications into the DEV environment. Set their priority to bronze and optionally set the environment setting to "Development". All developers can share one profile, but then each has to be trusted not to use too many resources. To enforce these limits, use a separate profile for each development team, or even each developer. Keep in mind that a profile is only useful if it has settings that are different from other profiles, such as different settings for who can deploy patterns, what IP groups to use, or to enforce limits on resources like CPU and PVUs.

Scenario: Multiple production environments

Another common approach might be to isolate multiple production environments. They are all used for production applications, so they all have equal priority. However, their applications are used for different purposes and so should be isolated from each other.

For example, consider these three hypothetical production environments:

  • Public web site : This hosts the web applications that customers access via the Internet. It is accessible by an enterprise's customers.
  • Internal HR applications : This hosts the applications used by the HR department. It is accessible by the department employees only.
  • Internal Finance applications : This hosts the applications used by the Finance department. It is accessible by the department employees only.

In PureApplication System, to create these three production environments and isolate them from each other, a good practice is to create three cloud groups: PUBLIC, HR, and FINANCE. On a small configuration with six compute nodes, you can assign each cloud group two compute nodes, spreading each pair across chassis and sides, and each with a different unused VLAN ID to use internally. Give each cloud group one or more IP groups. The IP groups for a cloud group can have the same VLAN IDs, but the IP groups for different cloud groups need different VLAN IDs to isolate the cloud groups' network traffic from each other. Assuming the applications are used heavily, set each cloud group's type to "dedicated". Otherwise, if a cloud group has a surplus of applications that are used sparingly, set its type to "average".

Create an environment profile for each production environment and assign it a user group whose users are responsible for deploying to that environment. Also create a Workload Administrators group of super-users (administrators of all workloads) and assign it to all of the profiles. If there are different teams that share an environment, but are having trouble playing together nicely, create a different user group and profile for each team. The profiles deploy their patterns to the same cloud group, but they allocate resources with limits to help isolate the teams' applications better.

Scenario: Utilization-based environments

One feature of cloud groups is the type setting:

  • Dedicated : This provides one physical CPU for every virtual CPU requested by a virtual machine.
  • Average : This provides one-quarter of a physical CPU for every virtual CPU requested by a virtual machine.

You can use this—in conjunction with the environment limits and priorities on an environment profile and the resource requirements on a virtual machine—to create a cloud group optimized for one of two different types of workloads:

  • A cloud group that runs fewer applications, but is better prepared for more of them to have a higher simultaneous user load.
  • A cloud group that overcommits its CPU to run four times as many applications, enabling underutilized applications to run at higher density for improved average resource utilization.

For an organization with both high-load applications and underutilized applications, a good practice is to create one cloud group of each type.
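The capacity difference between the two cloud group types reduces to simple arithmetic. This sketch is a hypothetical helper (not a product API) that applies the ratios stated above: "dedicated" maps one physical CPU to each virtual CPU, while "average" maps one-quarter of a physical CPU to each virtual CPU, a 4:1 overcommit.

```python
def vcpu_capacity(physical_cpus, cloud_group_type):
    """Virtual CPUs a cloud group of the given type can host.

    dedicated: 1 physical CPU per virtual CPU (no overcommit)
    average:   1/4 physical CPU per virtual CPU (4x overcommit)
    """
    overcommit = {"dedicated": 1, "average": 4}
    return physical_cpus * overcommit[cloud_group_type]

# Example: a cloud group whose compute nodes total 16 physical CPUs.
print(vcpu_capacity(16, "dedicated"))  # 16 virtual CPUs
print(vcpu_capacity(16, "average"))    # 64 virtual CPUs
```

The same hardware therefore hosts four times as many virtual CPUs in an "average" cloud group, at the cost of degraded performance if many of those VMs become busy at once.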

To apply this practice, create two cloud groups:

  • High Load : This is for applications that usually get significant user load:
    • Set the cloud group's type to "dedicated".
    • Set the environment profiles' environment limits conservatively to help prevent each user group from deploying too many patterns (which uses more than the team's fair share of limited resources).
    • If some applications have higher priority than others, set those in the environment profile and when deploying the patterns. This comes into play when the user load is greater than the cloud group's capacity.
    • Set the resource requirements in the patterns' VMs high, but only as high as they need at peak load.
    • Even if the load peaks on many of these applications simultaneously, this cloud group is better prepared to handle the load and keep the response times consistent.
  • Underutilized : This is for applications that need to be available and sometimes get a fair amount of user load, but generally are not used very heavily:
    • Set the cloud group's type to "average".
    • Set the environment profiles' environment limits liberally so that each team can deploy lots of patterns. The expectation is that they are not used much.
    • If some applications have higher priority than others, set those in the environment profile and when deploying the patterns. This comes into play when the user load is greater than the cloud group's capacity.
    • Set the resource requirements in the patterns' VMs conservatively, but high enough that they can run acceptably under a typical load.
    • This cloud group can accept a higher number of pattern deployments, with the caveat that performance suffers if the load peaks on many of these applications simultaneously.

If a user group is responsible for both high-load and underutilized applications, you can assign it to environment profiles for both cloud groups. Do not use a single environment profile for both cloud groups: you want separate resource limits for each, conservative for the high-load group but liberal enough for the underutilized group.

With this approach, you give the high-load applications the best chance of receiving the resources they need while also getting the best resource utilization possible running applications that are not used much.
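The two tuning recipes in this scenario can be captured as data, along with a simple routing rule. This is a sketch under assumed values: the 0.25 utilization threshold is a hypothetical heuristic chosen to mirror the 4:1 overcommit of an "average" cloud group, not a product setting.

```python
# The two cloud group tuning recipes from this scenario, as data.
tuning = {
    "High Load": {
        "cloud_group_type": "dedicated",
        "environment_limits": "conservative",  # few patterns per team
        "vm_resources": "sized for peak load",
    },
    "Underutilized": {
        "cloud_group_type": "average",
        "environment_limits": "liberal",       # many lightly used patterns
        "vm_resources": "sized for typical load",
    },
}

def pick_cloud_group(avg_utilization):
    """Route an application by its expected average CPU utilization
    (0.0-1.0). The 0.25 threshold is an assumed heuristic mirroring
    the 4:1 overcommit of an 'average' cloud group."""
    return "High Load" if avg_utilization > 0.25 else "Underutilized"

print(pick_cloud_group(0.8))   # High Load
print(pick_cloud_group(0.1))   # Underutilized
```

An application that is busy more than a quarter of the time would starve its neighbors in an overcommitted group, so it belongs in the dedicated one.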

Scenario: Shared development and test environment

You can consider combining some of the development lifecycle runtime environments. Combining TEST and PROD is a bad idea because testing can take resources away from PROD and problems in TEST can affect PROD. It is better to isolate those environments in different cloud groups.

On the other hand, combining DEV and TEST in a single environment may make more sense. This approach has both advantages and drawbacks.

  • Pro: The resources not being used for one purpose can easily and dynamically be used for another. For example, when the testing effort is low, you can use those resources for development, and then shift them back to testing when that increases.
  • Con: Heavy testing can consume most resources, limiting those available to development (assuming development is set as a lower priority). This is a pro when development ebbs during testing, perhaps because the developers are doing the testing. However, it is a con for developers trying to do development while the testing effort is significant.

To implement this approach, create one DEV-TEST cloud group. Create DEV and TEST pairs of the other resources: IP groups, environment profiles, and user groups. Of course, assign a Workload Administrators user group to both or all of the environment profiles. With these resources, make these settings in the two profiles, as shown in Table 2.

Table 2. Environment profile settings
Setting                   TEST profile            DEV profile
Access granted to         Test user group         Dev user group
Deploy to cloud groups    Dev-Test cloud group    Dev-Test cloud group
IP addresses              Test IP group           Dev IP group
Environment limits        10 virtual CPUs         5 virtual CPUs
Deployment priority       Higher than DEV         Bronze
Environment               Test                    Development

The advantages of this approach are:

  • Access granted to : Different user groups control who can use each profile to deploy the patterns.
  • Deploy to cloud groups : Both profiles deploy to the same shared cloud group.
  • IP addresses : Different IP groups make sure that one user group's pattern instances cannot use up all of the addresses and leave the other user group with none. It also enables separate VLANs if desired.
  • Environment limits : This makes sure that DEV gets the computational power of, at most, 5 virtual CPUs and TEST gets, at most, 10 virtual CPUs. If they grow to use all available CPUs, when the higher-priority environment (TEST) needs more CPU, that capacity is taken from the lower-priority environment (DEV).
  • Deployment priority : Sets TEST with a higher priority than DEV so that resource contention is resolved in favor of the applications deployed with the TEST profile.
  • Environment : This is ignored by the system.

PureApplication System environment principles

These scenarios make consistent use of principles that tend to apply to all scenarios:

  1. Use cloud groups to divide a PureApplication System into isolated logical computers. To isolate each cloud group's network as well, each cloud group's IP groups need to have a different application VLAN ID.
  2. Tune each cloud group for either high-load applications with low density or lightly used applications with high density. If you have both types of applications, create one of each type of cloud group.
  3. An environment profile is typically used to deploy to a single cloud group.
  4. Use multiple environment profiles to separate user groups sharing a cloud group, such as multiple teams deploying to the same testing environment. This enables the PureApplication System administrator to confine each group to use its own IP groups, priorities, and resource limits, which enables (and perhaps forces) the users and their applications to cooperate more easily.
  5. Create a Workload Administrators user group for superusers (those who administer the PureApplication System) and assign it to all environment profiles.

Follow these principles and you are well on your way to using the features of PureApplication System successfully.

Conclusion

This article explained how to design and create application runtime environments in PureApplication System using its features for cloud groups and environment profiles. It showed how these features relate to cloud computing concepts, described these and related features in PureApplication System, considered scenarios for using these features, and reviewed principles that can be drawn from these scenarios. With this information, you are now prepared to administer the runtime environments in your PureApplication System.

Acknowledgements

The author would like to thank the following IBMers for their help with this article: Vishy Gadepalli, Stanley Tzue-Ing Shieh, Michael Fraenkel, Shaun Murakami, Jason Anderson, Ajay Apte, Kyle Brown, Rohith Ashok, and Hendrik van Run.


Translated from: https://www.ibm.com/developerworks/websphere/library/techarticles/1210_woolf/1210_woolf.html
