Horizontal Decoupling of Cloud Orchestration for Stabilizing Cloud Operation and Maintenance




In a plain and understandable desire to achieve economies of scale, a cloud orchestration software system should be capable of managing a huge farm of hardware servers. However, even with the most advanced software configuration and management tools, the field has learned by trial and error that the distribution scale of a cloud orchestrator must not be too large. For example, VMware, probably among the most experienced players in the trade, stipulates a rule-of-thumb upper bound for its orchestrator vRealize: no more than 1,000 servers per vRealize instance, even when the software is installed on top-quality hardware. Scaling up beyond that level makes cloud operation and maintenance unstable and incurs sharp increases in operation and maintenance costs. Recent achievements in highly efficient CPU virtualization by Docker have ignited further orders-of-magnitude growth in the number of micro-servicing CPUs, which only adds to the worsening scalability of cloud orchestration. The current poor scalability of cloud orchestration means that today's clouds exist in small, isolated scatters and patches, and therefore cannot efficiently tap the potential of economies of scale.

The essential problem behind poor scalability in cloud orchestration is that all cloud orchestrators, whether commercial offerings or open-source projects, conventionally evolve from a horizontally tightly coupled architecture. A horizontally tightly coupled orchestrator is a collection of software components that are interwoven by host knowledge. By "interwoven by host knowledge" we mean that the software components of a cloud orchestrator know the existence, roles, and duties of one another from the moment they are installed on a farm of server hosts, and throughout the rest of their lifecycles. When a farm grows large, some queues of events and messages inevitably become long; write-lock mechanisms for consistency protection and copy-on-write database accesses also gather momentum and slow down responsiveness; and an occasional failure, even a benign timeout, at one point in the farm is highly likely to pull down other knowledge-interwoven parts. As a matter of fact, every cloud service or hosting provider of any size has to rely on human operation and maintenance teams standing guard over the farm 7x24, playing a role much like that of firefighters!

DaoliCloud presents Network Virtualization Infrastructure (NVI), a technology that horizontally decouples cloud orchestration. NVI shrinks a cloud orchestration region down to a single hardware server, e.g., an OpenStack all-in-one installation. An orchestrator managing only one server host naturally has no knowledge whatsoever of any other orchestrator managing another server; thus, no server host in an NVI farm holds any software knowledge about any other host in the farm. While this obviously maximizes stability for cloud operation and maintenance, the overlay cloud resources pooled by NVI retain unbounded scalability, because NVI connects overlay nodes across orchestrators in user mode, and only when one node initiates communication with another (think of an HTTP connection!). NVI can connect various virtual CPUs across independent and heterogeneous cloud orchestrators, e.g., lightweight micro-servicing Docker containers and heavy-duty hypervisor VMs independently orchestrated by Kubernetes and OpenStack, respectively. Moreover, NVI can transparently link different cloud service providers, also in user mode.
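As an illustration only, here is a minimal Python sketch of the on-demand, user-mode connection idea described above. Nothing in it is DaoliCloud code: the registry, record fields, and function names are hypothetical placeholders. The point it makes is simply that forwarding state between two hosts is created at the moment one overlay node initiates communication with another, so no host needs prior knowledge of any other host.

```python
from dataclasses import dataclass

@dataclass
class OverlayNode:
    """What one all-in-one orchestrator knows about one of its own overlay nodes.
    Field names are illustrative, not DaoliCloud's actual schema."""
    name: str
    overlay_ip: str
    host_underlay_ip: str   # the single server host this orchestrator manages
    host_underlay_mac: str

# A user-mode lookup service (think of it as DNS-like) holding the records that
# mutually unaware orchestrators publish about their own nodes only.
REGISTRY = {}

def publish(node: OverlayNode) -> None:
    """Each orchestrator publishes only its own nodes; it learns nothing about others."""
    REGISTRY[node.name] = node

def connect(src_name: str, dst_name: str) -> list:
    """Invoked only when src initiates communication to dst (think of an HTTP request).
    Until this moment, neither host holds any state about the other."""
    src, dst = REGISTRY[src_name], REGISTRY[dst_name]
    return [
        f"on {src.host_underlay_ip}: forward {src.overlay_ip} -> {dst.overlay_ip} "
        f"toward underlay host {dst.host_underlay_ip}",
        f"on {dst.host_underlay_ip}: deliver {src.overlay_ip} -> {dst.overlay_ip} "
        f"to local node {dst.name}",
    ]

if __name__ == "__main__":
    # Two independent all-in-one orchestrators, e.g. OpenStack in Beijing and
    # Kubernetes in Virginia, each publish their own node; forwarding state is
    # created only when vm-a actually talks to container-b.
    publish(OverlayNode("vm-a", "10.0.0.5", "192.168.1.10", "52:54:00:aa:00:01"))
    publish(OverlayNode("container-b", "10.0.0.6", "203.0.113.7", "52:54:00:bb:00:02"))
    for action in connect("vm-a", "container-b"):
        print(action)
```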

The key enabler that allows two orchestrators that do not know one another to serve user-mode connections for their respectively orchestrated overlay nodes is a novel OpenFlow formulation for forwarding trans-orchestrator underlay packets. This new SDN formulation constructs overlay networks of any OSI layer and any form without any packet encapsulation, i.e., without using any of the trans-host-network protocols such as VLAN, VXLAN, VPN, MPLS, GRE, NVGRE, LISP, STT, Geneve, or any others we have missed in this enumeration! Having avoided trans-host packet encapsulation, there is of course no need for the orchestrators involved to know one another in host mode, neither at installation time nor at any later point in their lifecycles. It is by this simple principle that the SDN innovation of NVI achieves complete horizontal decoupling of cloud orchestration. With connections taking place only in user mode, cloud deployment, operation, maintenance, system upgrading, and so on can become 100% automated. It is now also plain that the NVI technology supports inter-cloud patching, likewise in user mode.
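The exact NVI flow formulation is not spelled out in this abstract, so the sketch below shows only one generic way that OpenFlow header rewriting can stand in for encapsulation: an egress rule on the sending host rewrites the destination MAC to the peer host's underlay MAC, and an ingress rule on the receiving host restores the local node's MAC, so the packet crosses the physical network without any VXLAN/GRE/Geneve header. The bridge name, addresses, and port numbers are hypothetical, and an L2-reachable underlay is assumed for simplicity.

```python
def egress_flow(bridge, src_overlay_ip, peer_underlay_mac, uplink_port):
    """On the sending host: steer the overlay packet onto the physical network by
    rewriting only its destination MAC to the peer host's underlay MAC
    (assuming an L2-reachable underlay). No tunnel header is added."""
    return (
        f'ovs-ofctl add-flow {bridge} '
        f'"priority=100,ip,nw_src={src_overlay_ip},'
        f'actions=mod_dl_dst:{peer_underlay_mac},output:{uplink_port}"'
    )

def ingress_flow(bridge, dst_overlay_ip, dst_node_mac, dst_node_port):
    """On the receiving host: match the overlay destination IP, restore the
    destination MAC of the local VM/container, and deliver it locally."""
    return (
        f'ovs-ofctl add-flow {bridge} '
        f'"priority=100,ip,nw_dst={dst_overlay_ip},'
        f'actions=mod_dl_dst:{dst_node_mac},output:{dst_node_port}"'
    )

if __name__ == "__main__":
    # In practice a per-host SDN agent would push these rules (e.g. via subprocess
    # or an OpenFlow controller); here we only print them for inspection.
    print(egress_flow("br-nvi", "10.0.0.5", "52:54:00:bb:00:02", 1))
    print(ingress_flow("br-nvi", "10.0.0.6", "fa:16:3e:11:22:33", 2))
```

Because the only state needed is a pair of local flow rules keyed on overlay addresses, each host can install its side of the connection independently, in user mode, without ever learning anything about the other orchestrator's internals.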

With the problem-solving architecture of NVI for truly scalable cloud orchestration, DaoliCloud aims to contribute a new production line to the cloud industry: "Build, ship, and low-cost operate any cloud, at any scale", a new frontier extending the great inspiration of Docker's "Build, ship and run any app, anywhere". With a single, small, fixed size for orchestrator installation and configuration, building, shipping, operating, and maintaining any cloud, private or public, becomes low cost and fast, thanks to that one size and to automation.

The URL http://www.daolicloud.com exposits, in "for dummies" simplicity, a near-product-quality prototype of our new cloud orchestration technology, which horizontally decouples globally located orchestrators. These globally distributed orchestrators, unaware of each other in host mode yet well organized in user mode, are independent all-in-one OpenStack hosts located in Beijing and Shanghai, China, and Virginia, USA. We cordially invite the much-respected reviewers of this abstract, and hopefully many interested trial users in the audience of the forthcoming OpenStack Summit, to sign up at the above URL for a trial. We humbly hope that some trial users will come to appreciate that this new architecture for scalable cloud orchestration indeed enables a number of never-seen-before useful cloud properties, made possible by the architectural innovation both in cloud orchestration as a specific application and in network virtualization as a more general pursuit of knowledge and technology advances.
