
Distributed Cloud Computing and Distributed Parallel Computing: A Review

Abstract:

In this paper, we present a discussion of two of the hottest topics in this area, namely distributed parallel processing and distributed cloud computing. Various aspects are covered in this review, such as whether these two topics have been discussed together in any previous works. Other aspects reviewed in this paper include the algorithms that have been simulated in both distributed parallel computing and distributed cloud computing. The goal is to process tasks over the available resources and then readjust the calculation among the servers for the sake of optimization, which helps to improve system performance to the desired rates. In our review, we present some articles that explain the design of applications in distributed cloud computing, while others introduce the concept of decreasing the response time in distributed parallel computing.
Published in: 2018 International Conference on Advanced Science and Engineering (ICOASE)
Date of Conference: 09-11 October 2018
Date Added to IEEE Xplore: 29 November 2018
DOI: 10.1109/ICOASE.2018.8548937
Publisher: IEEE
Conference Location: Duhok, Iraq

SECTION I. Introduction

Cloud computing, a recently growing technology in which the scheduling mechanism is a vital part, allows for the processing of huge amounts of data [1].

One field where we increasingly see cloud computing is high energy physics (HEP). In this study, we explore the reasons why clouds are integrated into HEP applications and how this is gradually becoming more common [2].

Cloud computing is a broad term generally referring to hosted services. It is the virtualization of physical hardware, where data is organized in specified centres. Yet as a new technology with several merits, it is not without its issues; a major challenge is load balancing [3].

This article proposes a robot SLAM architecture designed to fulfil the real-time requirement of practical robot systems, which is essential. The robot SLAM adopts two parallel processing threads to fulfil this role. Because the computational complexity is dominantly determined by the number of particles employed, two distributed threads with different particle counts are executed simultaneously [4].

In cloud computing services, virtual clusters allow virtualized resources to be allocated dynamically. User management and virtual storage are among the commonest uses of the cloud environment by IT and business companies. In cloud services, job organization is the entry point, and its arrangement is vital for the competence of the whole cloud service. Selecting appropriate resources for job execution is the mechanism of job implementation [5].

Large-scale industrial processing of big data entails monitoring and modelling issues, and in order to deal with this, a distributed and parallel principal component analysis (PCA) approach is proposed. The large-scale process is initially decomposed into distributed blocks based on a priori process knowledge; this solves the problem of high-dimensional process variables [6].
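
As a rough, illustrative sketch of this block-decomposition idea (not the authors' implementation), the Python snippet below splits a data matrix into variable blocks and fits an independent PCA model to each block; the hard-coded block boundaries stand in for the a priori process knowledge.

```python
import numpy as np

def block_pca(X, blocks, n_components=2):
    """Fit an independent PCA model to each block of process variables.

    X      : (n_samples, n_variables) data matrix
    blocks : list of column-index lists, one per distributed block
    Returns a dict mapping block id -> (mean, principal axes).
    """
    models = {}
    for b, cols in enumerate(blocks):
        Xb = X[:, cols]
        mean = Xb.mean(axis=0)
        # PCA via SVD of the mean-centred block
        _, _, vt = np.linalg.svd(Xb - mean, full_matrices=False)
        models[b] = (mean, vt[:n_components])
    return models

# Toy example: 12 process variables split into three blocks of four
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
blocks = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
models = block_pca(X, blocks)
print({b: axes.shape for b, (_, axes) in models.items()})
```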

In a cloud system, some nodes can be lightly loaded while others are heavily loaded [7], resulting in poor performance. In a cloud environment, distributing the load between the nodes is the function of load balancing, which is the spotlight of problems in cloud computing [8].

An even load distribution in the cloud system leads to better resource utilization and is much desired [7].

In order to balance the total load on the system and ensure good overall performance relative to some specific metric of system performance, a load balancing algorithm [9] [10] [11] transparently transfers the workload from heavily loaded nodes to lightly loaded nodes. When performance is considered from the user's point of view, the metric is the response time of the processes; when it is considered from the resource point of view, the metric is total system throughput [12]. The function of throughput is to ensure fair treatment of all the users and that they make progress [9] [10] [12] [11] [8] [7], in contrast to response time [10].
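
As a minimal sketch of what such an algorithm does, assuming a toy representation in which each node simply holds a list of job costs, the snippet below repeatedly moves a job from the heaviest node to the lightest one while doing so still narrows the load gap.

```python
def rebalance(nodes):
    """Greedy rebalancing: repeatedly move a job from the heaviest node to
    the lightest one, but only while the move actually narrows the gap.
    `nodes` maps node id -> list of job costs."""
    def load(n):
        return sum(nodes[n])

    while True:
        heavy = max(nodes, key=load)
        light = min(nodes, key=load)
        gap = load(heavy) - load(light)
        # choose the largest job that still fits inside the gap
        movable = [j for j in nodes[heavy] if j < gap]
        if not movable:
            break
        job = max(movable)
        nodes[heavy].remove(job)      # transparently transfer one job
        nodes[light].append(job)
    return nodes

print(rebalance({"n1": [4, 3, 2], "n2": [1], "n3": []}))
```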

SECTION II. Distributed Cloud Computing

Scientific applications utilizing clouds are seemingly of rising interest among researchers, and at the same time many large corporations are contemplating switching over to hybrid clouds. Parallel processing is needed for the effective execution of jobs by complex applications. In parallel processing, the presence of communication and synchronization allows for more effective use of CPU resources. Thus, overall, maintaining the responsiveness of parallel jobs while achieving effective utilization of nodes is mandatory for a data center [1].

Through cloud computing, a client can request information, shared resources, software and other services at any time, according to his specifications. It is an on-demand service, and the term is commonly seen across the internet; the whole internet can be viewed as a cloud. Not to mention, utilizing the cloud decreases capital and operational costs. However, a major challenge in cloud computing is load balancing, and a distributed solution to this issue is always required. Because of the complexity of the cloud and the widespread distribution of its components, it is difficult to achieve efficient load balancing by assigning jobs to appropriate servers and clients individually, and it is not cost effective or practical to fulfil the required demands by maintaining one or more idle services. While jobs are assigned, some uncertainty is attached [3].

A protocol is thus proposed and designed, the purpose of which is to enhance server throughput and performance, improve resource utilization and reduce switching time. In this protocol, the jobs are scheduled in the cloud, and the drawbacks of the existing protocols are solved. In order to minimize the waiting and switching time, jobs that offer better performance to the computer are given priority. Solving the drawbacks of existing protocols by managing the scheduling of jobs has taken considerable effort, along with improving the throughput and efficiency of the server [1].

Scientific applications utilizing systems based on cloud computing allow for high throughput computing (HTC). Applications in particle physics have systems with a unified infrastructure that utilize a number of distinct IaaS clouds. There are a number of criteria that the cloud computing system is based on. The embarrassingly parallel, single-process HEP applications need to run in a batch environment in the system. No inter-node or inter-process communications are required; however, the memory footprint is reduced by using multi-process jobs that share process memory [2].
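
A hedged illustration of this embarrassingly parallel, batch-style workload (not the actual HEP software) is given below: independent "events" are processed by a pool of worker processes, with no communication between them beyond handing out inputs and collecting results.

```python
from multiprocessing import Pool

def process_event(event_id):
    # Stand-in for an independent HEP event: any pure function of its input.
    return event_id, sum(i * i for i in range(event_id % 1000))

if __name__ == "__main__":
    events = range(10_000)
    with Pool(processes=4) as pool:          # one batch job, several worker processes
        results = pool.map(process_event, events)
    print(len(results), "events processed")
```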

Topology island formation and fast network topology processing for the power network are realized by CIM-based parallel topological processing, which is discussed in this paper. High-throughput, high-reliability and high-availability storage for a cloud storage platform is designed by the authors. The design idea of the MySQL-CIM model is also introduced, and an efficient MySQL-CIM data model is realized through Ogma development. A power network topology processing (NTP) application is developed [13].

Cloud computing is a broad term generally referring to hosted services. It is the virtualization of physical hardware, where data is organized in specified centres. Yet as a new technology with several merits, it is not without its issues; a major challenge is load balancing, an important topic in cloud computing requiring much research and study. Since many systems are involved in the structure of a data centre, it is difficult to perform load balancing, especially in the case of cloud computing. The majority of research on the subject has been done in distributed environments, yet semi-distributed load balancing has been the target of very little research. Load balancing in a semi-distributed way would allow the design of new algorithms for cloud computing [3].

The merits of decentralized computing away from data centres, along with the consideration of using infrastructure obtained from multiple providers and changing the cloud infrastructure, are discussed in this paper. A new architecture for computing is demanded by these new trends and needs to be fulfilled by cloud infrastructure in the future. Self-learning systems, service spaces, data-intensive computing and people-device connections are the areas expected to be most influenced by these new architectures. Finally, in order to realize the next-generation cloud system's potential, a roadmap of the challenges that need to be considered is proposed [14].

Creating ad hoc clouds and harnessing computing for online and mobile applications at the edge of the network, using computing models based on voluntarily provided resources, is discussed in this paper. The idea of paying for a cloud VM even if the server executing on the VM is idle is a traditional notion, and a computing model is presented in order to replace it. An emerging cloud computing model that integrates resilience and is software defined is also mentioned. Several areas will be influenced by the newly forming computing architectures and the changing cloud infrastructure. The Internet-of-Things paradigm will be eased, further enhancing the connectivity between people and devices, and new architectures will play a vital role. The volume of data provides a challenge in data-intensive computing, and novel techniques are needed to address it. There will be rising interest in new services such as acceleration, containers and functions. Self-learning systems will be realized when research areas converge with cloud systems. Academia and industry are the leading forces in these changes, yet many challenges need to be solved in the future. The development of next-generation cloud computing with sustainable systems and efficient management, expressing applications and improving security are discussed in this paper, along with their approaches and directions [14].

Developments in grid computing, distributed computing and parallel computing are part of cloud computing. Firstly, two traditional parallel programming models are introduced in this paper, along with an exposition of the concept of cloud computing. Secondly, the principles, advantages and disadvantages of OpenMP, MPI and MapReduce are analysed and studied, respectively. Finally, the MPI and OpenMP models and MapReduce are discussed and compared from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing [15].
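
To make the MapReduce side of this comparison concrete, here is a minimal single-machine word-count sketch of the map and reduce phases; it illustrates the programming model only and is not MPI, OpenMP or a real Hadoop job.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit (word, 1) for every word in one input split.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts emitted for each key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

documents = ["cloud computing and parallel computing",
             "distributed cloud computing"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(mapped))
```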

Great emphasis is placed on distributed computing in this paper. The differences between distributed and parallel computing have been studied as well, along with terminologies, task allocation, performance parameters, the advantages and scope of distributed computing, and parallel distributed algorithm models [16].

SECTION III. Distributed Parallel Processing

In a cloud system, some nodes can be lightly loaded while others are heavily loaded, resulting in poor performance. In a cloud environment, distributing the load between the nodes is the function of load balancing, which is the spotlight of problems in cloud computing [7].

The quality of service needs to meet the standards regarding job arrangement to keep the virtual users satisfied. Decreasing resource price, minimizing makespan and safeguarding error tolerance along with quality of service are used to improve resource allotment and job arrangement. The CloudSim toolkit has been used with the existing scheduling policies to evaluate the proposed algorithms. Early experimental results have shown better results for the proposed framework in terms of user response, execution time and cost, and time on different cloud workloads, compared to the already existing algorithms. Different virtual machine scheduling algorithms, along with their performance based on various quality metrics, are discussed in this paper. The approach depends on CPU performance, network consumption, scheduling success rate, standard implementation time and so on.

We have shed some light on processing user requests by utilizing caching in the network itself. By making use of parallel processing, a new caching strategy was introduced, and its performance was extensively evaluated in terms of reducing redundant traffic and data access delay in different caching scenarios.

The cost of implementing the aforementioned caching network has also been studied. The improved performance obtained by decreasing delay comes at the price of increased cost, and when implementing the proposed parallel processing, this trade-off needs to be carefully considered. The new strategy has shown marked effectiveness in the simulation results [17].

In order to improve the real-time performance of particle filter (PF) based robot SLAM, an effective parallel implementation has been proposed in this paper. The acceleration of the overall SLAM algorithm can be increased by the discussed distributed parallel idea, because a large number of particles is used only when a keyframe laser scan is grabbed. The results obtained from the experiments verify that the temporal cost can be cut effectively using this distributed architecture [4].
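
A loose sketch of this two-thread idea follows, with an assumed structure rather than the authors' code: a lightweight update with few particles runs for every scan, while a heavier update with many particles is handed to a second thread only when a keyframe scan arrives.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def particle_update(scan, n_particles):
    # Stand-in for one particle-filter update; cost grows with n_particles.
    weights = [random.random() for _ in range(n_particles)]
    return scan, sum(weights) / n_particles

executor = ThreadPoolExecutor(max_workers=1)   # the second, heavier thread
pending = []

for scan_id in range(20):
    is_keyframe = scan_id % 5 == 0
    # Fast thread: small particle set, runs on every scan.
    particle_update(scan_id, n_particles=50)
    if is_keyframe:
        # Slow thread: large particle set, only for keyframe scans.
        pending.append(executor.submit(particle_update, scan_id, 5000))

for fut in pending:
    print("keyframe update finished:", fut.result()[0])
executor.shutdown()
```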

P. Srinivasa Rao et al. in [18] pointed out that, to balance the load effectively, a node needs status information about all the other nodes. When a node receives a job, it has to query the status of the other nodes to find out which node has less usage so that it can forward the work there, and when all the nodes are querying each other in this way, overload happens. Having the nodes broadcast statements about their status would also cause a huge load on the network. Next is the issue of the time wasted at each node to perform the status queries. Besides, the current state of the network is also a factor affecting load balancing performance. This is because, in a complex network with multiple subnets, configuring a node to locate all the other nodes is a fairly complex task. Thus, querying the status of nodes in the cloud will affect load balancing performance [18].

Reference [19] showed that factors like response time greatly affect load balancing performance on the cloud. The study outlined two outstanding issues of previous algorithms: i) load balancing occurs only when the server is overloaded; ii) continuously retrieving information about available resources leads to increased computational cost and bandwidth consumption. The authors therefore proposed an algorithm, based on the response time of the requests, to assign the required decisions to servers appropriately; this approach reduces the querying of information on available resources and reduces the communication and computation on each server.
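
A minimal sketch of such a response-time-driven policy, with hypothetical data structures, is shown below: each incoming request is routed to the server whose recent average response time is lowest, so node status does not have to be queried at request time.

```python
from collections import deque

class ResponseTimeBalancer:
    """Route each request to the server with the lowest recent mean response time."""

    def __init__(self, servers, window=10):
        self.history = {s: deque([0.0], maxlen=window) for s in servers}

    def pick_server(self):
        return min(self.history,
                   key=lambda s: sum(self.history[s]) / len(self.history[s]))

    def record(self, server, response_time):
        # Observed response times feed back into future routing decisions.
        self.history[server].append(response_time)

lb = ResponseTimeBalancer(["s1", "s2", "s3"])
lb.record("s1", 0.30)
lb.record("s2", 0.05)
lb.record("s3", 0.20)
print(lb.pick_server())   # -> "s2", currently the fastest responder
```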

The Min-Min algorithm [20] minimises the time to complete the work in each network node; however, the algorithm does not consider the workload of each resource. Therefore, the authors proposed the Load Balance Improved Min-Min (LBIMM) algorithm to overcome this weakness. Failure to consider the workload of each resource leads to a number of resources being overloaded while others are idle; therefore, the work done in each resource is a factor affecting load balancing performance on the cloud. The simple traditional Min-Min algorithm is the basis of the current scheduling algorithms in cloud computing.
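
For reference, here is a compact sketch of the classical Min-Min rule that LBIMM starts from (the rebalancing step LBIMM adds is omitted): at each step, the task with the smallest minimum completion time is assigned to the resource that achieves it.

```python
def min_min(exec_time, n_resources):
    """exec_time[t][r] = execution time of task t on resource r.
    Returns a task -> resource assignment following the Min-Min rule."""
    ready = [0.0] * n_resources               # when each resource becomes free
    unscheduled = set(range(len(exec_time)))
    schedule = {}
    while unscheduled:
        # Completion time of every (task, resource) pair still open.
        finish, task, res = min((ready[r] + exec_time[t][r], t, r)
                                for t in unscheduled for r in range(n_resources))
        schedule[task] = res
        ready[res] = finish
        unscheduled.remove(task)
    return schedule

exec_time = [[3, 5], [1, 2], [4, 1]]          # 3 tasks, 2 resources
print(min_min(exec_time, 2))
```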

Kapur in [21] proposed the LBRS (Load Balanced Resource Scheduling) algorithm to consider the importance of resource scheduling policies and load balancing for resources in the cloud. The main goals are to maximise CPU utilisation, maximise throughput, minimise response time, minimise waiting time, minimise turnaround time, minimise resource cost and obey the fairness principle. Here, the QoS parameters mentioned are throughput, response time and waiting time. Data on the impact of these parameters on the cloud when performing load balancing were simulated and analysed. From there, it was discovered that the makespan (runtime) parameter is of great significance for the data centre cloud. So, the task of the researchers is to study algorithms with effective load balancing that reduce the makespan of virtual machines.

To achieve optimal resource utilization, the dynamic workload is distributed by load balancing across multiple resources, which prevents any single resource from being underutilized or overwhelmed; this, however, is a considerable optimization problem. A strategy for load balancing based on Simulated Annealing (SA) has been proposed in this paper, and balancing the cloud infrastructure load is its primary function. A traditional Cloud Analyst simulator is modified and the effectiveness of the algorithm is measured. In comparison to existing approaches, like First Come First Serve (FCFS), local search algorithms such as Stochastic Hill Climbing (SHC), and Round Robin (RR), the proposed algorithm has shown a better overall performance [22].
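
The following is a small, self-contained sketch of an SA-style assignment with illustrative parameters only, not the paper's exact strategy: a random task-to-VM mapping is perturbed one task at a time, and worse mappings are occasionally accepted with a temperature-controlled probability so the search can escape local optima.

```python
import math
import random

def makespan(assign, task_len, n_vms):
    loads = [0.0] * n_vms
    for t, vm in enumerate(assign):
        loads[vm] += task_len[t]
    return max(loads)

def sa_balance(task_len, n_vms, temp=10.0, cooling=0.95, steps=2000):
    assign = [random.randrange(n_vms) for _ in task_len]
    cost = makespan(assign, task_len, n_vms)
    for _ in range(steps):
        t = random.randrange(len(task_len))
        candidate = assign[:]
        candidate[t] = random.randrange(n_vms)       # move one task to another VM
        new_cost = makespan(candidate, task_len, n_vms)
        # Accept improvements always, worse moves with Boltzmann probability.
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / temp):
            assign, cost = candidate, new_cost
        temp *= cooling
    return assign, cost

random.seed(1)
tasks = [random.randint(1, 20) for _ in range(30)]
print(sa_balance(tasks, n_vms=4)[1], "vs single-VM makespan", sum(tasks))
```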

The authors tried to maximise the utilisation of resources to keep working resources available for tasks that are yet to come, and also concentrated on the reliability of cloud services. They propose a new scheduling algorithm called the Dabbawala cloud scheduling algorithm, based on the Mumbai Dabbawala delivery system. In the proposed system, the tasks are grouped according to the cost required to complete them in a cluster and its VM resources. The lowest-cost cluster and its VM are found for each requested task, and the tasks are grouped together for getting services as in the Hadoop MapReduce model. There are two phases, called mapping the tasks and reducing the mapped tasks. The algorithm employs four Dabbawala for each task to get serviced. This algorithm is compared with some available scheduling algorithms, and considerable gains in time and resource utilisation are achieved [23].

SECTION IV. Existing Load Balancing Technique in Cloud

A. VectorDot
VectorDot is an innovative algorithm proposed by A. Singh et al. [18] for load balancing. It utilizes a flexible data centre, with storage virtualization and integrated server technologies, to handle the multidimensional resource loads that are distributed across network switches, servers and storage, as well as the hierarchical complexity of the data centre. VectorDot helps relieve overloads on storage nodes, switches and servers, and at the same time distinguishes between nodes using the dot product of their loads with the item requirements.
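
The scoring idea can be illustrated as follows (a simplified sketch, not the full VectorDot algorithm): each node's multidimensional utilisation is dotted with the item's requirement vector, and the node with the smallest product, i.e. the one least loaded in the dimensions the item needs, is preferred.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pick_node(node_usage, item_req):
    """node_usage: node -> utilisation vector (cpu, mem, net, storage), each in [0, 1].
    item_req: the item's resource-requirement vector.
    Prefer the node whose current usage is least aligned with what the item needs."""
    return min(node_usage, key=lambda n: dot(node_usage[n], item_req))

nodes = {"A": (0.9, 0.2, 0.1, 0.3),
         "B": (0.3, 0.3, 0.2, 0.2),
         "C": (0.1, 0.8, 0.6, 0.4)}
cpu_heavy_item = (0.7, 0.1, 0.1, 0.1)
print(pick_node(nodes, cpu_heavy_item))   # -> "C" here, since its CPU is least loaded
```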

B. LB of VM resources scheduling strategy
A scheduling strategy that utilizes the current state of the system and historical data for load balancing of VM resources was proposed by J. Hu et al. [9]. By implementing a genetic algorithm, this strategy reduces dynamic migration and accomplishes the best load balancing. It achieves better resource utilization by dealing with the issues of high migration cost and load imbalance.

C. Task Scheduling Based on LB
A mechanism to gain high resource utilization and satisfy the users' dynamic requirements, based on load balancing with two-level task scheduling, is discussed by Y. Fang et al. [11]. It maps tasks to virtual machines and then to host resources, accomplishing load balancing and resulting in a cloud computing environment with better resource utilization, improved task response time and an overall gain in performance.

D. Active Clustering
Optimizing job assignments by using local re-wiring to connect similar services is the self-aggregating technique for load balancing that M. Randles et al. [9] investigated. Using resources effectively in a high-resource system leads to increased throughput, bettering the performance of the system. However, performance degrades as system diversity increases.

E. Cloud Load Balancing Metrics
Various metrics considered in existing load balancing techniques in cloud computing are discussed below:

The number of executed tasks is measured using throughput, and a high number indicates good system performance.

When applying an algorithm for load balancing, the overhead involved is measured by the overhead associated. Inter-process and inter-processor communication and task mobilization compose this overhead, and the more efficient a load balancing technique is, the less overhead is involved.

Load balancing needs to have a good technique for fault tolerance, which is the ability of an algorithm to do uniform load balancing despite link or arbitrary node failure.

Good-performance systems have minimized migration time, which is the time needed to migrate resources or jobs between individual nodes; the less, the better.

Response time is another parameter which, if minimized, leads to better system performance; it is the time needed for a particular load balancing algorithm to respond in a distributed system.

Effective resource utilization is mandatory for efficient load balancing, and it should be optimized.

The scalability of an algorithm is determined by its ability to perform load balancing for any finite number of nodes in a system. Enhanced scalability is desired.

The effectiveness of a system is measured by its performance, yet the cost effectiveness needs to be considered and kept reasonable. An example would be keeping acceptable delays while decreasing task response times [24].
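
To make these metrics concrete, the sketch below computes throughput, average response time and total migration time from a hypothetical trace of completed tasks; the field names and the trace itself are assumptions for illustration.

```python
def summarize(trace, horizon):
    """trace: list of dicts with 'submit', 'finish' and 'migration' times (seconds).
    horizon: length of the observation window in seconds."""
    completed = [t for t in trace if t["finish"] <= horizon]
    throughput = len(completed) / horizon                       # tasks per second
    avg_response = sum(t["finish"] - t["submit"] for t in completed) / len(completed)
    total_migration = sum(t["migration"] for t in completed)
    return throughput, avg_response, total_migration

trace = [{"submit": 0, "finish": 4, "migration": 0.5},
         {"submit": 1, "finish": 3, "migration": 0.0},
         {"submit": 2, "finish": 9, "migration": 1.2}]
print(summarize(trace, horizon=10))
```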

SECTION V. Discussion

Distributed cloud computing is a new technology for interconnecting data and applications served from different locations. In information technology, the term 'distributed' means that something is shared among multiple users or systems that are geographically dispersed. As shown in Table I, there are several features that can be gained from using distributed cloud computing, and each feature has an effect on the use of cloud technology.

An important feature mentioned in more than one reference is the multi-process job feature, because an important aim of cloud computing is processing multiple jobs at the same time via more than one server, even if the servers are in different locations. When there is a large amount of data to process, it can be divided into small pieces, and each part may be processed by a different server, as sketched below. The aim of this process is to decrease CPU usage, minimise switching time, minimise the waiting time for processing data, improve server throughput and improve the performance of data communication and computation.
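
A rough sketch of this divide-and-distribute idea, with assumed helper names and worker processes standing in for separate servers, is:

```python
from concurrent.futures import ProcessPoolExecutor

def split(data, n_parts):
    """Divide the data into n_parts roughly equal chunks."""
    size = -(-len(data) // n_parts)        # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_chunk(chunk):
    # Stand-in for the work one server would do on its share of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, n_parts=4)
    with ProcessPoolExecutor(max_workers=4) as pool:   # four "servers"
        partials = list(pool.map(process_chunk, chunks))
    print(sum(partials))
```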

Another feature is designing applications for the cloud structure, which helps users easily use any cloud application to communicate with different users in different locations. One more feature is reducing the usage of local memory: storage used to be a problem, but now, with the cloud, users can obtain as much memory as they need by contacting the cloud application manager to expand it; in this case users only use cloud memory and reduce their own storage.

The last important feature is improving server performance, because the performance of communication is important. After reviewing all the references mentioned in Table I, we decided that reference [1] is the best work on distributed cloud computing because it covers a great number of the features that we discussed in detail above.

Improving the performance of digital computers and their other attributes (like cost effectiveness, reliability and so on) by means of various forms of concurrency is the concern of parallel processing, which achieves this through various algorithmic and architectural methods. There are three types of parallel processing approaches: distributed, shared and hybrid memory systems. In this review, we focused on distributed parallel processing and determined some important features, as shown in Table II.

TABLE I. Distributed Cloud Computing Summary
The important feature mentioned in more than one reference is improving performance using a load balancing technique. By using load balancing among servers, we can distribute the processes, balance the servers processing the jobs and improve the performance of the distributed system. Another feature is minimising resource cost, because when the load is divided among servers, resource costs such as CPU, memory and storage can be minimised. All of the references implement this idea by proposing an algorithm for distributed parallel processing based on the response time to user requests, because a system with a minimum response time is better for serving user requests.

After reviewing the references in this paper, we decided that reference [21] is better because it covers a great number of features including load balancing for improving system performance and minimising both response time and resource cost.

TABLE II. Distributed Parallel Processing Summary

SECTION VI. Conclusion

This review paper has covered many ideas in distributed cloud computing and distributed parallel computing. Subjects such as the combination of both areas have been given attention in this review paper. The main goal of this paper relates to the process of distributing workloads over servers and then processing them among master and slave nodes. The articles discussed in this paper include the methodology of designing applications in distributed cloud computing and the concept of optimizing response times while executing users' images.

References
[1] L. Tripathy and R. R. Patra, "Scheduling in Cloud Computing", International Journal on Cloud Computing: Services and Architecture (IJCCSA), vol. 4, no. 5, October 2014.
[2] R. J. Sobie, "Distributed Cloud Computing in High Energy Physics", DCC '14: Proceedings of the 2014 ACM SIGCOMM Workshop on Distributed Cloud Computing, pp. 17-22, August 2014.
[3] P. A. Pawade and V. T. Gaikwad, "Semi-Distributed Cloud Computing System with Load Balancing Algorithm", 2014.
[4] Xiuzhi Li, Songmin Jia, Ke Wang and Xiaolin Yin, "Distributed Parallel Processing of Mobile Robot PF-SLAM", International Conference on Automatic Control and Artificial Intelligence (ACAI 2012), Xiamen, China, April 2013.
[5] A. K. Indira and B. M. K. Devi, "Effective Integrated Parallel Distributed Processing Approach in Optimized Multi-cloud Computing Environment", Sixth International Conference on Advanced Computing (ICoAC), pp. 17-19, Chennai, India, Dec. 2014.
[6] J. Zhu, Z. Ge and Z. Song, "Distributed Parallel PCA for Modeling and Monitoring of Large-scale Plant-wide Processes with Big Data", IEEE Transactions on Industrial Informatics, vol. 13, no. 4, pp. 1877-1885, Aug. 2017.
[7] A. Khiyati, M. Zbakh, H. El Bakkali and D. El Kettani, "Load Balancing Cloud Computing: State of Art", 2012 National Days of Network Security and Systems, pp. 20-21, Marrakech, Morocco, April 2012.
[8] M. A. Vouk, "Cloud Computing – Issues, Research and Implementations", ITI 2008 - 30th International Conference on Information Technology Interfaces, pp. 23-26, Dubrovnik, Croatia, June 2008.
[9] C. Lin, H. Chin and D. Deng, "Dynamic Multiservice Load Balancing in Cloud-Based Multimedia System", IEEE Systems Journal, vol. 8, pp. 225-234, March 2014.
[10] Y. Deng and R. W. H. Lau, "On Delay Adjustment for Dynamic Load Balancing in Distributed Virtual Environments", IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 4, April 2012.
[11] L. D. D. Babua and P. V. Krishna, "Honey bee behavior inspired load balancing of tasks in cloud computing environments", Applied Soft Computing, Amsterdam, The Netherlands, vol. 13, no. 5, pp. 2292-2303, May 2013.
[12] D. Warneke and O. Kao, "Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud", IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 6, June 2011.
[13] X. L. Xingong and X. Lv, "Distributed Cloud Storage and Parallel Topology Processing of Power Network", Third International Conference on Trustworthy Systems and Their Applications, pp. 18-22, Wuhan, China, Sept. 2016.
[14] B. Varghese and R. Buyya, "Next Generation Cloud Computing: New Trends and Research Directions", Future Generation Computer Systems, September 2017.
[15] Z. Peng, Q. Gong, Y. Duan and Y. Wang, "The Research of the Parallel Computing Development from the Angle of Cloud Computing", IOP Conference Series: Journal of Physics: Conference, 2017.
[16] Md. F. Ali and R. Zaman Khan, "Distributed Computing: An Overview", International Journal of Advanced Networking and Applications, vol. 07, no. 01, pp. 2630-2635, 2015.
[17] Y. Sun, Z. Zhu and Z. Fan, "Distributed Caching in Wireless Cellular Networks Incorporating Parallel Processing", IEEE Internet Computing, vol. 22, no. 1, pp. 52-61, Feb. 2018.
[18] P. Srinivasa Rao, V. P. C. Rao and A. Govardhan, "Dynamic Load Balancing With Central Monitoring of Distributed Job Processing System", International Journal of Computer Applications, vol. 65, no. 21, March 2013.
[19] A. Sharma and S. K. Peddoju, "Response Time Based Load Balancing in Cloud Computing", 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), July 2014.
[20] H. Chen, F. Wang, N. Helian and G. Akanmu, "User-Priority Guided Min-Min Scheduling Algorithm for Load Balancing in Cloud Computing", 2013 National Conference on Parallel Computing Technologies (PARCOMPTECH), Bangalore, India, Feb. 2013.
[21] R. Kapur, "A Workload Balanced Approach for Resource Scheduling in Cloud Computing", 2015 Eighth International Conference on Contemporary Computing (IC3), pp. 20-22, Noida, India, Aug. 2015.
[22] B. Mondal and A. Choudhury, "Simulated Annealing (SA) based Load Balancing Strategy for Cloud Computing", International Journal of Computer Science and Information Technologies, vol. 6, pp. 3307-3312, 2015.
[23] S. K. S. Kumar and P. Balasubramanie, "Cloud Scheduling Using Mumbai Dabbawala", International Journal of Computer Science and Mobile Computing, vol. 4, no. 10, October 2015.
[24] N. J. Kansal and I. Chana, "Existing Load Balancing Techniques in Cloud Computing: A Systematic Review", Journal of Information Systems and Communication, vol. 3, no. 1, pp. 87-91, 2012.
