Web 2.0 or Cloud Computing?

http://radar.oreilly.com.cn/blog/tim/web-20-and-cloud-computing

Web 2.0 and Cloud Computing

A couple of months ago, Hugh Macleod created a bit of buzz with his blog post The Cloud's Best Kept Secret. Hugh's argument: that cloud computing will lead to a huge monopoly. Of course, a couple of weeks ago, Larry Ellison made the opposite point, arguing that salesforce.com is "barely profitable", and that no one will make much money in cloud computing.

In this post, I'm going to explain why Ellison is right, and yet, for the strategic future of Oracle, he is dangerously wrong.

First, let's take a look at Hugh Macleod's argument:

...nobody seems to be talking about Power Laws. Nobody's saying that one day a single company may possibly emerge to dominate The Cloud, the way Google came to dominate Search, the way Microsoft came to dominate Software.

Monopoly issues aside, could you imagine such a company? We wouldn't be talking about a multi-billion dollar business like today's Microsoft or Google. We're talking about something that could feasibly dwarf them. We're potentially talking about a multi-trillion dollar company. Possibly the largest company to have ever existed.

I imagine many of my friends who work for the aforementioned companies know all about this, and know how VAST the stakes are.

Windows vs Apple? Who cares? Kid's stuff. There's a much bigger game going on... And for some reason, its utter enormity seems to be a very well-kept secret, at least to non-combatants like myself.

The problem with this analysis is that it doesn't take into account what causes power laws in online activity. Understanding the dynamics of increasing returns on the web is the essence of what I called Web 2.0. Ultimately, on the network, applications win if they get better the more people use them. As I pointed out back in 2005, Google, Amazon, ebay, craigslist, wikipedia, and all the other Web 2.0 superstar applications have this in common.

Cloud computing, at least in the sense that Hugh seems to be using the term, as a synonym for the infrastructure level of the cloud as best exemplified by Amazon S3 and EC2, doesn't have this kind of dynamic. (More on different types of cloud computing later.)

Of course, it is true that the bigger players will have economies of scale in the cost of equipment, and especially in the cost of power, that are not available to smaller players. But there are quite a few big players -- Google, Microsoft, and Amazon, to name a few -- that are already at that scale, with or without a cloud computing play. What's more, economies of scale are not the same as increasing returns from user network effects. They may be characteristic of a commoditizing marketplace that does not actually give outsize economic leverage to the winners.

I can't vouch for the authenticity of the following remark, since I heard it secondhand, but it was from a thoughtful, informed source: Jeff Bezos is reported to have said that he welcomes cloud competition from Google and Microsoft, because they'll subsidize their cloud services with profits from other parts of their business, while Amazon will always have to make it pay. "We're good at commodity businesses," Jeff is reported to have said, and the facts bear him out.

If cloud computing is a commodity business, then the outsize profits that Hugh envisioned are not going to be there. This is a business that will be huge, but it may be more similar to the web hosting and ISP markets, which are also huge, but not hugely profitable. (See Rackspace's numbers for a taste.)

But because one of my goals at Radar is to help people think about the future, I wanted to spend some time on the possible futures and strategies that could turn cloud computing into the kind of massive monopoly that Hugh envisioned.

Types of Cloud Computing

Since "cloud" seems to mean a lot of different things, let me start with some definitions of what I see as three very distinct types of cloud computing:

  1. Utility computing. Amazon's success in providing virtual machine instances, storage, and computation at pay-as-you-go utility pricing was the breakthrough in this category, and now everyone wants to play. Developers, not end-users, are the target of this kind of cloud computing. (A minimal sketch of the pay-as-you-go model appears just after this list.)

    This is the layer at which I don't presently see any strong network effect benefits. Other than a rise in Amazon's commitment to the business, neither early adopter Smugmug nor any other AWS customer gets any benefit from the fact that thousands of other application developers now have their work hosted on AWS. If anything, they may be competing for the same resources.

    That being said, to the extent that developers become committed to the platform, there is the possibility of the kind of developer ecosystem advantages that once accrued to Microsoft. More developers have the skills to build AWS applications, so more talent is available. But take note: Microsoft took charge of this developer ecosystem by building tools that both created a revenue stream for Microsoft and made developers more reliant on them. In addition, they built a deep -- very deep -- well of complex APIs that bound developers ever more tightly to their platform.

    So far, most of the tools and higher level APIs for AWS are being developed by third-parties. In the offerings of companies like Heroku, Rightscale, and EngineYard (not based on AWS, but on their own hosting platform, while sharing the RoR approach to managing cloud infrastructure), we see the beginnings of one significant toolchain. And you can already see that many of these companies are building into their promise the idea of independence from any cloud infrastructure vendor.

    In short, if Amazon intends to gain lock-in and true competitive advantage (other than the aforementioned advantage of being the low-cost provider), expect to see them roll out their own more advanced APIs and developer tools, or acquire promising startups building such tools. Alternatively, if current trends continue, I expect to see Amazon as a kind of foundation for a Linux-like aggregation of applications, tools and services not controlled by Amazon, rather than for a Microsoft Windows-like API and tools play. There will be many providers of commodity infrastructure, and a constellation of competing, but largely compatible, tools vendors. Given the momentum towards open source and cloud computing, this is a likely future.

     

  2. Platform as a Service. One step up from pure utility computing are platforms like Google AppEngine and Salesforce's force.com, which hide machine instances behind higher-level APIs. Porting an application from one of these platforms to another is more like porting from Mac to Windows than from one Linux distribution to another.

    The key question at this level remains: do developers on one of these platforms gain any advantage from other developers being on the same platform? force.com seems to me to have some ecosystem benefits, which means that the more developers are there, the better it is for both Salesforce and other application developers. I don't see that with AppEngine. What's more, many of the applications being deployed there seem trivial compared to the substantial applications being deployed on the Amazon and force.com platforms. One question is whether that's because developers are afraid of Google, or because the APIs that Google has provided don't give enough control and ownership for serious applications. I'd love your thoughts on this subject.

     

  3. Cloud-based end-user applications. Any web application is a cloud application in the sense that it resides in the cloud. Google, Amazon, Facebook, twitter, flickr, and virtually every other Web 2.0 application is a cloud application in this sense. However, it seems to me that people use the term "cloud" more specifically in describing web applications that were formerly delivered locally on a PC, like spreadsheets, word processing, databases, and even email. Thus even though they may reside on the same server farm, people tend to think of gmail or Google docs and spreadsheets as "cloud applications" in a way that they don't think of Google search or Google maps.

    This common usage points up a meaningful difference: people tend to think differently about cloud applications when they host individual user data. The prospect of "my" data disappearing or being unavailable is far more alarming than, for example, the disappearance of a service that merely hosts an aggregated view of data that is available elsewhere (say Yahoo! search or Microsoft live maps.) And that, of course, points us squarely back into the center of the Web 2.0 proposition: that users add value to the application by their use of it. Take that away, and you're a step back in the direction of commodity computing.

    Ideally, the user's data becomes more valuable because it is in the same space as other users' data. This is why a listing on craigslist or ebay is more powerful than a listing on an individual blog, why a listing on amazon is more powerful than a listing on Joe's bookstore, why a listing on the first results page of Google's search engine, or an ad placed into the Google ad auction, is more valuable than similar placement on Microsoft or Yahoo!. This is also why every social network is competing to build its own social graph rather than relying on a shared social graph utility.

    This top level of cloud computing definitely has network effects. If I had to place a bet, it would be that the application-level developer ecosystems eventually work their way back down the stack towards the infrastructure level, and the two meet in the middle. In fact, you can argue that that's what force.com has already done, and thus represents the shape of things. It's a platform I have a strong feeling I (and anyone else interested in the evolution of the cloud platform) ought to be paying more attention to.
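To make the utility-computing layer concrete, here is a minimal sketch of the pay-as-you-go model referenced in point 1 above. It uses the present-day boto3 SDK for Python (which postdates this article) and a hypothetical bucket name; it illustrates the model, not Amazon's 2008-era APIs.

    import boto3  # AWS SDK for Python; credentials are read from the environment

    s3 = boto3.client("s3")
    BUCKET = "example-photo-archive"  # hypothetical bucket name

    # Store an object: billed per GB-month of storage and per request.
    with open("cover.jpg", "rb") as f:
        s3.put_object(Bucket=BUCKET, Key="albums/2008/cover.jpg", Body=f)

    # Read it back from any machine with the right credentials;
    # billed per request and per GB transferred out.
    obj = s3.get_object(Bucket=BUCKET, Key="albums/2008/cover.jpg")
    data = obj["Body"].read()

Because the surface area is just a handful of storage and compute calls, swapping providers is relatively easy, which is why the third-party toolchains mentioned above can credibly promise independence from any single cloud infrastructure vendor.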

The Law of Conservation of Attractive Profits

A lot of my thinking about web 2.0 grew directly out of my thinking about open source. My argument in The Open Source Paradigm Shift was that what we learned from the history of the IBM personal computer -- a commodity platform built from off-the-shelf parts -- was that it drained value out of the hardware ecosystem, turning it into a low-margin business. But profits didn't go away. Instead, through something that Clayton Christensen calls "the law of conservation of attractive profits," value migrated elsewhere, from hardware to software, from IBM to Microsoft. Christensen:

When attractive profits disappear at one stage in the value chain because a product becomes modular and commoditized, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage.

I believe strongly that open source and open internet standards are doing the same to traditional software. And value is migrating to a new kind of layer, which we now call Web 2.0, which consists of applications driven not just by software but by network-effects databases driven by explicit or implicit user contribution.

So when Larry Ellison says that cloud computing and open source won't produce many hugely profitable companies, he's right, but only if you look at the pure software layer. This is a lot like saying that the PC wouldn't produce many hugely profitable companies, and looking only at hardware vendors! First Microsoft, and now Google give the lie to Ellison's analysis. The big winners are those who best grasp the rules of the new platform.

So here's the real trick: cloud computing is real. Everything is moving into the cloud, in whole or in part. The utility layer of cloud computing will be just that, a utility, without outsized profits.

But the cloud platform, like the software platform before it, has new rules for competitive advantage. And chief among those advantages are those that we've identified as "Web 2.0", the design of systems that harness network effects to get better the more people use them.

If Oracle isn't playing that game, they will one day be doomed to irrelevance. Perhaps, like hardware giants of the past -- Compaq, say -- they will be absorbed by a bigger company. Or perhaps, like Unisys, they will linger on in specialized markets, too big to go away but no longer on the cutting edge of anything. Or they will understand that it's not the database software that matters, but the data that it holds, and the services that can be built against that data.

The company that creates the right platform for network effects in data may well achieve the scale that Hugh Macleod envisioned.

P.S. I will be doing two panels on cloud computing at the Web 2.0 Summit in San Francisco the week after next, one on the application layer, and one on the infrastructure layer. Panelists include Paul Maritz (CEO of VMware, who, by the way, totally gets what I'm talking about here), Russ Daniels (CTO for cloud services at HP), Padmasree Warrior (CTO at Cisco), the inimitable Marc Benioff of Salesforce.com, Kevin Lynch, CTO of Adobe, and Dave Girouard, who is in charge of Google Apps for the Enterprise. Should be some interesting conversations on the subjects raised in this post!


Cloud theory is a powerful tool for moving naturally between the qualitative values of concepts and the quantitative values of numbers.

  Building on cloud theory, this article proposes a cloud-based method for realizing concept computation (also called simplified computation). It outlines the cloud model and uncertain reasoning; gives a logical description of computation, abstracting the computation process into a reasoning process; uses machine learning to describe how computation is "cloudified" and uses uncertain reasoning to describe how the cloud computes; and briefly sketches a system implementation of cloudified computation.

  As the Internet spreads, services such as "network storage" have steadily won users over, and online-service vendors naturally do not want to miss such a large market. A few days ago Microsoft launched Windows Live SkyDrive and began testing it broadly on the Internet; the capacity is only 500MB, but it is free, so the number of users is considerable. Google is no bystander either: just as SkyDrive made its debut, Google rolled out its own large-capacity network storage plan, though the fees it charges are still debatable. Apple's .Mac platform has also set its sights on the "network storage" pie and stands ready to join the "war" at any time.

  Recently, arstechnica and other major foreign sites ran an article titled "Google, Microsoft and Apple building online storage havens: you win", which triggered heated discussion among users worldwide. Fans of Microsoft, Google, and Apple argued on the major forums in strong support of their respective "camps"; for a while the battle raged everywhere, and a three-way standoff in network storage has become increasingly clear.

  Microsoft's SkyDrive takes a purely free route, though this may simply be Microsoft's "bait"; after all, there is no free lunch. Google ties capacity directly to dollars: you are welcome to use the service, but how much space you get is measured in US dollars. Upgrading from 2.8GB to 2.8GB+6GB may be acceptable, but paying 500 dollars outright for 250GB is extravagant, and a service confined to Google's own offerings leaves little room to grow. Apple's .Mac platform already offered network storage; its capacity has since grown from a once-unremarkable 1GB to 10GB at the same 99-dollar annual .Mac fee, and more capacity at the same price feels like a fair deal.

  Competition will inevitably drive prices down and service quality up, which is what ordinary users hope to see, and the era of ubiquitous network storage is coming. With luck, the network storage that grows out of the "cloud storage" of "cloud computing" will bring more pleasant surprises, flourishing the way today's IM (instant messaging) software does.

  To understand cloud computing in depth, you need to grasp the following five aspects (source: China Cloud Computing Network, www.cloudcomputing-china.cn).

  (1) Principles

  Cloud computing is an evolution of distributed computing, parallel computing, and grid computing; put another way, it is the commercial realization of these computer-science concepts.

  The basic principle of cloud computing is that by spreading computation across a large number of distributed computers, rather than keeping it on local computers or remote servers, an enterprise data center comes to run more like the Internet. This lets an enterprise shift resources to the applications that need them and access computers and storage systems on demand.
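  As a toy, single-machine sketch of that split-the-work-and-combine idea, assuming Python (a real cloud applies the same pattern across thousands of machines rather than local worker processes):

    from concurrent.futures import ProcessPoolExecutor

    def word_count(chunk: str) -> int:
        # The work performed on one piece of the data.
        return len(chunk.split())

    chunks = ["the quick brown fox", "jumps over", "the lazy dog"]

    if __name__ == "__main__":
        # Split the job, hand the pieces to a pool of workers,
        # then combine the partial results.
        with ProcessPoolExecutor() as pool:
            total = sum(pool.map(word_count, chunks))
        print(total)  # 9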

  This is a revolutionary move. By analogy, it is like shifting from the old model of a single generator to the model of centralized supply from a power plant. It means that computing power, too, can circulate as a commodity, like gas, water, and electricity: easy to tap and cheap to use. The biggest difference is that it is delivered over the Internet.

  The blueprint of cloud computing is already taking shape: in the future, a laptop or a mobile phone will be enough to accomplish everything we need through network services, even tasks like supercomputing. Seen this way, the end user is the true owner of cloud computing.

  The use of cloud computing embodies a simple idea: pool the strength of all and make it available to every member.

  (2) The era of the "cloud"

  Today the PC is still the core tool of our daily work and life: we use it to edit documents and store data, and we share information with others by email or USB drive. If the PC's hard disk fails, we are left helpless because our data is gone.

  In the era of "cloud computing", the "cloud" does the storing and computing for us. A "cloud" is a cluster of computers, each cluster comprising hundreds of thousands or even millions of machines. Another benefit is that the computers in it can be replaced at any time, so the "cloud" never grows old. Google already has several such "clouds", and other IT giants such as Microsoft, Yahoo!, and Amazon have them or are building them.

  At that point all we will need is a computer that can get online. We will not have to care which "cloud" our storage or computation lands on, yet whenever we need to, we will be able to compute against and retrieve that data quickly from anywhere, on any device, whether a computer or a phone. We will never again have to worry about losing data.

  (3) The main forms of cloud computing

  1. SaaS (Software as a Service)

  This type of cloud computing delivers a single program to thousands of users through the browser. For users, it removes spending on servers and software licenses; for the vendor, only one program has to be maintained, which cuts costs. Salesforce.com is by far the best-known company offering this kind of service. SaaS is common in human-resource-management applications and ERP. Google Apps and Zoho Office are similar services.

  2. Utility computing

  The idea is an old one, but it has only recently been reborn at Amazon.com, Sun, IBM, and other companies that offer storage services and virtual servers. This kind of cloud computing builds virtual data centers for the IT industry, pooling memory, I/O devices, storage, and computing power into a virtual resource pool that serves the whole network.

  3. Web services

  Closely related to SaaS, web-service providers offer APIs that let developers build more Internet-based applications rather than standalone programs (a brief sketch of the pattern follows this list).

  4. Platform as a Service

  A variant of SaaS, this form of cloud computing delivers the development environment itself as a service. You build your own programs on the intermediary's infrastructure and deliver them to your users over the Internet from its servers.

  5. MSP (Managed Service Providers)

  One of the oldest uses of cloud computing. It is aimed more at the IT industry than at end users and is commonly used for email virus scanning, application monitoring, and the like.

  6. Service commerce platforms

  A hybrid of SaaS and MSP, this kind of cloud computing provides a platform for interaction between users and providers: for example, a personal expense-management system that manages a user's spending according to their settings and coordinates the various services they have subscribed to.

  7. Internet integration

  Aggregating the companies that offer similar services on the Internet, so that users can more easily compare providers and choose their own.
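  As a sketch of the web-services form in point 3 above: a developer builds on someone else's service by calling its HTTP API instead of installing software. The endpoint below is hypothetical; the calling pattern is what matters.

    import requests  # third-party HTTP client for Python

    # Hypothetical book-search web service; real services (maps, search,
    # storage) are consumed in exactly the same way.
    resp = requests.get(
        "https://api.example.com/v1/books",
        params={"q": "cloud computing", "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for book in resp.json():  # assume the service returns a JSON list
        print(book["title"])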

  (4) The New York Times: what exactly does cloud computing mean?

  The phrase cloud computing is spreading fast. Gartner senior analyst Ben Pring remarks that "it is becoming an everyday term." The problem is that everyone seems to understand cloud computing differently. As a metaphor for the Internet, the "cloud" is easy to grasp; but once it is attached to "computing", its meaning widens and starts to blur. Some analysts and companies define cloud computing narrowly, as little more than an upgraded form of computing: essentially lots of virtual servers offered over the Internet. Others define it far more broadly, arguing that anything a user consumes outside the firewall sits in the "cloud".

  Cloud computing drew attention once people asked what the IT industry really needs: a way to add Internet capability and capacity without investing in new infrastructure, new staff, or new software. Cloud computing offers exactly that.

  Cloud computing is still in its infancy. Companies large and small offer all kinds of cloud services, from software applications to network storage to email filtering. Some are infrastructure providers; others are SaaS (Software as a Service) providers such as Salesforce.com. What has mostly been realized so far are Internet-based services for individuals, but the aggregation and integration of cloud computing are emerging.

  (5) Cheap "cloud" devices

  Once the "cloud era" arrives, almost all data, application software included, will live in the "cloud"; the terminal will do less and less, and this will set off a revolution in the hardware industry and the traditional software industry.

  Google calls the terminal of the future the "cloud" device. Its defining traits are a fully functional browser and a simple operating system, whether the device is a PC, a phone, an MP3 player, a car CD player, or even a watch. "Power it on, enter a username and password, and you pull your own applications from the cloud; it is simpler than keeping them on the terminal," as Kai-Fu Lee describes it. And because storage and computing power both live in the "cloud", the storage and computing capacity of future "cloud" devices will be pared back drastically.

  As Kai-Fu Lee sees it, the advantages of the "cloud" device are low cost, an open environment, and simplicity. Reportedly, because existing mobile operating systems were not open, Google designed the Android operating system specifically for "cloud computing". "Android was designed for the cloud era," Lee says. Android is a complete operating system with a full-featured browser; what sets it apart is that it is a "cloud" device platform built on open standards, offered to users free of charge, which can make phones cheaper.

  In Google's view, once the Internet era of "cloud computing" arrives and nearly all data and computing power move onto the network, cheap PCs and simple operating systems become the natural choice, and the resulting revolution in the hardware and software industries is already under way.

  In the PC era what you saw was a hardware speed race dictated by Moore's Law: the Wintel architecture fed on itself (you build a bigger operating system, I build a faster CPU), dragging the hard-disk and memory industries through upgrade after upgrade. But Lee argues that once the "cloud era" arrives, that model can no longer hold its value.

  "In the Internet era most of our activity happens in the browser, and the performance demands on the PC are not that high, so the hardware industry needs a new model," Lee says. In the "cloud era", a simple, open Linux operating system, a modest CPU, 256MB of memory, and less than 10GB of disk are enough for everyday applications, and so are phones, car CD players, MP3 players, even watches: connected, open, and cheap will be the bywords of the "cloud" device.

  "A chip maker naturally wants the fastest and most expensive parts, but on the desktop that is not where the business is headed," Lee says.

  Three major impacts of cloud computing

  First, cloud computing will give the Internet a richer meaning and change how Internet companies operate. In the past almost every application was installed and run on the client or against a server-side database; in the future, through cloud computing, more applications can be delivered as Internet services. Google, one of the pioneers of cloud computing, even insists that in the future nearly all software can move onto the Internet, with services replacing software. That view is too extreme, of course; not every application will rely entirely on the network for storage and computation, but cloud computing as a delivery model will become the choice of more enterprises and individuals, and that in turn will force Internet companies to change how they run their businesses.

  Second, cloud computing will extend the reach of hardware and software and change how their products are used. A popular analogy likens cloud computing to centralized power generation: customers no longer buy generators; they simply buy electricity from the plant. Through cloud computing, users can obtain an application environment, or the application itself, without buying new servers or deploying software. For users, hardware and software no longer have to be products sitting on their own premises and belonging exclusively to them; they become usable, virtualized resources. And the usable resources are no longer limited to the equipment and software inside one's own enterprise; they can be extended over the network.

  Third, the direction of IT product development will shift to fit the two changes above. For some time now the industry giants have been announcing cloud computing strategies to match this trend. Intel says future technology development will be closely tied to applications in the "cloud", that the servers and platforms it designs will evolve in that direction, and that new goals will be added to its roadmap. IBM has placed an even bigger bet on cloud computing with its "Blue Cloud" initiative, and has already assigned more than 200 researchers to the work. Terminal devices positioned specifically for cloud applications, dubbed cloud computers, have recently been launched abroad; they target people who need a PC but use only the web and email and want a low price, and more PC vendors are beginning to plan products along these lines.

  Where cloud computing came from

  Cloud computing is hailed by its promoters as a "revolutionary computing model" because it lets supercomputing power flow freely over the Internet. Enterprises and individual users no longer need to sink money into expensive hardware; they simply buy or rent computing power over the Internet: "Treat your computer as the access point and leave everything else to the Internet."

  "640K of memory ought to be enough for anybody," Bill Gates said in 1989 while discussing "the past, present, and future of computer science". Back then every program was small and frugal, and a 100MB hard disk seemed impossible to fill. The Internet was still being developed in the lab, and the hypertext protocol had only just been proposed; their widespread adoption would begin six years later.

  Today (2008), on websites that offer PC-building services you can find listings like this: the standard configuration for an ordinary office worker is a low-end Core 2 Duo, 1GB of memory, and a 100GB hard disk, and terabyte-class hard disks will soon reach home machines.

  Behind the rapid climb in hardware specs is the even faster growth of data on the Internet, which is testing the limits of the imagination. When "massive data" was first proposed as a concept, it was measured in gigabytes; today that is merely the data volume of a small website. One cannot help wondering: if one day the data available on the Internet is a thousand times what it is now, or more, what will our PCs look like? What form will hardware evolve into? Or will the personal computer simply not have to shoulder such massive data and computation at all?

  Cloud computing offers another possibility.

Reposted from: https://www.cnblogs.com/maqintoshi/archive/2009/01/29/cloudcomputing.html
