Linus, the father of Linux, says: parallel computing is basically a waste of everyone's time



This article is a translation of a post recently published by Linus Torvalds, the creator of Linux; the original English post is appended at the end.

What is parallel computing good for?

Hardware performance will not keep scaling forever, and the current trend is actually toward lower power consumption. So what is the upside of pushing parallelism as a cure-all? We already know that reasonably complex out-of-order (OoO) CPUs are necessary, because people want decent performance, and out-of-order execution has turned out to be more efficient than in-order execution.

Promoting so-called "parallelism" is a huge waste of everyone's time. The lofty notion that "parallel is more efficient" is pure nonsense. Big caches are what improve efficiency. Parallelism across small cores without caches is pointless, unless the workload consists of large amounts of highly regular computation (graphics, for example).
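To make this contrast concrete, here is a minimal sketch (mine, not Linus's; the function names and workloads are made up for illustration): a regular, graphics-style loop such as SAXPY splits across cores almost for free, while a pointer-chasing traversal is bound by memory latency, so it benefits from a bigger cache rather than from more small cores.

/* Illustrative sketch (not from the original post). */
#include <stddef.h>

/* Regular workload: every iteration is independent, so it can be spread
 * across cores (or SIMD lanes, or GPU threads) almost for free. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    #pragma omp parallel for   /* ignored unless built with OpenMP */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Irregular workload: each step depends on the previous load, so extra
 * cores cannot help; memory latency dominates, and only a cache helps. */
struct node { struct node *next; long value; };

long sum_list(const struct node *head)
{
    long sum = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        sum += p->value;
    return sum;
}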

Nobody is going back to the old days. Those complex out-of-order cores are not going away. Scaling will not continue forever, and what people want is mobility, so those advocating scaling up to hundreds of cores are crazy; don't pay them any attention.

Where exactly do they imagine those magical parallel algorithms would ever be used?

Parallelism only makes sense for graphics and for servers, and in those areas we already use it heavily. Pushing it into other areas is pointless.

So forget about parallelism. It is not going to happen. Around four cores is fine for end users, and in the mobile space you cannot cram in any more cores without a large increase in power consumption. No sane person would cut cores down in size and performance just to fit more of them in; the only reason to make cores smaller and weaker is to push power consumption even lower, so you still would not end up with a large number of cores.
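A rough back-of-the-envelope calculation (again mine, not from the post) shows why piling on cores buys so little for ordinary interactive use. If we generously assume that half of the work the user is waiting on parallelizes perfectly, Amdahl's law caps the speedup:

/* Amdahl's law: with parallel fraction p on n cores, the best-case speedup
 * is 1 / ((1 - p) + p / n). The p = 0.5 figure below is an assumption for
 * illustration, not a measurement. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    const double p = 0.5;
    const int cores[] = { 2, 4, 8, 100 };

    for (int i = 0; i < 4; i++)
        printf("%3d cores -> %.2fx speedup\n", cores[i], amdahl(p, cores[i]));
    /* Roughly 1.33x, 1.60x, 1.78x and 1.98x: going from 4 cores to 100
     * gains almost nothing, while the extra cores still cost power. */
    return 0;
}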

So the whole debate about whether programs should be written for parallelism is based on a fallacy; its premises are simply wrong. It is just a buzzword that should have gone out of fashion long ago.

Parallel code is useful in the areas mentioned above, and it is already heavily used there; in the server space, for instance, people have been doing parallelism for many years.
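Why servers are the easy case is worth spelling out: requests from different clients are independent of one another, so the parallelism is already there in the workload. A minimal sketch (mine, with a placeholder handler, not anything from the post):

/* Thread-per-request sketch: each request is independent, so running them on
 * separate threads scales across cores without any clever parallel algorithm.
 * handle_request() is a stand-in for real parsing / database / reply work. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *handle_request(void *arg)
{
    int id = *(int *)arg;
    /* ... parse the request, touch the cache or database, send a reply ... */
    printf("request %d handled\n", id);
    free(arg);
    return NULL;
}

int main(void)
{
    enum { NREQ = 8 };
    pthread_t workers[NREQ];

    for (int i = 0; i < NREQ; i++) {
        int *id = malloc(sizeof *id);
        *id = i;
        pthread_create(&workers[i], NULL, handle_request, id);
    }
    for (int i = 0; i < NREQ; i++)
        pthread_join(workers[i], NULL);
    return 0;
}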

In other areas, parallelism is not necessarily needed, even in completely new areas we do not tackle today because we cannot afford to. If you want to do low-power, ubiquitous computer vision, I can pretty much guarantee you will not do it with code running on a general-purpose CPU (GP CPU). You probably will not even do it on a GPU, because even that consumes too much power. You will most likely use specialized hardware, probably based on some neural-network model.

Give it up. The whole "parallel computing is the future" line is nothing but hot air.

Linus




Avoiding ping pong

By:  Linus Torvalds (torvalds.delete@this.linux-foundation.org), December 8, 2014 1:34 pm
Room:  Moderated Discussions
Jouni Osmala (josmala.delete@this.cc.hut.fi) on December 8, 2014 1:10 pm wrote:

> I'm assuming that 90+% of programs already run fast enough and they don't matter for this.
> Its all about asking question in what use current computers are too slow , and can you parallerize 
> that or are those cases already parallel. And I'm assuming you can parallerize atleast 10% 
> of those times where user waits CPU for long enough to actually notice it.

What's the advantage?

You won't get scaling for much longer, and current trends are actually for lower power anyway. So what's the upside of pushing the whole parallelism snake-oil? We know that we need fairly complex OoO CPU's anyway, because people want reasonable performance and it turns out OoO is actually more efficient than slow in-order.

The whole "let's parallelize" thing is a huge waste of everybody's time. There's this huge body of "knowledge" that parallel is somehow more efficient, and that whole huge body is pure and utter garbage. Big caches are efficient. Parallel stupid small cores without caches are  horrible unless you have a very specific load that is hugely regular (ie graphics).

Nobody is ever going to go backwards from where we are today. Those complex OoO cores aren't going away. Scaling isn't going to continue forever, and people want mobility, so the crazies talking about scaling to hundreds of cores are just that - crazy. Why give them an ounce of credibility? 

Where the hell do you envision that those magical parallel algorithms would be used? 

The only place where parallelism matters is in graphics or on the server side, where we already largely have it. Pushing it anywhere else is just pointless.

So give up on parallelism already. It's not going to happen. End users are fine with roughly on the order of four cores, and you can't fit any more anyway without using too much energy to be practical in that space. And nobody sane would make the cores smaller and weaker in order to fit more of them - the only reason to make them smaller and weaker is because you want to go even further down in power use, so you'd *still* not have lots of those weak cores.

So the whole argument that people should parallelise their code is fundamentally flawed. It rests on incorrect assumptions. It's a fad that has been going on too long.

Parallel code makes sense in the few cases I mentioned, where we already largely have it covered, because in the server space, people have been parallel for a long time.

It does *not* necessarily make sense elsewhere. Even in completely new areas that we don't do today because you cant' afford it. If you want to do low-power ubiquotous computer vision etc, I can pretty much guarantee that you're not going to do it with code on a GP CPU. You're likely not even going to do it on a GPU because even that is too expensive (power wise), but with specialized hardware, probably based on some neural network model.

Give it up. The whole "parallel computing is the future" is a bunch of crock.

Linus




















