The Danger of Humanizing Algorithms

To many, 2016 marked the year when artificial intelligence (AI) came of age. AlphaGo triumphed against the world’s best human Go players, demonstrating the almost inexhaustible potential of artificial intelligence. Programs playing board games with superhuman skills like AlphaGo or AlphaZero have created unparalleled hype surrounding AI, and this has only been fueled by big data availability.

In this context, it is not surprising that public, business, and scientific interest in machine learning is unchecked. These programs can do more than beat a human player; they go so far as to invent new and ingenious gameplay. They learn from data, identify patterns, and make decisions based on those patterns. Depending on the application, decision-making occurs with little or no human intervention. Since data production is a continuous process, machine learning solutions adapt autonomously, learning from new information and previous operations. In 2016, AlphaGo used a total of 300,000 games as training data to achieve its excellent results.

Every guide out there on how to implement machine learning applications will tell you that you need a clear vision of the problem the application has to solve.

In many cases, machine learning applications are faster, more accurate, and time-saving, thereby shortening time-to-market, among other benefits. However, such an application will only address that specific problem, with the data it is given.

But does this learning correspond to the way humans learn? No, it does not. Not even remotely.

Mystery Unsolved: What Is Intelligence?

As humans, we have an idea of what we consider smart or intelligent. Scientifically speaking, however, intelligence proves almost impossible to grasp and understand.

There are several reasons for this, one of which is cultural. For example, in the West, being smart is associated with being quick: the person who answers a question the fastest is seen as the most intelligent. But in other cultures, being smart means considering an idea thoroughly before answering; a well-thought-out, contemplative answer is the best answer. Another reason is that we cannot measure all aspects of intelligence.

For developmental psychologist Howard Gardner, there are not one but nine domains of intelligence. Only three of those are measured by an IQ test:

  • Logical-mathematical
  • Linguistic
  • Spatial

However, the following six are entirely omitted by IQ tests:

  • Musical
  • Bodily-kinesthetic
  • Naturalistic
  • Interpersonal
  • Intrapersonal
  • Existential

Therefore, a high IQ does not mean success in life or indicate that a person has common sense or excellent interpersonal skills. Other theories on intelligence include Sternberg’s triarchic theory of intelligence.

What all psychological theories have in common is that intelligence is perceived as a broad term to describe human intellectual capabilities manifested in sophisticated cognitive accomplishments and high levels of motivation and self-awareness.

“Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings — ‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.” — Mainstream Science on Intelligence

However, in the context of machine learning, learning is statistical and should answer the following fundamental question (per The Nature of Statistical Learning): “What must one know a priori about an unknown functional dependency to estimate it on the basis of observations?”
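That question can be made concrete with a toy curve-fitting sketch. Everything specific here (the linear form of the dependency, the noise level, the sample size) is my own illustrative assumption, not from the article: the point is that once we grant some a priori knowledge about the shape of the unknown dependency, a handful of noisy observations suffice to estimate it.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "unknown" functional dependency, seen only through noisy samples.
def f(x):
    return 3.0 * x + 1.0

x = rng.uniform(-1.0, 1.0, 50)
y = f(x) + rng.normal(0.0, 0.1, 50)

# The a priori knowledge: we assume the dependency is linear. Given that
# assumption, least squares recovers the coefficients from observations alone.
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

print(slope, intercept)  # close to the true 3.0 and 1.0
```

Drop the linearity assumption and the same fifty points are compatible with infinitely many functions; the prior knowledge is what makes estimation possible at all.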

While Léon Bottou argues that this statistical nature is well understood and statistical machine learning methods are now commonplace, the quality of reasoning proves more elusive.

Anthropomorphism in AI

Anthropomorphism is the tendency to ascribe human characteristics to non-human objects (e.g. Bambi), and it is evident throughout the field. Just ask yourself, “What does artificial intelligence itself imply?”

If we look more closely at the statistical concepts that attempt to achieve reasoning, flawed terminology strikes especially hard in deep learning applications. In this subset of AI, artificial neural networks are built so that algorithms can learn from vast amounts of data. Inspired by the human brain, these networks are centered around building a learning machine that summarizes complex information into tangible results in order to accomplish a valuable task. Their advantage is the profound abstraction of the relations between the input data, the abstracted neuron values, and the output data, achieved through several layers of the network (while traditional neural networks only contain 2–3 hidden layers, deep networks can have as many as 150). But still, compared to an actual human brain, deep neural networks are brittle, inefficient, and myopic.
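To make the layering concrete, here is a minimal sketch of a forward pass through a small fully connected network. The layer widths and random weights are arbitrary illustrative choices, not anything from the article: the point is simply that each hidden layer re-abstracts the representation produced by the layer below it.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

# Arbitrary illustrative sizes: an 8-dimensional input, three hidden
# layers of width 16, and a 4-dimensional output.
layer_sizes = [8, 16, 16, 16, 4]
weights = [rng.normal(0.0, 0.5, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    # Each hidden layer transforms, i.e. re-abstracts, its input.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # final layer summarizes into the output

out = forward(rng.normal(size=8))
print(out.shape)  # (4,)
```

A deep network is this same pattern repeated many more times, with the weights learned from data rather than drawn at random.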

Brittle

Deep neural networks can be tricked easily by slight perturbations to their inputs. One glitch and deep-learning algorithms start to wildly mislabel objects in absurd ways. This critical distinction between biological and artificial neural networks poses a far-reaching challenge for deep neural networks in areas such as clinical medicine and autonomous driving.
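A toy illustration of that brittleness, using a bare linear classifier rather than a real deep network. The weights, the input, and the perturbation budget are all invented for this sketch; real adversarial attacks apply the same idea of a tiny, worst-case-aligned nudge to far richer models in far higher dimensions.

```python
import numpy as np

# Toy linear "classifier": predict the sign of w @ x.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.05, 0.1])   # w @ x = 0.15, classified positive

eps = 0.1                        # each coordinate moves by at most 0.1
x_adv = x - eps * np.sign(w)     # nudge every coordinate against the weights

print(np.sign(w @ x), np.sign(w @ x_adv))  # 1.0 -1.0: the label flips
```

A perturbation invisible at a glance (no coordinate changes by more than 0.1) is enough to flip the decision, which is exactly the failure mode that worries people in safety-critical applications.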

Inefficient

Data-hungry deep neural networks are inefficient, requiring vast amounts of training examples. Moreover, how these models process them frequently remains a mystery. We can tell a dog from a zebra right away; the brain-like network, however, needs training data to achieve this. This goes to show that human-level insight requires the capacity to go beyond raw information and deep-learning calculations. People construct models of the world as they see it, including everyday common-sense knowledge, and subsequently use these models to explain their actions and decisions.

Myopic

Let’s face it: deep-learning models lack this level of cognition. Whereas a human can instinctively tell that a cloud with the shape and features of a dog is not a genuine dog, a deep-learning algorithm will have trouble distinguishing between looking like something and being that thing.

And yet, we call it a neural network. If this is not brilliant marketing, then I don’t know what is. A three-month-old baby has a better grasp of what to make of its surroundings than any deep-learning application built to date.

This Rhetoric Is Misleading at Best and Downright Dangerous at Worst

Humans have always wanted to create machines that can think, learn, and reason. Current research in AI pushes us to look at specific algorithms claiming to be comparable to our human ways of thinking and, subsequently, reasoning. Because of this rhetoric, everybody expects intelligent androids to appear any day. And quite frankly, papers have shown that it’s not only the general public that is torn between science fiction, make-believe, and what can be accomplished.

How can we tell whether to take these descriptions literally or metaphorically?

Here is where it gets tricky. On the one hand, using anthropomorphic tendencies to describe AI phenomena can benefit future research in the field. On the other hand, it can also be a hindrance, if not outright dangerous, in socially sensitive applications. Why? Because the anthropomorphic tendency in AI is not ethically neutral.

What happens when we let algorithms decide in socially sensitive applications? For one, depending on the data fed into the system, we are potentially faced with racist, sexist, and otherwise discriminatory outcomes. Second, how can we sustain our ability to hold influential individuals and groups accountable for their technologically mediated actions?

It is of paramount importance to understand that the notion that machine learning technologies are humanlike in their ability to fully understand data (meaning finding patterns and exploiting them) is incorrect. While these applications are powerful (e.g. the Optometrist Algorithm), they merely mimic human intelligence.

And that is what is essential here: Such systems are powerful tools for good or for evil. Or, as David Watson put it in a recent article, “The choice, as ever, is ours.”

Translated from: https://medium.com/better-programming/the-danger-of-humanizing-algorithms-a9a0e1a5c8e6
