12 XKCD Strips That Show the Truth About AI


XKCD, a 15-year-old “webcomic of romance, sarcasm, math, and language,” ingeniously distills complex ideas, like AI, into simple strips.

XKCD graciously allows re-printing with attribution, so here are 12 XKCD strips that show the truth about AI.

Biological vs Artificial Neural Nets

XKCD #2173

Your brain is an interconnected network of 86 billion neurons — a neural net, if you will. Artificial neural nets are inspired by this design, and while the simplest construct — a perceptron — is just a single neuron, modern neural nets (NNs) can reach up to a billion weights and millions of neurons.

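
The simplest construct is easy to sketch in code: a perceptron is just a weighted sum pushed through a step activation. The weights below are hand-picked for illustration, not learned from data.

```python
# A perceptron: a single artificial neuron. It computes a weighted sum
# of its inputs plus a bias, then applies a step activation.
def perceptron(inputs, weights, bias):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# With hand-picked weights, one neuron can act as a logical AND gate:
print(perceptron([1, 1], weights=[1, 1], bias=-1.5))  # 1
print(perceptron([1, 0], weights=[1, 1], bias=-1.5))  # 0
```

Modern networks stack millions of such units and learn the weights from data rather than setting them by hand.
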
By learning patterns from data, NNs can accomplish a wide range of tasks, from image recognition to forecasting.

However, if we take it too far, we might waste time building models when doing the work manually would work better.

Fake AI

XKCD #1897

By all accounts, AI seems ubiquitous. Turn to Product Hunt, Twitter, or /r/startups, and it’ll look like new AI solutions are popping up every minute.

But is that really the case — or are some companies “cheating”? As it turns out, several companies have been caught in the act, claiming to use AI while actually outsourcing menial tasks. As Forbes reports, these are just some of the companies guilty of “pseudo-AI”:

  • Hanson Robotics
  • X.AI
  • Clara Labs

Theory

XKCD #1450

Though futurists and thought leaders might tell you otherwise, we really have no idea what will happen after a superintelligent AI is created.

Will it be benevolent? Malevolent? Neutral? Non-sentient yet superintelligent? Whatever happens may surprise us all.

Data Pipelines

XKCD #2054

Building data pipelines isn't easy. To build data products, you need to be able to collect data from potentially millions of users and process the results in near real-time. Your pipeline needs to be robust, scalable, and efficient, with monitoring capabilities.

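
In miniature, that collect-and-process flow can be sketched with Python generators, so records stream through one at a time. The record fields ("user", "value") are invented for illustration.

```python
# Toy pipeline: ingest -> clean -> aggregate. Generators keep memory
# usage flat no matter how many records stream through.
def ingest(raw_records):
    for record in raw_records:
        yield record

def clean(records):
    # Drop malformed records rather than crashing the whole pipeline.
    for record in records:
        if isinstance(record.get("value"), (int, float)):
            yield record

def aggregate(records):
    totals = {}
    for record in records:
        totals[record["user"]] = totals.get(record["user"], 0) + record["value"]
    return totals

raw = [{"user": "a", "value": 3}, {"user": "b", "value": "oops"}, {"user": "a", "value": 2}]
print(aggregate(clean(ingest(raw))))  # {'a': 5}
```

A production pipeline adds everything this sketch lacks: durable queues, retries, backpressure, and monitoring.
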
Given how difficult that is, many pipelines don't check all the boxes.

Training

XKCD #2265

By training a neural network — or passing training data through a composite function many times, so as to learn patterns — we can predict new data. If you don’t train long enough, your model will “underfit,” or simply not have learned patterns in the data.

You might end up with a chatbot that speaks gibberish, or a self-driving car that only drives straight.

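
Underfitting is easy to see in a toy example: fitting y = 2x with gradient descent on a single weight. Stop training too early and the weight never gets close to the true value. The data and learning rate below are made up for the demo.

```python
# Fit y = 2x by gradient descent on one weight w, minimizing mean
# squared error. Too few steps leaves w far from 2.0: underfitting.
def train(steps, lr=0.01):
    xs = [1.0, 2.0, 3.0]
    ys = [2.0, 4.0, 6.0]  # the true pattern: y = 2x
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

print(round(train(steps=1), 2))    # far from 2.0: underfit
print(round(train(steps=100), 2))  # 2.0: the pattern was learned
```
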
Chatbots

XKCD #948

Cleverbot’s predecessor, Jabberwacky, went online in 1997, so there’s a long history of somewhat-decent chatbots.

However, there’s a big difference between an AI that creates something new and unique, and one that just retrieves what humans have done or said in the past — like Cleverbot.

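
A retrieval-style bot can be sketched in a few lines: it never generates anything new, it just returns the canned reply whose stored prompt best overlaps the input. The tiny corpus here is invented for illustration.

```python
# A tiny retrieval "chatbot": no generation, just lookup. It answers
# with the reply whose stored prompt shares the most words with the input.
CORPUS = {
    "hello there": "Hi! How are you?",
    "what is your name": "I'm a very simple bot.",
    "tell me about the weather": "I hear it's sunny somewhere.",
}

def respond(message):
    words = set(message.lower().split())
    best = max(CORPUS, key=lambda prompt: len(words & set(prompt.split())))
    return CORPUS[best]

print(respond("hello"))             # Hi! How are you?
print(respond("what's your name"))  # I'm a very simple bot.
```

Everything it "says" was written by a human beforehand, which is the distinction the strip is poking at.
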
To be fair, modern chatbots have drastically improved on the technology, and are astonishingly accurate.

Data ➡️ Answers

XKCD #1838

For all the progress we’ve made on AI, relatively little has been done in the way of explainability. While the idea of “black box” AI is a bit of a myth — as there are ways to interpret the results — there isn’t complete transparency or intuition into how most AI models, especially deep learning, really work under the hood.

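
One of those interpretation techniques, permutation importance, fits in a few lines: scramble one feature at a time and measure how much accuracy drops. The "model" and data below are stand-ins invented for the demo, and the column is reversed rather than randomly shuffled to keep the result deterministic (real implementations shuffle randomly and average over repeats).

```python
# Permutation importance: break one feature at a time and see how much
# the model's accuracy suffers. A big drop means the model relied on it.
def model(row):
    # Stand-in "black box": secretly it only looks at feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature):
    permuted = [list(r) for r in rows]
    reversed_column = [r[feature] for r in rows][::-1]  # deterministic "shuffle"
    for r, v in zip(permuted, reversed_column):
        r[feature] = v
    return accuracy(rows, labels) - accuracy(permuted, labels)

rows = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
labels = [1, 0, 1, 0]
print(permutation_importance(rows, labels, feature=0))  # 1.0: relied upon
print(permutation_importance(rows, labels, feature=1))  # 0.0: ignored
```
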
Sentience

XKCD #1626

Asimov’s third law of robotics states that a robot must protect its own existence, so long as doing so doesn’t conflict with the first two laws. Nuclear blasts trigger EMPs that destroy electronics, so a sentient AI that abides by Asimov’s laws may seek to destroy our nuclear weapons, rather than use them against us.

Shouldn’t we be worried about humans?

XKCD #1955

There’s a lot of fear-mongering about AI, some of it justified (AI may bake in racial and gender biases, AI may spur job loss, and so on), some of it irrational (AI will kill us).

However, all this fear forgets one thing: humans, not AI, are the danger. The unfortunate truth is that, by some estimates, up to a billion people have been killed in wars throughout history.

Turing Test

XKCD #329

In the seminal paper on artificial intelligence, “Computing Machinery and Intelligence,” Turing asked: “Can machines think?” — or, more accurately, can a machine imitate thought?

So far, the answer is “no,” but we’ll likely get there one day, and perhaps raise the bar with a new test.

Humans are better at…

XKCD #1263

With every new AI advancement, cynics keep shifting the goalposts, and AI keeps catching up.

Easy vs Impossible Tasks

XKCD #1425

In computing, two tasks that may seem similar to a lay-person could easily be the difference between trivial and near-impossible. Today, years after the creation of the strip above, image recognition tasks have been made far easier, but many other tasks prove incredibly challenging.

For example, how can we create a neural network that is not only explainable, but intuitively interpretable? How can we create new state-of-the-art neural networks without simply adding more compute, more data, and more parameters? How can we create general AI, as opposed to narrow AI? How can we achieve level 5 autonomous driving, where the car must handle situations like a road worker holding a stop sign while motioning, via eye contact, for the driver next to you to proceed?

There are many unanswered questions in the field, which makes it all the more exciting!

Want More Content Like This?

If you’d like to learn more about AI and data science, then follow me on Towards Data Science, and Apteo’s data science blog that I contribute to.

Translated from: https://towardsdatascience.com/12-xkcd-strips-that-show-the-truth-about-ai-e09fbcd00c4c
