4 Approaches To Natural Language Processing & Understanding

by Mariya Yao

In 1971, Terry Winograd wrote the SHRDLU program while completing his PhD at MIT.

SHRDLU features a world of toy blocks where the computer translates human commands into physical actions, such as “move the red pyramid next to the blue cube.”

To succeed at such tasks, the computer must build up semantic knowledge iteratively, a process Winograd found to be brittle and limited.

The rise of chatbots and voice-activated technologies has renewed fervor in natural language processing (NLP) and natural language understanding (NLU) techniques that can produce satisfying human-computer dialogs.

Unfortunately, academic breakthroughs have not yet translated into improved user experience. Gizmodo writer Darren Orf declared Messenger chatbots “frustrating and useless” and Facebook admitted a 70% failure rate for their highly anticipated conversational assistant, “M.”

Nevertheless, researchers forge ahead with new plans of attack, occasionally revisiting the same tactics and principles Winograd tried in the 70s.

OpenAI recently leveraged reinforcement learning to teach agents to design their own language by “dropping them into a set of simple worlds, giving them the ability to communicate, and then giving them goals that can be best achieved by communicating with other agents.” The agents independently developed a simple “grounded” language.

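For readers who want to see the mechanics, here is a minimal sketch of the kind of signalling game that underlies such experiments: two tabular agents trained with REINFORCE invent a shared code for a referential task. The environment, reward scheme, and agent design below are simplifying assumptions for illustration, not OpenAI's actual setup.

# A minimal sketch (not OpenAI's code): two tabular agents invent a shared
# "language" for a referential game via REINFORCE. The speaker sees one of
# N objects and emits one of M symbols; the listener sees only the symbol
# and guesses the object. A shared reward pushes them toward a common code.
import numpy as np

N_OBJECTS, N_SYMBOLS, LR, EPISODES = 5, 5, 0.1, 20000
rng = np.random.default_rng(0)

speaker = np.zeros((N_OBJECTS, N_SYMBOLS))   # logits: object -> symbol
listener = np.zeros((N_SYMBOLS, N_OBJECTS))  # logits: symbol -> guessed object

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

baseline = 0.0
for _ in range(EPISODES):
    obj = rng.integers(N_OBJECTS)
    p_msg = softmax(speaker[obj])
    msg = rng.choice(N_SYMBOLS, p=p_msg)
    p_guess = softmax(listener[msg])
    guess = rng.choice(N_OBJECTS, p=p_guess)

    reward = 1.0 if guess == obj else 0.0
    advantage = reward - baseline
    baseline += 0.01 * (reward - baseline)   # running-mean baseline

    # REINFORCE update: grad of log softmax prob of the chosen action
    # w.r.t. the logits is onehot(action) - probs
    speaker[obj] += LR * advantage * (np.eye(N_SYMBOLS)[msg] - p_msg)
    listener[msg] += LR * advantage * (np.eye(N_OBJECTS)[guess] - p_guess)

print("speaker code:", speaker.argmax(axis=1))
print("listener decode:", listener.argmax(axis=1))

Run long enough, the printed mappings typically pair each object with a distinct symbol that the listener decodes correctly, which is the sense in which a shared protocol emerges from the reward alone.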

MIT Media Lab presents this satisfying clarification on what “grounded” means in the context of language:

“Language is grounded in experience. Unlike dictionaries which define words in terms of other words, humans understand many basic words in terms of associations with sensory-motor experiences. People must interact physically with their world to grasp the essence of words like “red,” “heavy,” and “above.” Abstract words are acquired only in relation to more concretely grounded terms. Grounding is thus a fundamental aspect of spoken language, which enables humans to acquire and to use words and sentences in context.”

The antithesis of grounded language is inferred language. Inferred language derives meaning from words themselves rather than what they represent.

When trained only on large corpuses of text — but not on real-world representations — statistical methods for NLP and NLU lack true understanding of what words mean.

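As a toy illustration of inferred meaning (a simplified sketch, not an example from the article), the snippet below builds word vectors purely from co-occurrence counts over a three-sentence corpus; whatever similarity it finds between “red” and “blue” comes from shared textual contexts, never from any contact with colour.

# Toy distributional model: word "meaning" inferred from co-occurrence alone.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the red block is above the blue block".split(),
    "the heavy red pyramid sits above the cube".split(),
    "move the red pyramid next to the blue cube".split(),
]

window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if i != j:
                cooc[word][sentence[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / ((norm(u) * norm(v)) or 1.0)

# Any similarity between "red" and "blue" reflects shared contexts such as
# "the ... block"; the model has no notion of what a colour is.
print(cosine(cooc["red"], cooc["blue"]))

Real systems use far larger corpora and models such as word2vec, but the principle is the same: the representation is built from text about the world, not from the world itself.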

OpenAI points out that such approaches share the weaknesses revealed by John Searle’s famous Chinese Room thought experiment: a system that merely manipulates symbols can appear fluent without any understanding of what those symbols mean.
