ICML 2019 Analysis


Word Embeddings

Understanding the Origins of Bias in Word Embeddings

Popular word embedding algorithms exhibit stereotypical biases, such as gender bias.

The widespread use of these algorithms in machine learning systems can amplify stereotypes in important contexts.

Although some methods have been developed to mitigate this problem, how word embedding biases arise during training is poorly understood.

In this work we develop a technique to address this question.

Given a word embedding, our method reveals how perturbing the training corpus would affect the resulting embedding bias.

By tracing the origins of word embedding bias back to the original training documents, one can identify subsets of documents whose removal would most reduce bias.

We demonstrate our methodology on Wikipedia and New York Times corpora, and find it to be very accurate.
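The abstract does not spell out its bias metric, but a common baseline it builds on is measuring how strongly word vectors project onto a gender direction. The sketch below illustrates that metric only — not the paper's corpus-perturbation technique — using invented toy vectors:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Toy embeddings (invented values, for illustration only).
emb = {
    "he":     np.array([ 1.0, 0.2, 0.1]),
    "she":    np.array([-1.0, 0.2, 0.1]),
    "doctor": np.array([ 0.4, 0.9, 0.3]),
    "nurse":  np.array([-0.5, 0.8, 0.3]),
}

# Gender direction: difference of a definitional pair.
g = unit(emb["he"] - emb["she"])

def bias(word):
    # Projection of a word's unit vector onto the gender direction;
    # the sign indicates which gender pole the word leans toward.
    return float(unit(emb[word]) @ g)

print(round(bias("doctor"), 3))  # positive: leans toward "he"
print(round(bias("nurse"), 3))   # negative: leans toward "she"
```

The paper's contribution is tracing a metric like this back through training to individual documents; the projection itself is just the quantity being attributed.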


Analogies Explained: Towards Understanding Word Embeddings

Word embeddings generated by neural network methods such as word2vec (W2V) are well known to exhibit seemingly linear behaviour, e.g. the embeddings of the analogy "woman is to queen as man is to king" approximately describe a parallelogram.

This property is particularly intriguing since the embeddings are not trained to achieve it.

Several explanations have been proposed, but each introduces assumptions that do not hold in practice.

We derive a probabilistically grounded definition of paraphrasing that we re-interpret as word transformation, a mathematical description of "\(w_x\) is to \(w_y\)".

From these concepts we prove the existence of linear relationships between W2V-type embeddings that underlie the analogical phenomenon, identifying explicit error terms.
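The "parallelogram" behaviour the abstract describes is the standard vector-offset analogy test. A minimal sketch with invented toy vectors (not trained embeddings) shows the mechanics:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Toy vectors arranged so the offset rule works (invented for illustration).
emb = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "banana": np.array([0.2, 0.5, 0.2]),
}

def analogy(a, b, c):
    # Solve "a is to b as c is to ?" by the vector-offset rule:
    # target = emb[b] - emb[a] + emb[c], then the nearest remaining
    # word by cosine similarity (query words excluded, as is standard).
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: unit(emb[w]) @ unit(target))

print(analogy("man", "king", "woman"))  # → queen
```

The paper's point is that nothing in the W2V objective asks for this geometry; its derivation explains why the offsets nonetheless line up, with explicit error terms.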


Humor in Word Embeddings: Cockamamie Gobbledegook for Nincompoops

While humor is often thought to be beyond the reach of Natural Language Processing, we show that several aspects of single-word humor correlate with simple linear directions in Word Embeddings.

In particular:

(a) the word vectors capture multiple aspects discussed in humor theories from various disciplines;

(b) each individual's sense of humor can be represented by a vector, which can predict differences in people's senses of humor on new, unrated, words; and

(c) upon clustering humor ratings of multiple demographic groups, different humor preferences emerge across the different groups.

Humor ratings are taken from the work of Engelthaler and Hills (2017) as well as from an original crowdsourcing study of 120,000 words.

Our dataset further includes annotations for the theoretically-motivated humor features we identify.
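Point (b) — an individual's sense of humor as a vector that scores unseen words — can be sketched as a least-squares fit from word vectors to that person's ratings. Everything below (vectors, ratings, words) is invented for illustration; the paper's actual features and data differ:

```python
import numpy as np

# Toy 2-d word vectors and one rater's humor ratings (invented).
words = ["gobbledegook", "nincompoop", "invoice", "spreadsheet"]
X = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.1, 0.9],
    [0.2, 0.8],
])
ratings = np.array([4.5, 4.0, 1.0, 1.5])

# The rater's "humor vector" h: least-squares solution so that
# X @ h approximates this rater's ratings.
h, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# Score a new, unrated word by projecting its vector onto h.
new_word = np.array([0.85, 0.15])
print(round(float(new_word @ h), 2))
```

Because h is just a direction in embedding space, comparing different raters' vectors (or cluster centroids, as in point (c)) reduces to comparing these learned directions.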


Reposted from: https://www.cnblogs.com/fengyubo/p/11088776.html
