The Ethics of Algorithms

For five years the British government used a racist algorithm to help determine the outcome of visa applications. Last week they announced that it “needs to be rebuilt from the ground up”.

“The Home Office’s own independent review of the Windrush scandal found that it was oblivious to the racist assumptions and systems it operates. This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out.”

This is a colossal victory for everyone: the government has begun to identify parts of the racist machine and is now dismantling them. This is justice. But how did the government get their hands on a racist algorithm? Who designed it? And should they be punished?

‘Hand-Written’ Algorithms

Historically, algorithms were ‘hand-written’: the developers would manually set parameters that dictated the outcome of the algorithm. A famous example is Facebook’s EdgeRank algorithm, which considered a measly three factors (user affinity, content weighting, and time-based decay).

[Image: the EdgeRank scoring formula (source)]

The score between users is calculated from how ‘strong’ the developers of the algorithm perceive each interaction to be. For example, they might dictate that because you shared your mate’s post last week, you like them 20% more than another mate who made a similar post that you merely liked.

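To make the contrast concrete, here is a minimal sketch of what such a hand-written scorer could look like. The weights, interaction types, and decay rate below are invented for illustration; only the three-factor structure (affinity, weighting, decay) comes from EdgeRank:

```python
# A hand-written, EdgeRank-style scorer. Every number below is a developer's
# explicit (and here invented) choice, not Facebook's real value.
import time

# Developer-dictated interaction weights: a share 'counts' 20% more than a like.
EDGE_WEIGHTS = {"like": 1.0, "comment": 1.5, "share": 1.2}

def time_decay(age_seconds: float, half_life: float = 86_400.0) -> float:
    """Time-based decay: an interaction loses half its value per day."""
    return 0.5 ** (age_seconds / half_life)

def affinity_score(edges: list[dict]) -> float:
    """Sum weight * decay over every interaction ('edge') between two users."""
    now = time.time()
    return sum(
        EDGE_WEIGHTS[edge["type"]] * time_decay(now - edge["timestamp"])
        for edge in edges
    )

# Sharing a post yesterday scores 20% higher than liking a similar one.
yesterday = time.time() - 86_400
print(affinity_score([{"type": "share", "timestamp": yesterday}]))  # 0.6
print(affinity_score([{"type": "like", "timestamp": yesterday}]))   # 0.5
```

Because every one of these numbers was typed in by a person, every scoring decision can be traced back to a person.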

At that point, it would have been quite easy for Facebook to be accountable for the results of its algorithms: the developers told the algorithm exactly what to do, so they could be held liable for its outcomes. Unfortunately, this is no longer the case. Popular commercial algorithms have undergone a radical change in design, and the parameters that were once ‘hand-written’ are now decided by a ghost in the machine.

The Ghost in the Machine

From the beginning of 2011 until around 2015, Facebook used its new machine-learning algorithm to dictate what users saw on their newsfeeds. Instead of three parameters, this new beast considers at least 100,000 different factors, all weighted by the machine learning (ML) algorithm(s) (source — Facebook still uses an ML algorithm today). Not a single one of these parameters is known to the developers of the algorithm; the AI is a black box that spits out an answer based on whatever information it has previously been fed.

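The difference is easy to demonstrate with a toy model (a sketch, not Facebook's actual system): below, the weights are fitted by an optimiser from historical engagement data rather than chosen by a developer, and nothing in the code says what any individual weight means:

```python
# Toy contrast with the hand-written scorer: no human picks these weights.
# Illustrative only; 100 features stand in for Facebook's ~100,000 factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

X = rng.normal(size=(10_000, 100))              # opaque per-post signals
hidden = rng.normal(size=100)                   # structure buried in past behaviour
y = (X @ hidden + rng.normal(size=10_000)) > 0  # did the user engage?

model = LogisticRegression(max_iter=1_000).fit(X, y)

# The learned 'parameters' are whatever the optimiser produced; at real-world
# scale, no developer can explain any single one of them.
print(model.coef_[0][:5])
```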

[Image (source — excellent read)]

In “The ethics of algorithms: Mapping the debate”, the authors identify six ethical concerns raised by algorithms. The epistemic concerns (about the degree to which knowledge is validated) arise from poor datasets: without sound data, your AI isn’t about to make sound decisions. The results and behaviours these algorithms invoke create the normative concerns: they help establish new, undesirable norms from the low-quality datasets. And when it’s all said and done, none of these decisions can be traced back to their origins, and no one is held accountable for the outcomes.

Windrush Data

The ‘streaming tool’ (an ML algorithm) used by the British government ranked each visa applicant red, yellow, or green, and this rating heavily influenced the government’s decision on whether or not to grant a visa. The (racist) datasets fed into this algorithm were misguided, inconclusive, and inscrutable. The outcomes were determined by datasets created in the past, by people who judged someone a threat based solely on the color of their skin or their country of origin. Racist data was fed into the algorithm, and the algorithm made racist decisions.

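This ‘garbage in, garbage out’ dynamic is easy to reproduce in miniature. The sketch below is deliberately simplified and entirely invented (it is not the Home Office’s tool): a classifier trained on decisions that were biased against one nationality learns to hand the same bias back:

```python
# Invented, simplified illustration: a model trained on biased historical
# decisions reproduces the bias. NOT the Home Office's actual streaming tool.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 5_000

nationality = rng.integers(0, 2, size=n)  # 1 = historically 'targeted' group
merit = rng.normal(size=n)                # genuine quality of the application
# Past caseworkers rejected the targeted group far more often, merit aside.
rejected = (merit < -0.5) | ((nationality == 1) & (rng.random(n) < 0.6))

features = np.column_stack([nationality, merit])
model = DecisionTreeClassifier(max_depth=3).fit(features, rejected)

# Two applications with identical merit, differing only in nationality:
print(model.predict([[0, 0.0], [1, 0.0]]))  # typically [False, True]
```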

So it’s not the algorithm’s fault that the creators of the dataset were racist: the algorithm itself was completely neutral, and it only showed us a reflection of the shortcomings of the data it received. What can be done to improve the ethics of our algorithms and of our datasets in order to reduce unfair outcomes?

  • Transparent Algorithms would help developers and consumers understand the unethical biases within the machine; once identified, the model or dataset could be refined in order to eliminate the bias (see the sketch after this list).

  • Improved Data Regulation is required to prevent data from being misused to create unfair outcomes for minorities and for society as a whole.

  • Education will help everyone see without rose-tinted spectacles: to understand that it’s YOUR data and that you should have adequate rights to protect it. Failing to do so only feeds the unethical algorithms.

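As a sketch of what transparency can buy in practice (an invented example, not a tool from the article): train a model on biased labels, then inspect which features actually drive its decisions. A protected attribute dominating the ranking is exactly the kind of red flag a transparent pipeline lets you catch and refine away:

```python
# Illustrative transparency check: which features is the model leaning on?
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 2_000

X = np.column_stack([
    rng.integers(0, 2, size=n),  # column 0: nationality (protected attribute)
    rng.normal(size=n),          # column 1: a legitimate merit signal
])
y = (X[:, 0] == 1) & (rng.random(n) < 0.7)  # labels driven mostly by nationality

model = RandomForestClassifier(n_estimators=50, random_state=7).fit(X, y)

# A protected attribute dominating the importances is a red flag to audit.
for name, importance in zip(["nationality", "merit"], model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```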

Final Thoughts

Machine learning algorithms are popular for two very good reasons: they’re profitable, and AI is sexy. Naturally, there is a commercial interest in the former, and most computer science students have the latter on their mind.

I encourage everyone to talk to their friends and family about this topic because these algorithms are far more powerful than most people realise.

Popular algorithms are dictating how we think by choosing what we see on a daily basis: they pick our news, they pick our films, they pick our wardrobe, they pick our romantic partners, they pick our friends… And no one knows how they do it, so no one is accountable.

Translated from: https://medium.com/swlh/the-ethics-of-algorithms-1c69b87a656
