Facebook Is Hiding Its Failure to Keep Abuse Off Its Platform Behind Technocratic Reports

If you’ve never read a Facebook “Community Standards Enforcement Report,” you probably haven’t witnessed the technocratic language used to obscure the harm done by the platform. Bluntly, it’s appalling.

The report offers statistics about how much more effective the company is getting at removing material that violates its community standards. The most recent edition, covering April through June, was published earlier this month. These numbers are deceptive because Facebook’s team often speaks of solving a problem it directly created. It would be as if a meat-packer were intentionally poisoning its meat while also releasing stats about making that meat safer.

Facebook hosts, recommends, and amplifies hate as part of its business model while also touting its success at taking that hate down. One only needs to look at the “Kenosha Guard” militia group that may have spurred the double-murder of protesters earlier this week to see the impacts of this business model in real life. The technocratic language Facebook uses to describe its takedown numbers serves to hide the fact that behind such metrics are very real human beings who are targeted and traumatized by material that only exists because Facebook made it possible in the first place. Car companies use crash test dummies; Facebook exposes all of its billions of live users to “crashes” all the time.

One of Facebook’s key metrics in these enforcement reports is its “proactive detection rate,” which refers to the amount of content that was intercepted before users reported it. Take the “hate speech” category for example: In its most recent report, Facebook said it proactively detected 94.5% of the 22.5 million pieces of hate speech content identified in this time period. That leaves well over 1 million pieces of that content to be seen and reported by users. It’s essential to keep in mind that Facebook is speaking of a “who” when it provides these numbers, but it wants you to concentrate on the “what.”

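To make the arithmetic concrete, here is a minimal sketch in Python; the only inputs are the two figures cited in the report, and everything else is illustrative:

```python
# Back-of-the-envelope arithmetic behind the "proactive detection rate."
# The two inputs below are the figures Facebook cites in its report.

total_actioned = 22_500_000   # pieces of hate speech actioned, April-June
proactive_rate = 0.945        # share caught before any user report

detected_proactively = total_actioned * proactive_rate
reported_by_users = total_actioned - detected_proactively

print(f"Caught by automated systems: {detected_proactively:,.0f}")
print(f"Seen and reported by users:  {reported_by_users:,.0f}")
# Seen and reported by users: 1,237,500 -- the "well over 1 million"
# pieces that real people encountered before Facebook acted on them.
```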

We’ve long been told by Facebook’s top executives that the company’s A.I. is getting better at catching content that violates its “community standards,” but let’s take a look under the hood of the report.

According to a recent story in Fast Company, Facebook “reports the percentage of the content its A.I. systems detect versus the percentage reported by users. But those two numbers don’t add up to the whole universe of harmful content on the network. It represents only the toxic content Facebook sees.”

The “known unknowns,” if you will. This next part is crucial:

For the rest, Facebook intends to estimate the ‘prevalence’ of undetected toxic content, meaning the number of times its users are likely seeing it in their feeds. The estimate is derived by sampling content views on Facebook and Instagram, measuring the incidences of toxic content within those views, then extrapolating that number to the entire Facebook community. But Facebook has yet to produce prevalence numbers for hate posts and several other categories of harmful content.

Nor does Facebook actively report how many hours toxic posts missed by the A.I. stayed visible to users, or how many times they were shared, before their eventual removal. In addition, the company does not offer similar estimates for misinformation posts.

Facebook did not immediately respond to a request for comment about toxic content on its platform and whether it will provide more detailed information in its reports moving forward. For the time being, Facebook has a number that only accounts for what it sees, but there’s a whole ‘nother number that it has to estimate based on a sample. It calls this number “prevalence.”

In other words, there’s too much toxic content for Facebook to ever really see it all (hello, scale) so Facebook has a formula for determining the amount based on random sampling. Abstractly, perhaps everyone understands this, but to think about it another way: There’s so much sewage on the platform that the company must continually guess how much of it people are actually seeing.

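For illustration, here is a minimal sketch of what estimating prevalence by random sampling can look like. Everything in it, from the sampling frame to the toxicity rate to the function names, is hypothetical; Facebook has not published its methodology beyond the description quoted above:

```python
import random

def estimate_prevalence(views, sample_size, is_toxic):
    """Estimate the share of content views that are toxic by
    labeling a random sample and extrapolating to all views."""
    sample = random.sample(views, sample_size)
    toxic = sum(1 for view in sample if is_toxic(view))
    return toxic / sample_size

# Hypothetical universe: one million content views, ~0.1% of them toxic.
views = ["toxic" if random.random() < 0.001 else "benign"
         for _ in range(1_000_000)]

rate = estimate_prevalence(views, sample_size=50_000,
                           is_toxic=lambda v: v == "toxic")
print(f"Estimated prevalence: {rate:.4%}")
# rate * total views ~= platform-wide views of toxic content: an
# extrapolation from a sample, never a direct count of what users saw.
```

The point of the sketch is that last comment: the number Facebook calls “prevalence” is the output of an extrapolation, not an observation.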

Behind those numbers are people being exposed to virulent racism, misogyny, transphobia, holocaust denial, and dangerous misinformation and conspiracy theories. This is where Facebook’s language plays an important role. It’s not just “data” or numbers that these reports are referring to: it’s increments of hate spewing out, harming individuals, their communities, and increasingly, democracies and societies.

So, every time Facebook releases these numbers, it is asking us to think about all the people the company didn’t harm. All the misinformation it didn’t spread. All the conspiracies it didn’t promote. Most businesses don’t release reports saying “we didn’t poison” this many people, or “we traumatized 5% fewer people” during Q3 of this year. It would rightly be seen as unacceptable.

And this type of language also obscures the effects on commercial content moderators that Sarah Roberts writes about in Behind the Screen. Commercial content moderators are traumatized by the job of sifting through content on Facebook so that everyone else is traumatized less.

From Facebook’s report: “Lastly, because we’ve prioritized removing harmful content over measuring certain efforts during this time, we were unable to calculate the prevalence of violent and graphic content, and adult nudity and sexual activity.”

This is pretty astounding when you dig in. A multibillion-dollar company is telling us that because it prioritized removing one kind of harmful content, it is unable to concentrate on other kinds of content. Imagine something analogous from another business: “We spent so much time making sure there was no rat feces in your food, we weren’t able to screen for metal shavings.” “We dedicated most of our resources to seat belts, so this quarter, the brakes won’t work as well…”

In the report, Guy Rosen, Facebook’s VP of integrity, said: “We’ve made progress combating hate on our apps, but we know we have more to do to ensure everyone feels comfortable using our services.” This reveals the paradox (some might say lie) at the center of Facebook and its mission. Mark Zuckerberg, Sheryl Sandberg, and anyone else who represents Facebook in public life consistently says that the root of the company’s mission is connection. But the obvious and typically unstated corollary is that if your mission is to connect everyone, your company is necessarily going to connect some of the most odious and hateful individuals with like-minded people. And Facebook does these hateful people the extra favor of recommending them to even more people. Many journalists and researchers have argued that the move toward groups would make hate on the platform even more seamless and more difficult to detect, which it has.

Hate on the platform isn’t so much things going wrong on Facebook as it is the platform doing exactly what it’s designed to do. Thinking otherwise, and letting numbers obscure the effect of that toxicity, is giving Facebook what it wants — to be thought of as a force for good that has some unforeseen side effects. It’s not, and we shouldn’t treat it as such.

Translated from: https://onezero.medium.com/facebook-is-hiding-its-failure-to-keep-abuse-off-its-platform-behind-technocratic-reports-682d871ef1ca
