Fix the Internet

“Ban TikTok!” “Take down Twitter and Facebook!” “Remove everything factually inaccurate!”

For years, we’ve been treated to reactionary, high-octane political takes for addressing a range of problems in contemporary social media. Congress, as irresponsible and absurd in its hot takes on technology as any editorial page, has seen a range of proposals from across the political spectrum. Many of these takes — from pundits and members of Congress alike — are outright nonsense. They often lack a basic understanding of how the technology works, which companies are responsible for what products, and even the basic structures of the algorithms they want to regulate.

Though this fact — that some of the most powerful people in these discussions are wildly unqualified and ignorant — is grounds for despondency, let’s see if we can make sense of the Problem of the Internet.

Here’s a list of the problems with online content, and with social media content in particular.

That’s a lot of problems, many of them tangled together. The worst part is that many of our current proposals don’t actually address them: for example, the suggestion to move from one social media platform to another, or to ban certain platforms (like TikTok) from the American market.

For present purposes, then, let’s focus on more substantive proposals.

The Current State of Play

To understand what we might change about social media, we need to understand the current state of play.

There are a number of relevant laws. The one most frequently cited is Section 230 of the Communications Decency Act (CDA). The clause states that:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Section 230 distinguishes social media sites from publishers. Because it is users who are posting content to Facebook, and because that content is going up at such a high volume, it would be unfair to hold Facebook accountable for every status update that gets posted. Social media sites aren’t like The New York Times; they don’t have editors parsing all of the things that appear on the platform. Liability standards that apply to libel in the Times can’t be applied to Facebook.

One goal of Section 230 was to allow social media companies to exist without the substantial threat of constant and devastating litigation over every post. In the early days of the internet, this was a serious hazard. Indeed, the case of Stratton Oakmont v. Prodigy (1995) demonstrated that a big company could threaten to bankrupt a social media start-up.

Without Section 230, if some user posted a comment that was libelous, then the party suing for libel could in theory hold the social media company liable too, and sue them as the publisher. The authors of Section 230, then-Reps. Christopher Cox (R-Calif.) and Ron Wyden (D-Ore.), were concerned that allowing litigation against social media companies as publishers would create an enormous legal burden and curb the possibility of success for then-small social media companies.

In addition, such a legal burden could preclude internet companies from moderating content at all; indeed, the ruling in Stratton Oakmont v. Prodigy (1995) meant that if an internet company moderated posts (for bad language, say), it could be held liable for content. In addition to concerns about freedom, the authors of Section 230 worried that without it, the internet could also become more of a cesspit, since companies would be forced not to intervene at all, to avoid liability.

Avoiding situations like Stratton Oakmont v. Prodigy was Cox’s explicit motivation for Section 230. It’s not a hypothetical theory; it happened, and Cox and Wyden wanted to make sure it wouldn’t happen again.

Section 230 isn’t written willy-nilly. The language of “interactive computer service” and “provided by another information content provider” (emphasis mine) means that a digital publication that has editorial control like a normal publisher (e.g., an entirely online newspaper) wouldn’t get those liability shields.

As a result of Section 230, the moderation policies of social media companies are largely voluntary decisions. In theory, they could exist as platforms that allow a range of repugnant behavior without incurring liability, because they’re not the publishers; liability falls on the users. And because they’re not in danger of being sued, social media companies can make their own rules.

There are limits to this; recently, SESTA-FOSTA effectively removed the liability protection of Section 230 as it applies to sexually explicit content. There are serious problems with that legislation, as it is too coarse-grained and extends far beyond combating sex trafficking. SESTA/FOSTA is basically a masterclass in how legislating in ignorance of the structure of the internet can produce harmful consequences.

Of course, there are ways of discussing the liability of social media companies that would not require treating them as publishers. We could treat them as a new entity, set new rules for them. We could treat them as promoters or distributors of information, and try to bootstrap from existing liability rules. There are a few ways to go about this, and it’s worth getting into the weeds.

Repeal Section 230?

The most popular talking point in the discussion is straightforward: repeal Section 230. Make it the case that social media companies are subject to the same standards that other media companies are.

The argument in favor of repealing Section 230 focuses on the observation that the major social media companies are no longer subject to the existential threat of constant litigation. Facebook and Twitter have massive legal operations; they could defend themselves against lawsuits from private individuals or even other major companies. Mounting those defenses would cost them money, but it would not risk the destruction of those platforms. The reasoning is that Section 230’s explicit purpose, protecting the embryonic social media landscape that existed when the CDA was passed, simply no longer applies. We should get rid of a law that is no longer appropriate to the present case. Repealing Section 230 would mean that Facebook, Twitter, and other protected companies would have to take some responsibility for the content that they publish, and would be motivated to moderate content.

But here’s the rub. While major companies like Facebook and Twitter would be fine, repealing Section 230 would result in anti-competitive market practices and make it impossible for smaller companies to compete with the behemoths. It would create a legal overhead for smaller social media companies that those companies simply couldn’t carry. This is a standard response championed by a range of politicians who think market solutions are a potential check on bad business.

These arguments are too simplistic, but they do rest on a truth: Not all companies are like Facebook and Twitter. Changing liability structure because the giants can protect themselves neglects the companies that can’t.

But the more substantive argument against repealing Section 230 is not really about market versus legal solutions at all.

Repealing Section 230 rests on the idea that we want social media companies to be liable as publishers. By holding companies like Facebook and Twitter to the standard of publishers, we invoke the existing case law surrounding the liability of publishers. That means Curtis Publishing v. Butts, New York Times v. Sullivan, and any potential revisions of American libel law to come (as intimated by Justice Thomas).

This is a mistake, and a very serious one. The problems with social media and misinformation are not well-served by just throwing them into that case law.

There are significant disanalogies between social media companies and traditional news publishers. Traditional publishers (generally) pay writers for the content they publish; social media companies do not. What’s more, advertising revenue for papers is substantially different from the pay-per-click models popular in the social media age.

Perhaps the better approach is to create a system of liability and accountability that directly addresses the current state of the internet and how social media companies actually work.

What Can We (or Congress) Do?

The more specific, focused solutions fall into two non-exclusive approaches.

The first approach is legislating standards for the content moderation policy of the platform.

The second approach is legislating standards for the content promotion practices of the platform.

Both approaches have been subject to substantial discussion by policy experts, and occasional legislative proposals.

Content Moderation

Content moderation has to do with the policies governing what content is allowed on a platform and how the platform handles content that isn’t allowed.

Passive moderation is the current industry standard; under passive moderation, a company investigates and removes content flagged by users as violating the terms of service. If someone posts a video of animal cruelty on YouTube, then, under a passive moderation approach, taking that video down would involve users reporting the video for violating the terms of use, the company reviewing the video, and the company then deciding to take it down.

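The passive workflow described above can be sketched in a few lines. This is a minimal illustration, not any platform's real pipeline: the `Post` class, the flag threshold, and the stand-in `violates_terms` check are all invented for the example.

```python
# Illustrative sketch of passive moderation: nothing is reviewed until
# users flag it. All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    content: str
    flags: int = 0        # user reports accumulated so far
    removed: bool = False

FLAG_THRESHOLD = 3        # assumed: only sufficiently flagged posts reach review

def violates_terms(post: Post) -> bool:
    # Stand-in for a human reviewer's judgment call.
    return "animal cruelty" in post.content.lower()

def review_flagged(posts: list[Post]) -> list[int]:
    """Review only posts that users have flagged; return ids of removals."""
    removed = []
    for post in posts:
        if post.flags >= FLAG_THRESHOLD and violates_terms(post):
            post.removed = True
            removed.append(post.post_id)
    return removed
```

Note the defining feature of the passive approach: a violating post with zero flags is never even looked at. Active moderation, discussed next, would be the extra step of scanning unflagged content too.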
Active moderation, by contrast, is when the social media platform actively seeks out and takes down content that violates its terms of use. YouTube and other platforms do engage in some active moderation, though usually on a focused basis and in short bursts, rather than as a constant measure. The removal of QAnon and other conspiracy theory accounts from Twitter seems to have been active moderation. The removal of ISIL recruitment and execution videos from YouTube required active moderation.

One potential legal step is to create and clarify a set of standards for active moderation. What content is a social media site legally obligated to have moderators hunt down and remove? Some policies already exist, regarding content pertinent to piracy, sex trafficking, and other sorts of crimes, but there aren’t effective and enforced policies that provide specific guidance for companies on dealing with extremism, doxxing, harassment, and so on.

Clarifying what the obligations of platforms are regarding active moderation would help to specify when companies are and are not responsible for certain noxious content. If the content falls under the purview of such reforms, then companies would have an obligation to locate and get rid of that content. They would also be able to establish what content is only subject to passive moderation, what they might take down if reported but won’t actively seek out. At its best, the law can bring clarity to what the parties governed are obligated to do.

Of course, certain views of the law may well hold that the decisions about what should and shouldn’t be on the platforms are better left to private companies. This view serves to maintain the status quo. Given the persistence and severity of the social media problems that we currently face, one could argue this approach simply isn’t working.

What Content Should Be Restricted?

But what sorts of content should be subject to such a policy?

One obvious case is content that incites genocide. Normally, that would go without saying, but Facebook’s recent failures to moderate content directly inciting (and even coordinating) genocide in Myanmar underscore the need for some explicit standard.

Similarly, explicit policies requiring active moderation of coordinated child abuse, or of stalking with intent to harm, seem like useful and easy cases. Requiring that major social media platforms actively prevent the use of their platforms to cause serious harm, or at least make a credible, good-faith effort at prevention, is both useful and feasible. The major platforms are sufficiently massive and resource-rich that they can act on it.

There are going to be tricky cases, both for political and moral reasons. Is all content that is racist sufficiently noxious to be pulled from social media? Do we want the law to be the arbiter of what counts as racist? The limits on free speech and the responsibility of platforms that host such speech are subject to a range of thorny debates.

Constitutional law provides some guidance on this question, regarding the (pretty broad) legal permissibility of racist or homophobic hate speech. This raises a substantial issue, but not a prohibitive one.

But there’s an additional wrinkle in all of this, which shows up very rarely in the political commentary.

American law lays out restrictions on discrimination in public accommodation. Public accommodation includes public facilities like government buildings and institutions open to the public; it also includes businesses. The reason a business cannot refuse service on the basis of race or disability is because the business is a place of public accommodation; it is generally open to the public. Refusing access to a public accommodation on the basis of race is illegal, irrespective of whether the owner of that space is the government or a private company.

What is important here is the notion of discrimination on certain grounds in the presence of social media, and whether there is (or should be) a right protecting against discrimination across a range of cases. Social media platforms are places of public accommodation. They’re supposed to be, by design. Their goal is to connect users and part of the way that they achieve this is by keeping their services open to as wide an audience as possible. So, if Facebook were to say, “actually, we’re not going to allow black people on our platform,” then this would be a pretty clear violation of civil rights.

On the other hand, a store is allowed to refuse service to a disruptive prospective customer, ask that customer to leave, or otherwise bar them from the store. Being a jackass or troll is not protected, and so discrimination against an individual on the basis that the individual is being a jackass is allowed.

The challenge is that individuals sometimes justify their jackassery by appeal to membership in a class that would produce such a protection, and this may turn out to be the case with users of internet services who behave inappropriately. We have already seen complaints from users about moderation practices that they view as discriminating on the basis of their political affiliation or religion (political affiliation is not as strictly protected under this law, but religion absolutely is).

If someone is a virulent antisemite and uses Twitter to say that they believe Jews are a profound theological and existential threat, are necessarily evil, and so on, then can their speech be protected against removal (can their accounts be protected from bans) on the basis that such statements are religious?

If we’re going to change the law to make these companies more accountable, then we should be sensitive to how this impacts the problem cases, like the antisemite, homophobe, or racist who argues that their bigotry is a function of and protected by that bigotry’s grounding in religion.

Surveillance

One argument for banning some platforms or applications is the security of data gathered by those applications: Ban TikTok because they’re gathering huge amounts of data, probably on behalf of (or at least accessible by) the Chinese government. Same with FaceApp and Russia.

Those arguments are understandable, though they require a squeamish acknowledgement that it’s really just a choice about who we’re allowing to conduct surveillance (the American government and private American corporations, but not China and Russia) and that all of the “acceptable” surveillance is only as secure as the companies’ security around our gathered data … which is not great.

That’s a different can of very rotten worms.

As I’ve noted, there’s a strong argument that it would be wrong to hold social media platforms accountable as publishers for content because they just aren’t publishers. The publishing happens when someone posts; the person exercising agency in publication is not an employee of the social media platform, but the user.

Having said that, one of the major services (desired or not) offered by social media companies is in selecting what published content is visible. This occurs at a few levels, the most obvious being the paid advertising and promotion of content. Facebook, Twitter, YouTube, etc., all offer some users the ability to pay to increase the visibility of their posts, with those posts showing up in an advertising bar, or in users’ feeds, or even before users can proceed to other content.

Algorithmic promotion is also a standard feature of social media platforms’ newsfeeds. Facebook, for example, does not show every post by every person you follow; rather, it displays a selection of posts determined algorithmically, whether users want that or not.

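The selection described above can be sketched very simply: instead of showing every post from every followed account in order, the platform scores posts and surfaces a ranked subset. This is a toy illustration; the scoring weights, the promotion boost, and the names are all invented, and real ranking systems are vastly more complex.

```python
# Minimal sketch of algorithmic feed selection, with hypothetical weights.
from dataclasses import dataclass

@dataclass
class FeedPost:
    post_id: int
    likes: int
    is_promoted: bool = False  # paid promotion buys extra ranking weight

def score(post: FeedPost) -> float:
    s = float(post.likes)
    if post.is_promoted:
        s += 100.0             # assumed boost for paid promotion
    return s

def build_feed(posts: list[FeedPost], limit: int = 2) -> list[int]:
    """Return the ids of the top-scoring posts, not a chronological feed."""
    ranked = sorted(posts, key=score, reverse=True)
    return [p.post_id for p in ranked[:limit]]
```

Even in this toy version, the key property is visible: a lightly liked but paid-for post can outrank organically popular content, and anything below the cutoff simply never appears, whether users want that or not.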
When Twitter changed its interface to function in this way, there was an outcry, but Twitter retains other features that produce the same effect (like recommendations for local trends and trends of interest). This service is not necessarily a bad thing; the companies will argue that it adds value to the product by increasing potential functionality and the ability to satisfy the preferences of the users (at least in theory). The paid promotions also allow social media companies to make money without charging their users for memberships (like Netflix does).

A major worry is that these algorithms can have a significant impact on whether individuals are aware of certain issues at all, effectively automating the position of news director. This is a problem when these algorithms are gamed to either increase or decrease the visibility of a story, or when they systematically result in a misinformed public.

There is a Senate proposal to address this issue, put forward by Sens. Wyden (D-Ore.) and Cory Booker (D-N.J.). Politically, the proposal is dead on arrival, given the current makeup of the McConnell Senate. As with the issues of moderation, it is unlikely that there can be a robust and wide-ranging solution in the current partisan environment.

Bad-faith actors, those who demonstrably, repeatedly, and intentionally post content that misleads readers, simply shouldn’t have their content algorithmically promoted. Perhaps some minimal standards of fact-checking by an independent, non-partisan body might be feasible, if handled with a deft political touch (though First Amendment questions emerge with a vengeance here).

The current political environment does not lend itself to substantial, robust, lasting solutions. But perhaps with the right messaging, at the right moment, some simple proposals may just squeak through.

Translated from: https://arcdigital.media/fix-the-internet-7849f7c2ea74
