AI's Invisible Hand: Why democratic institutions need more access to information for accountability

This story was originally published at The Rockefeller Foundation’s website on 8 July 2020. You can view the original story here.

Ethics and self-regulation are not enough

Across the world, artificial intelligence (AI) elicits both hope and fear. AI promises to help find missing children and cure cancer. But concerns over harmful AI-driven outcomes are equally significant. Lethal autonomous weapon systems raise serious questions about the application of armed-conflict rules. Meanwhile, anticipated job losses caused by automation top many governments' agendas. Effectively, AI models govern significant decisions that impact individuals and ripple through society at large.

Yet discussions of AI's likely impact cannot remain binary, focused only on gains and losses, costs and benefits. Getting beyond hope and fear will require a deeper understanding of the decisions and actions that AI applications trigger, along with their intended and unintended consequences. The troubling reality, however, is that the full impact of the massive use of tech platforms and AI is still largely unknown. But AI is too powerful to remain invisible.

Access to information forms the bedrock of many facets of democracy and the rule of law. Facts inform public debate and evidence-based policymaking. Scrutiny by journalists and parliamentarians and oversight by regulators and judges require transparency. But private companies keep crucial information about the inner workings of AI systems under wraps. The resulting information gap paralyzes lawmakers and other watchdogs, including academics and citizens, who are unable to know of or respond to any AI impacts or missteps. And even with equal access to proprietary information, companies examine data through different lenses and with different objectives than democratic institutions, which serve and are accountable to the public.

The starting point for AI debates is equally flawed. Such conversations often focus on outcomes we can detect. Unintended consequences such as bias and discrimination creep into AI algorithms inadvertently, reflecting our offline world or erroneous data sets and coding. Many organizations focus on correcting the damage caused by discriminatory algorithms. Yet we must also know what to expect from AI when it works exactly as anticipated. Before addressing the sometimes discriminatory nature of facial recognition technologies, we need to know whether the technologies respect the right to privacy.

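To make detection concrete: one way a watchdog with access to a system's decisions could surface such unintended outcomes is a simple disparity check across groups. The sketch below is illustrative only; the loan-approval scenario and its numbers are hypothetical, not drawn from the article.

```python
# Illustrative only: a minimal disparity check a reviewer might run on a
# sample of a model's decisions. The scenario and numbers are hypothetical.
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (group_label, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# A fabricated audit sample of loan decisions for two groups.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)

print(approval_rates_by_group(sample))  # {'A': 0.8, 'B': 0.55}
# A large gap between groups flags the system for closer review;
# it does not by itself prove discrimination.
```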

But AI and new technologies disrupt not only industries. They also systemically disrupt democratic actors’ and institutions’ ability to play their respective roles.

We must devote more attention to actors’ and institutions’ ability to access AI. This is a precondition for evidence-based regulation.

The key to the algorithmic hood

AI engineers admit that, after endless iterations, no one knows where the heads and tails of algorithms end. But we can know AI's unintended outcomes only when we know what was intended in the first place. This requires transparency of training data, documentation of intended outcomes, and access to the various iterations of algorithms. Moreover, independent regulators, auditors, and other public officials need mandates and technical training for meaningful access to, and understanding of, algorithms and their implications.

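One concrete shape such documentation could take is a machine-readable record attached to every model release. The sketch below is an assumption about what that might look like, not a schema proposed in the article; all field names and the example system are hypothetical.

```python
# A minimal sketch of the audit record the paragraph above calls for:
# training-data provenance, intended outcomes, and which iteration of the
# algorithm is deployed. The schema is hypothetical, not from the article.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelAuditRecord:
    model_name: str
    version: str                  # which iteration of the algorithm this is
    training_data_sources: list  # provenance of the training data
    intended_outcomes: list      # what the system is meant to decide
    known_limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# A hypothetical outsourced government system, documented for a regulator.
record = ModelAuditRecord(
    model_name="benefits-eligibility-screener",
    version="2.3.1",
    training_data_sources=["case-files-2015-2019"],
    intended_outcomes=["rank applications for manual review"],
    known_limitations=["sparse data for rural applicants"],
)
print(record.to_json())
```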

Accountability is particularly urgent when AI-based, government-provided systems are used for tasks or services that encroach on the public sphere. Such outsourced activities include the building and defense of critical infrastructure, the development and deployment of taxpayer databases, the monitoring of traffic, the disbursement of Social Security checks, and the ensuring of cybersecurity. Many companies that provide vital technologies for these services process large amounts of data impacting entire societies. Yet the level of transparency required of democratic governments is not equally applied to the companies behind such services.

Algorithms are not merely the secret sauces that enable technology companies to make profits. They form the bedrock of our entire information ecosystem. Algorithmic processing of data impacts economic and democratic processes, fundamental rights, safety, and security. To examine whether principles such as fair competition, non-discrimination, free speech, and access to information are upheld, the proper authorities must have the freedom to look under the algorithmic hood. Self-regulation and ethics frameworks do not enable independent checks and balances on powerful private systems.

This shift to private and opaque governance that lets company code set standards and regulate essential services is one of the most significant consequences of the increased use of AI systems. Election infrastructure, political debates, health information, traffic flows, and natural-disaster warnings are all shaped by companies that are watching and shaping our digital world.

Because digitization often equals privatization, the outsourcing of governance to technology companies allows them to benefit from access to data while the public bears the cost of failures like breaches or misinformation campaigns.

Technologies and algorithms built for profit, efficiency, competitive advantage, or time spent online are not designed to safeguard or strengthen democracy. Their business models have massive privacy, democracy, and competition implications but lack matching levels of oversight. In fact, companies actively prevent insight and oversight by invoking trade-secret protections.

Transparency fosters accountability

Increasingly, trade secret protections hide the world’s most powerful algorithms and business models. These protections also obscure from public oversight the impacts companies have on the public good or the rule of law. To rebalance, we need new laws. For new evidence-based, democratically passed laws, we need meaningful access to information.

A middle way can and should be found, one that stops short of publishing the details of a business model for everyone to see while still applying oversight to algorithms whose outcomes have significant public or societal impacts. Frank Pasquale, author of The Black Box Society, sensibly speaks of the concept of qualified transparency: the level of scrutiny of algorithms should be determined by the scale of the companies processing data and the extent of their impact on the public interest. Failure to curb the misuse of trade secret protections for this purpose will leave more and more digitized and automated processes shaped inside black boxes.

The level of algorithmic scrutiny should match algorithms’ risks to and impacts on individual and collective rights. So, for example, an AI system used by schools that taps and impacts data on children requires specific oversight. An AI element in industrial processes that examines variations in the color of paint is, by contrast, of a different sensitivity. But AI stretches beyond the physical world — into the inner workings of machine learning, neural networks and algorithmic processing.

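One way to encode such graduated scrutiny is a simple tiering rule keyed to data sensitivity and the reach of a system's decisions. The tiers and criteria below are assumptions for illustration, not a framework taken from the article.

```python
# Illustrative only: scrutiny scales with data sensitivity and the reach of
# the decisions a system makes. Tiers and criteria are hypothetical.
def oversight_tier(sensitive_data: bool, affects_rights: bool,
                   scope: str) -> str:
    """scope: 'individual', 'group', or 'society'."""
    if sensitive_data and affects_rights:
        return "tier 3: mandatory independent audit"
    if affects_rights or scope in ("group", "society"):
        return "tier 2: periodic regulator review"
    return "tier 1: self-assessment with documentation"

# A school system processing children's data lands in the top tier...
print(oversight_tier(True, True, "group"))
# ...while a paint-color inspection system does not.
print(oversight_tier(False, False, "individual"))
```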

Some argue it is too early to regulate artificial intelligence, or insist that law inevitably stifles innovation. But if existing institutions are empowered to exert their oversight roles over increasingly AI-driven activities, they can regulate for antitrust, data protection, net neutrality, consumers' rights, safety and technical standards, as well as other fundamental principles.

The question is not whether AI will be regulated but who sets the rules. Nondemocratic governments are moving quickly to fill legal voids in ways that fortify their national interests. In addition to democratic law-making, governments as major procurers of new technological solutions should be responsible buyers and write public accountability into tenders.

Many agree that lawmakers were too late to regulate online platforms, microtargeting, political ads, data protection, misinformation campaigns and privacy violations. With AI, we have the opportunity to regulate in time. As we saw at Davos, even corporate leaders are calling for rules and guidance from lawmakers. They are coming to appreciate the power of the governance of technologies and how technologies embed values and set standards.

Reaching AI's potential

While much remains to be learned and researched about AI's impact on the world, a few patterns are clear. Digitization often means privatization, and AI will exacerbate that trend. With that comes a redistribution of power and the obscuring of information from the public eye. Already, trade secrets not only shield business secrets from competitors; they also blindside regulators, lawmakers, journalists, and law enforcement actors with unexpected outcomes of algorithms based on their hidden instructions. AI's opaque nature and its many new applications create extraordinary urgency to understand how its invisible power impacts society.

Only with qualified access to algorithms can we develop proper AI governance policies. Only with meaningful access to AI information can democratic actors ensure that laws apply equally online as they do offline. Promises of better health care, or of the just use of AI in extreme circumstances such as war, will reach their potential only with access to algorithmic information. Without transparency, regulation and accountability are impossible.

Technology expresses our values. How will we be remembered?

We are at a critical juncture. Our values are coded and embedded into technology applications. Today, companies as well as authoritarian regimes direct the use of technology for good or evil. Will democratic representatives step up and ensure AI’s developments respect the rule of law? We can move beyond hope and fear only when independent researchers, regulators and representatives can look under the algorithmic hood.

Translated from: https://medium.com/swlh/ais-invisible-hand-fe4be98b27f3
