(Approx. 6,000 Chinese characters in the original; about a 20-minute read)
[Editor's note] We recently took on several metaverse-related projects: some have alpha versions live and under refinement, others are still works in progress. Much as we respect everyone's pursuit of creation and permanence, we would argue the soul needs somewhere to rest, and the body needs looking after too. Whether in the metaverse or in the many other fields where algorithms are applied, vast amounts of real data closely tied to our lives are at stake.
Although courageous individuals and organisations at home and abroad have spared no effort in articulating very concrete privacy and data-security demands, data security remains an abstract concept for most ordinary people. Technologies such as the metaverse and algorithms may already form formidable barriers to understanding, to say nothing of the additional layer of obscure legal "code" on top of them.
The European Commission earlier published a proposal for an EU Artificial Intelligence Act. Although the draft already shows considerable innovation and progress against the current global AI regulatory landscape, and has been promoted and emulated by many countries and regions, the ever-cautious people of the EU still seem to have plenty of objections. On the last day of November 2021, 114 civil society organisations, including European Digital Rights (EDRi), Access Now and AlgorithmWatch (full list at the end of this article 👇🏻; the signatories actually number 120 😀), jointly petitioned the European Commission, the European Parliament and the member states to amend the draft to better protect citizens' fundamental rights and prevent AI from being used to harm ordinary people.
These 114 organisations argue that the draft's approach of assigning AI systems ex ante to different risk levels, each with its own regulatory track, fails to consider that the level of risk also depends on the context in which a system is deployed, which cannot be fully determined in advance. Their statement therefore makes nine recommendations, chiefly: expanding the scope of prohibited high-risk AI systems, narrowing exemptions, and guaranteeing public participation, accountability, public transparency, clear avenues of redress, accessibility, and environmental protection.
Will most of the angels and demons of the future be people who understand technology… 😀
The full original text of the statement follows; we strongly recommend reading it.
An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement
The European Union institutions have taken a globally-significant step with the proposal for an Artificial Intelligence Act (AIA). Insofar as Artificial Intelligence (AI) systems are increasingly used in all areas of public life, it is vital that the AIA addresses the structural, societal, political and economic impacts of the use of AI, is future-proof, and prioritises the protection of fundamental rights and democratic values.
We specifically recognise that AI systems exacerbate structural imbalances of power, with harms often falling on the most marginalised in society. As such, this collective statement sets out the call of 114 civil society organisations towards an Artificial Intelligence Act that foregrounds fundamental rights. The statement outlines central recommendations to guide the European Parliament and Council in amending the European Commission’s proposal for a Regulation,[1] published on the 21st of April 2021.
We, the undersigned organisations, call on the Council of the European Union, the European Parliament, and all EU member state governments to ensure that the forthcoming Artificial Intelligence Act achieves the following 9 goals:
1. A cohesive, flexible and future-proof approach to ‘risk’ of AI systems
The current form of the AIA’s risk-based approach is dysfunctional. It delineates four levels of risk: unacceptable risk (Title II), high risk (Title III), systems that pose a risk of manipulation (Title IV), and all other AI systems. This approach of ex ante designating AI systems to different risk categories does not consider that the level of risk also depends on the context in which a system is deployed and cannot be fully determined in advance. Further, whilst the AIA includes a mechanism by which the list of ‘high-risk’ AI systems can be updated, it provides no scope for updating ‘unacceptable’ (Art. 5) and limited risk (Art. 52) lists. In addition, although Annex III can be updated to add new systems to the list of high-risk AI systems, systems can only be added within the scope of the existing eight area headings. Those headings cannot currently be modified within the framework of the AIA. These rigid aspects of the framework undermine the lasting relevance of the AIA, and in particular its capacity to respond to future developments and emerging risks for fundamental rights.
To ensure a future-proof framework, we recommend that the AIA be amended to:
- Introduce robust and consistent update mechanisms for ‘unacceptable’ and limited risk AI systems so that the lists of systems falling under these risk categories can be updated as technology develops, using the same update mechanism currently proposed to add new high-risk systems to Annex III (see Title XI). This must allow new systems to be designated as posing unacceptable risk and therefore classified as prohibited practices (Title II, Art. 5), or as posing limited risk / risk of manipulation (Title IV, Art. 52) and therefore be subject to additional transparency obligations;
- Include a list of criteria for ‘unacceptable’ and limited risk AI systems under Arts. 5 and 52 respectively, to facilitate the updating process, provide legal certainty to AI developers and promote trust by ensuring that impacted individuals are protected against dangerous and potentially manipulative applications of AI;
- Ensure that high-risk ‘areas’ (i.e. the eight area headings) listed in Annex III can be updated or modified under the Title XI mechanism to allow for modifications to the scope of existing area headings, and for new area headings to be included in the scope of ‘standalone’ high-risk AI systems;
- Expand Annex III to include a more comprehensive list of high-risk systems, such as:
  - Expanding area heading 1 to all systems which use physical, physiological, behavioural as well as biometric data, including but not limited to biometric identification, categorisation, detection and verification;
  - Adding uses of AI for the purposes of conducting predictive analytics of migration;
  - Adding new area headings relating to healthcare and insurance.
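For technically inclined readers, the tiering logic criticised above can be made concrete with a small sketch. This is a toy model in Python, not anything defined by the Act itself: every class, field and system name below is hypothetical. It shows the four risk tiers and an update mechanism that, as drafted, covers only the Annex III high-risk list; the recommendation amounts to making the Art. 5 and Art. 52 lists updateable in the same way.

```python
# Illustrative sketch only: a toy model of the AIA's four risk tiers and the
# statement's recommendation that the 'unacceptable' (Art. 5) and limited-risk
# (Art. 52) lists be updateable like Annex III. All names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum, auto


class RiskTier(Enum):
    UNACCEPTABLE = auto()   # Title II, Art. 5 – prohibited practices
    HIGH = auto()           # Title III, Annex III
    LIMITED = auto()        # Title IV, Art. 52 – transparency obligations
    MINIMAL = auto()        # all other AI systems


@dataclass
class RiskRegistry:
    """Lists of designated systems per tier, plus which tiers may be updated."""
    listed_systems: dict[RiskTier, set[str]] = field(
        default_factory=lambda: {tier: set() for tier in RiskTier})
    # As drafted, only the high-risk list (Annex III) has an update mechanism
    # (Title XI); the Art. 5 and Art. 52 lists are frozen.
    updatable_tiers: set[RiskTier] = field(
        default_factory=lambda: {RiskTier.HIGH})

    def designate(self, system: str, tier: RiskTier) -> None:
        if tier not in self.updatable_tiers:
            raise PermissionError(
                f"{tier.name} list is frozen under the current draft")
        self.listed_systems[tier].add(system)


registry = RiskRegistry()
registry.designate("CV-screening tool for recruitment", RiskTier.HIGH)  # OK

# The statement's recommendation, in this toy model: extend the Title XI
# update mechanism to the Art. 5 and Art. 52 lists as well.
registry.updatable_tiers |= {RiskTier.UNACCEPTABLE, RiskTier.LIMITED}
registry.designate("a newly emerging manipulative system", RiskTier.UNACCEPTABLE)
```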
2. Prohibitions on all AI systems posing an unacceptable risk to fundamental rights
Art. 5 of the AIA establishes the principle that some AI practices are incompatible with EU rights, freedoms and values, and should therefore be prohibited. However, in order for the AIA to truly prevent and protect people from the most rights-infringing deployments of AI, vital amendments are needed:
- Remove the high threshold for manipulative and exploitative systems under Art. 5 (1)(a) and (b) to prove that the systems operate ‘in a manner that causes or is likely to cause that person or another person physical or psychological harm’. The current framing erroneously implies that a person’s behaviour can be materially distorted or exploited in a way that does not cause harm, whereas such practices are designed and/or used to undermine the essence of our autonomy, which is in itself an impermissible harm;
- Expand the scope of Art. 5 (1)(b) to include a comprehensive set of vulnerabilities, rather than limiting it to ‘age, physical or mental disability’. If an AI system exploits the vulnerabilities of a person or group based on any sensitive or protected characteristic, including but not limited to: age, gender and gender identity, racial or ethnic origin, health status, sexual orientation, sex characteristics, social or economic status, worker status, migration status, or disability, it is fundamentally harmful and therefore must be prohibited;
- Adapt the Art. 5 (1)(c) prohibition on social scoring to extend to the range of harmful social profiling practices currently used in the European context. The prohibition should be extended to also include private actors and a number of problematic criteria must be removed, including the temporal limitation ‘over a certain period of time’ and references to ‘trustworthiness’ and ‘single score’;
- Extend the Art. 5 (1)(d) prohibition on remote biometric identification (RBI) to apply to all actors, not just law enforcement, as well as to both ‘real-time’ and ‘post’ uses, which can be equally harmful. The prohibition should include putting on the market / into service RBI systems that are reasonably foreseeable to be used in prohibited ways. The broad exceptions in Art. 5(1)(d), Art. 5(2) and Art. 5(3) undermine the necessity and proportionality requirements of the EU Charter of Fundamental Rights and should be removed;
- Prohibit the following practices that pose an unacceptable risk to fundamental rights under Art. 5:
  - The use of emotion recognition systems that claim to infer people’s emotions and mental states from physical, physiological, behavioural, as well as biometric data;
  - The use of biometric categorisation systems to track, categorise and / or judge people in publicly accessible spaces; or to categorise people on the basis of special categories of personal data, protected characteristics, or gender identity;
  - Systems which amount to AI physiognomy by using data about our bodies to make problematic inferences about personality, character, political and religious beliefs;
  - The use of AI systems by law enforcement and criminal justice authorities to make predictions, profiles or risk assessments for the purpose of predicting crimes;
  - The use of AI systems for immigration enforcement purposes to profile or risk-assess natural persons or groups in a manner that restricts the right to seek asylum and / or prejudices the fairness of migration procedures.
3. Obligations on users of high-risk AI systems to facilitate accountability to those impacted by AI systems
The AIA predominantly imposes obligations on ‘providers’ (developers) rather than on ‘users’ (deployers) of high-risk AI. While some of the risk posed by the systems listed in Annex III comes from how they are designed, significant risks stem from how they are used. This means that providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment, and therefore that users must have obligations to uphold fundamental rights as well. To remedy this, we recommend that the AIA is amended to include the following explicit obligations on users of high-risk AI systems:
- Include the obligation on users of high-risk AI systems to conduct a fundamental rights impact assessment (FRIA) before deploying any high-risk AI system. For each proposed deployment, users must designate the categories of individuals and groups likely to be impacted by the system, assess the system’s impact on fundamental rights, its accessibility for persons with disabilities, and its impact on the environment and broader public interest;
- Preliminary assessments for users of non-high-risk AI systems should be encouraged, and support should be provided to users to properly determine risk level;
- Include the obligation on users of high-risk AI systems to verify the compliance of the AI system with this Regulation before putting the system into use;
- Include the obligation on users to upload the information produced as part of the impact assessment to the EU database for stand-alone high-risk AI systems (see Section 4 for more details).
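As a rough illustration of what such a fundamental rights impact assessment might capture, here is a minimal sketch in Python. The record format and every field name are our own assumptions for illustration; the AIA draft prescribes no such schema.

```python
# Hypothetical sketch of a FRIA record, based on the fields the statement
# asks deployers ('users') to assess before putting a high-risk system to use.
from dataclasses import dataclass


@dataclass
class FundamentalRightsImpactAssessment:
    deployer: str                       # the 'user' putting the system into use
    system_purpose: str                 # context and purpose of this deployment
    impacted_groups: list[str]          # categories of individuals/groups likely affected
    fundamental_rights_impact: str      # assessed impact on fundamental rights
    accessibility_for_disabled: str     # accessibility for persons with disabilities
    environmental_impact: str           # impact on environment / broader public interest
    compliance_verified: bool = False   # verified against the Regulation before use?


fria = FundamentalRightsImpactAssessment(
    deployer="Municipal employment office",
    system_purpose="Ranking job seekers for counselling priority",
    impacted_groups=["job seekers", "long-term unemployed", "migrants"],
    fundamental_rights_impact="Risk of indirect discrimination in ranking",
    accessibility_for_disabled="Interface audited for accessibility",
    environmental_impact="Low: small model, shared infrastructure",
    compliance_verified=True,
)
# Per the statement, this record would then be uploaded to the Art. 60
# EU database (see Section 4) before the system is put into use.
```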
4. Consistent and meaningful public transparency
The EU database for stand-alone high-risk AI systems (Art. 60) provides a promising opportunity for increasing the transparency of AI systems vis-à-vis impacted individuals and civil society, and could greatly facilitate public interest research. However, the database currently only contains information on high-risk systems registered by providers, without information on the context of use. This loophole undermines the purpose of the database, as it will prevent the public from finding out where, by whom and for what purpose(s) high-risk AI systems are actually used. Further, the AIA only mandates notification to individuals impacted by AI systems listed in Art. 52. This approach is incoherent because the AIA does not require a parallel obligation to notify people impacted by the use of higher risk AI systems under Annex III.
To ensure effective transparency, we recommend amending the AIA to:
- Include an obligation on users to register deployments of high-risk AI systems in the Art. 60 database before putting them into use, and include information in the database on every specific deployment of the system, including:
  - The identity of the provider and the user; the context and purpose of deployment; the designation of impacted persons; and the results of the impact assessment referred to in Section 3 above;
- Extend the list of information that providers of high-risk AI systems must publish in the Art. 60 database to include the information referred to in Annex IV point 2(b) and point 3, namely design specifications of the high risk AI systems (including the general logic, key design and optimisation choices);
- Ensure the Art. 60 public database is user-friendly, freely accessible (including for persons with disabilities), and navigable (including by machines);
- Extend the transparency obligations specified in Art. 52 to all high-risk AI systems. Notifications presented to individuals should include the information that an AI system is in use, who its operator is, general information about the purpose of the system, information about the right to request an explanation, as well as – in case of high-risk systems – a reference or link to the relevant entry in the EU database;
- Remove the exemptions under Art. 52 for manipulative ‘AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences’, as the use of manipulative AI systems in law enforcement and criminal justice contexts poses an acute risk to fundamental rights.
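To illustrate the two layers of registration this section would create, here is a hypothetical sketch of provider-level and deployment-level database entries. The field names are assumptions for illustration only; Art. 60 and Annex IV specify the content, not a format.

```python
# Illustrative sketch (all names hypothetical) of the two kinds of entries
# Section 4 would require in the Art. 60 EU database: a provider's
# registration of a high-risk system, and a user's registration of each
# specific deployment of it.
from dataclasses import dataclass


@dataclass
class ProviderEntry:
    provider: str
    system_name: str
    intended_purpose: str
    design_specifications: str   # Annex IV point 2(b) and point 3: general
                                 # logic, key design and optimisation choices


@dataclass
class DeploymentEntry:
    provider: str
    user: str                    # the deployer
    context_and_purpose: str
    impacted_persons: list[str]  # designation of impacted persons
    fria_results: str            # results of the Section 3 impact assessment


deployment = DeploymentEntry(
    provider="ExampleVendor GmbH",
    user="Regional public authority",
    context_and_purpose="Triage of incoming benefit applications",
    impacted_persons=["applicants", "household members"],
    fria_results="See FRIA record #2021-117",
)
```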
5. Meaningful rights and redress for people impacted by AI systems
The AIA currently does not confer individual rights to people impacted by AI systems, nor does it contain any provision for individual or collective redress, or a mechanism by which people or civil society can participate in the investigatory process of high-risk AI systems. As such, the AIA does not fully address the myriad harms that arise from the opacity, complexity, scale and power imbalance in which AI systems are deployed.
To facilitate meaningful redress, we recommend:
- Include two individual rights in the AIA as a basis for judicial remedies:
  - (a) The right not to be subject to AI systems that pose an unacceptable risk or do not comply with the Act; and
  - (b) The right to be provided with a clear and intelligible explanation, in a manner that is accessible for persons with disabilities, for decisions taken with the assistance of systems within the scope of the AIA;
- Include a right to an effective remedy for those whose rights under the Regulation have been infringed as a result of the putting into service of an AI system. This remedy should be accessible for both individuals and collectives;
- The creation of a mechanism for public interest organisations to lodge a complaint with national supervisory authorities for a breach of the Regulation or for AI systems which undermine fundamental rights or the public interest. This complaint should trigger an investigation into the system as outlined in Arts. 65 and 67.
6. Accessibility throughout the AI life-cycle
The AIA lacks mandatory accessibility requirements for AI providers and users. The proposal states that providers of non-high-risk AI systems may create and implement codes of conduct which may include voluntary commitments, including related to accessibility for persons with disabilities (Recital 81, Art. 69(2)).[2] This is an inadequate approach to disability that falls short of the obligations laid out in the UN Convention on the Rights of Persons with Disabilities (CRPD) and is inconsistent with existing EU legislation such as the European Accessibility Act. The lack of accessibility requirements risks leading to the development and use of AI with further barriers for persons with disabilities.
To ensure full accessibility for AI systems, we recommend:
- The inclusion of horizontal and mainstreamed accessibility requirements for AI systems irrespective of level of risk, including for AI-related information and instruction manuals, consistent with the European Accessibility Act.
7. Sustainability and environmental protections
The AIA misses a crucial opportunity to ensure that the development and use of AI systems can be done in a sustainable, resource-friendly way which respects our planetary boundaries. As a first step towards addressing environmental dimensions of sustainability, we need transparency about the level of resources needed to develop and operate AI systems.
To address this, we recommend:
- The introduction of horizontal, public-facing transparency requirements on the resource consumption and greenhouse gas emission impacts of AI systems – irrespective of risk level – in relation to design, data management and training, application, and underlying infrastructures (hardware, data centres, etc.).
8. Improved and future-proof standards for AI systems
The AIA is derived heavily from EU product safety legislation, and as such relies on the development of harmonised standards to facilitate providers’ compliance with the Act. However, the AIA uses these technical standards to delegate key political and legal decisions about AI to the European Standardisation Organisations (Art. 3(27), Art. 40), which are opaque private bodies largely dominated by industry actors.
To ensure that political and fundamental rights decisions remain firmly within the democratic scrutiny of EU legislators, we recommend:
- Explicitly limit the harmonised standards established in Art. 40 (for Title III, Chapter 2, Requirements for high-risk AI) solely to genuinely technical aspects, ensuring that the overall authority to set standards and perform oversight of all issues which are not purely technical, such as bias mitigation (Art. 10(2)(f)), remain in the remit of the legislative process;
- Ensure that standards address the needs of all members of society via a universal design approach. For example, to ensure that AI systems and practices are accessible for persons with disabilities, the standards harmonised for the AIA must be consistent with relevant standards harmonised for the European Accessibility Act, at a minimum;
- Guarantee that relevant authorities, such as data protection authorities and equality bodies, civil society organisations, SMEs and environmental, consumer and social stakeholders are represented and enabled to effectively participate in AI standardisation and specification-setting processes and bodies;
- Ensure that harmonisation under the AIA is without prejudice to existing or future national laws relating to transparency, access to information, non-discrimination or other rights, in order to ensure that harmonisation is not misused or extended beyond the specific scope of the AIA.
9. A truly comprehensive AIA that works for everyone
Despite consistent documentation of the disproportionate negative impact AI systems can cause to already marginalised groups (in particular women*, racialised people, migrants, LGBTIQ+ people, persons with disabilities, sex workers, children and youth, older people, and poor and working class communities), significant changes are required to ensure that the AIA adequately addresses these systematic harms. To ensure the AIA works for everyone, we recommend:
- Ensure data protection and privacy for persons with disabilities. The EU General Data Protection Regulation (GDPR) has rules that apply before processing special category data of persons who are ‘physically or legally incapable of giving consent’ (Art. 9(2)(c) of the GDPR), which may be insufficient to protect the rights of those persons in certain contexts relating to the use of AI:
  - The AI Act must therefore ensure that privacy and data protection of all persons, including those under substituted decision-making regimes such as guardianships, are protected when their data are processed by AI systems.
- Remove the exemption for large-scale EU IT systems in Art. 83. Existing large-scale IT systems process vast amounts of data at a scale that poses significant risk to fundamental rights. No reasonable justification for this exemption from the AIA’s rules is provided in the legislation or can be given:
  - Any large-scale IT systems used by the EU must therefore be included in the scope of the AI Act through the deletion of the exclusion in section one of Art. 83.
- Equip enforcement bodies with the necessary resources. While Art. 59 (4) emphasises the need to equip national authorities with ‘adequate financial and human resources’, according to the Explanatory Memorandum, the Commission currently only foresees 1 to 25 full-time equivalent positions per Member State for national supervisory authorities. This is clearly insufficient:
  - The financial implications of the AIA must be reassessed and planned so as to ensure enforcement bodies and other relevant bodies have the resources to meaningfully fulfil their tasks under the AIA.
- Ensure trustworthy European AI beyond the EU. Contrary to the objective of ‘shaping global norms and standards for trustworthy AI consistent with EU values’ as stated in the Explanatory Memorandum of the AIA, its rules currently do not apply to AI providers and users established in the EU when they affect individuals in third countries:
  - The AIA should ensure that EU-based AI providers and users whose outputs affect individuals outside of the European Union are subject to the same requirements as those whose outputs affect persons within the Union to avoid risk of discrimination, surveillance, and abuse through technologies developed in the EU.
Drafted by:
European Digital Rights (EDRi), Access Now, Panoptykon Foundation, epicenter.works, AlgorithmWatch, European Disability Forum (EDF), Bits of Freedom, Fair Trials, PICUM, and ANEC (European consumer voice in standardisation).
Signed by:
1. European Digital Rights (EDRi) (European)
2. Access Now (International)
3. The App Drivers and Couriers Union (ADCU) (United Kingdom)
4. Algorights (Spain)
5. AlgorithmWatch (European)
6. All Out (International)
7. Amnesty International (International)
8. ARTICLE 19 (International)
9. Asociación Salud y Familia (Spain)
10. Aspiration (United States)
11. Association for action against violence and trafficking in human beings - Open Gate / La Strada Macedonia (North Macedonia)
12. Association for Juridical Studies on Immigration (ASGI) (Italy)
13. Association for Monitoring Equal Rights (Turkey)
14. Association of citizens for promotion and protection of cultural and spiritual values - Legis Skopje (North Macedonia)
15. Associazione Certi Diritti (Italy)
16. Associazione Luca Coscioni (Italy)
17. Baobab Experience (Italy)
18. Belgian Disability Forum asbl (BDF) (Belgium)
19. Big Brother Watch (United Kingdom)
20. Bits of Freedom (The Netherlands)
21. Border Violence Monitoring Network (European)
22. Campagna LasciateCIEntrare (Italy)
23. Center for AI and Digital Policy (CAIDP) (International)
24. Chaos Computer Club (CCC) (Germany)
25. Chaos Computer Club Lëtzebuerg (Luxembourg)
26. CILD – Italian Coalition for Civil Liberties and Rights (Italy)
27. Controle Alt Delete (The Netherlands)
28. D3 - Defesa dos Direitos Digitais (Portugal)
29. D64 - Zentrum für digitalen Fortschritt (Center for Digital Progress) (Germany)
30. DataEthics.eu (European)
31. Digital Defenders Partnership (International)
32. Digitalcourage (Germany)
33. Digitale Freiheit e.V. (Germany)
34. Digitale Gesellschaft (Germany)
35. Digitale Gesellschaft (Schweiz) (Switzerland)
36. DIMMONS Digital Commons Research Group (Spain)
37. Disabled Peoples Organisations (Denmark)
38. DonesTech (Spain)
39. Državljan D / Citizen D (Slovenia)
40. Each One Teach One e.V. (Germany)
41. Elektronisk Forpost Norge (EFN) (Norway)
42. epicenter.works (Austria)
43. Equinox Initiative for Racial Justice (European)
44. Eticas Foundation (Spain)
45. Eumans (European)
46. European Anti-Poverty Network (European)
47. European Center for Not-for-Profit Law Stichting (International)
48. European Civic Forum (European)
49. European Disability Forum (EDF) (European)
50. European Network Against Racism (ENAR) (European)
51. European Network on Religion and Belief (European)
52. European Network on Statelessness (European)
53. European Sex Workers’ Rights Alliance (European)
54. European Youth Forum (European)
55. Fair Trials (European)
56. FAIRWORK Belgium (Belgium)
57. FIDH (International Federation for Human rights) (International)
58. Fundación Secretariado Gitano (Spain)
59. Future of Life Institute (International)
60. GHETT’UP (France)
61. Greek Forum of Migrants (Greece)
62. Greek Forum of Refugees (European)
63. Health Action International (The Netherlands)
64. Helsinki Foundation for Human Rights (Poland)
65. Hermes Center (Italy)
66. Hivos (International)
67. Homo Digitalis (Greece)
68. Human Rights Association (Turkey)
69. Human Rights House Zagreb (Croatia)
70. HumanRights360 (Greece / European)
71. Human Rights Watch (International)
72. ILGA-Europe - The European Region of the International Lesbian, Gay, Bisexual, Trans and Intersex Association (European)
73. Implementation Team of the Decade of People of African Descent (Spain)
74. info.nodes (Italy)
75. Interferencias (Spain)
76. International Commission of Jurists (NJCM) - Dutch Section (The Netherlands)
77. Irish Council for Civil Liberties (ICCL) (Ireland)
78. IT-Pol Denmark (Denmark)
79. JustX (European)
80. JustPeace Labs (International)
81. KOK - German NGO Network against Trafficking in Human Beings (Germany)
82. Lafede.cat – organitzacions per a la justícia global (Spain)
83. Ligue des droits de l'Homme (LDH) (France)
84. Ligue des droits humains (Belgium)
85. Maruf Foundation (The Netherlands)
86. Mediterranea Saving Humans Aps (Italy / International)
87. Melitea (European)
88. Mnemonic (Germany / International)
89. Moje Państwo Foundation (Poland)
90. Montreal AI Ethics Institute (Canada)
91. Movement of Asylum Seekers in Ireland (MASI) (Ireland)
92. Netwerk Democratie (The Netherlands)
93. NOVACT (Spain / International)
94. OMEP - Organisation Mondiale pour l'Education Prescolaire / World Organization for Early Childhood Education (International)
95. Open Knowledge Foundation (International)
96. Open Society European Policy Institute (OSEPI) (International)
97. OpenMedia (International)
98. Panoptykon Foundation (Poland)
99. The Platform for International Cooperation on Undocumented Migrants (PICUM) (International)
100. Privacy International (International)
101. Privacy Network (Italy)
102. Racism and Technology Center (The Netherlands)
103. Ranking Digital Rights (International)
104. Refugee Law Lab, York University (International)
105. Refugees in danger (Denmark)
106. Science for Democracy (European)
107. SHARE Foundation (Serbia)
108. SOLIDAR & SOLIDAR Foundation (European)
109. Statewatch (European)
110. Stop Wapenhandel (The Netherlands)
111. StraLi (European)
112. SUPERRR Lab (Germany)
113. Symbiosis-School of Political Studies in Greece, Council of Europe Network (Greece)
114. Taylor Bennett Foundation (United Kingdom)
115. UNI Europa (European)
116. Universidad y Ciencia Somosindicalistas (Spain)
117. Vrijschrift.org (The Netherlands)
118. WeMove Europe (European)
119. Worker Info Exchange (International)
120. Xnet (Spain)
* This statement outlines the baseline agreement amongst the signatory civil society organisations on the EU’s proposed Artificial Intelligence Act. However, some of the signatories have positions that are in places more specific and extensive than those outlined here; this statement does not serve to limit this in any way.
Notes:
[1] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final, 21 April 2021.
[2] Important issues such as environmental sustainability, stakeholders’ participation in the design and development of AI systems, and diversity of development teams are also suggested in the AIA as merely voluntary measures.
References:
[1] Original text of the proposed EU Artificial Intelligence Act: EUR-Lex - 52021PC0206 - EN - EUR-Lex.
[2] An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement, https://edri.org/wp-content/uploads/2021/11/Political-statement-on-AI-Act.pdf.