The 23 Asilomar AI Principles

There are currently 23 principles, divided into three categories: Research Issues, Ethics and Values, and Longer-term Issues.

They are as follows:

Research Issues

  1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
  2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
    How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
    How can we grow our prosperity through automation while maintaining people’s resources and purpose?
    How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
    What set of values should AI be aligned with, and what legal and ethical status should it have?
  3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
  4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
  5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

  1. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  2. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  3. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  4. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  5. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  6. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  7. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  8. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  9. Shared Benefit: AI technologies should benefit and empower as many people as possible.
  10. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  11. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  12. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
  13. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

  1. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
  2. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
  3. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
  4. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
  5. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.