Pause Giant AI Experiments: An Open Letter (Future of Life Institute, March 22, 2023; signatories include Elon Musk)

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves:

  • Should we let machines flood our information channels with propaganda and untruth?
  • Should we automate away all the jobs, including the fulfilling ones?
  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
  • Should we risk loss of control of our civilization?

Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

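As a purely illustrative aside, not part of the letter itself: the quoted OpenAI statement turns on a measurable quantity, training compute. The sketch below is a minimal back-of-the-envelope, assuming the common rule of thumb that training compute is roughly 6 × parameters × training tokens FLOPs; every concrete number in it, including the 2x-per-year cap, is hypothetical.

```python
# Purely illustrative, not from the letter: a back-of-the-envelope sense of what
# "limiting the rate of growth of compute" could mean in practice. Assumes the
# common rule of thumb that training compute C ≈ 6 * N * D FLOPs, where N is the
# parameter count and D the number of training tokens. All concrete numbers
# below (parameters, tokens, the 2x-per-year cap) are hypothetical.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs using C ≈ 6 * N * D."""
    return 6.0 * n_params * n_tokens

# Hypothetical "current" frontier run: 70 billion parameters, 1.4 trillion tokens.
current_run = training_flops(70e9, 1.4e12)

# A hypothetical governance rule capping compute growth at 2x per year would
# bound the largest training run permitted one year later.
growth_cap_per_year = 2.0
allowed_next_year = growth_cap_per_year * current_run

print(f"current run:        {current_run:.2e} FLOPs")
print(f"cap one year later: {allowed_next_year:.2e} FLOPs (hypothetical 2x/year cap)")
```

The point is only that a "rate of growth of compute" is something labs and regulators could quantify and audit, which is what the call for independent review presupposes.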

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

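As a purely illustrative aside, not part of the letter itself: the governance list above names provenance and watermarking systems without spelling them out. The toy sketch below shows only the basic attach-and-verify pattern behind content provenance, using a keyed hash from the Python standard library; the key, tag format, and sample text are hypothetical, and real proposals (statistical token-level watermarks, signed content credentials) are substantially more robust.

```python
# Illustrative only: a toy "provenance tag" for generated text, hinting at what
# "provenance and watermarking systems" involve. The key, tag format, and
# sample text below are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"hypothetical-lab-signing-key"  # in practice, a managed secret

def tag_output(text: str) -> str:
    """Return the text plus an HMAC provenance tag computed over it."""
    digest = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n[provenance:{digest}]"

def verify_output(tagged: str) -> bool:
    """Check whether the trailing provenance tag matches the text."""
    text, _, tag_line = tagged.rpartition("\n[provenance:")
    if not tag_line.endswith("]"):
        return False
    expected = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag_line[:-1])

sample = tag_output("Example of model-generated text.")
print(verify_output(sample))                                # True: tag matches the text
print(verify_output(sample.replace("Example", "Edited")))   # False: content was altered
```

Tampering with either the text or the tag makes verification fail, which is the property that distinguishing real from synthetic content, or tracking model leaks, would build on.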

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.


Notes and references

  • [1]

    Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

    Bostrom, N. (2016). Superintelligence. Oxford University Press.

    Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).

    Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353.

    Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. Norton & Company.

    Cohen, M., et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3), 282-293.

    Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.

    Hendrycks, D., & Mazeika, M. (2022). X-Risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.

    Ngo, R. (2022). The Alignment Problem from a Deep Learning Perspective. arXiv preprint arXiv:2209.00626.

    Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

    Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

    Weidinger, L., et al. (2021). Ethical and Social Risks of Harm from Language Models. arXiv preprint arXiv:2112.04359.

  • [2]

    Ordonez, V., et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’. ABC News.

    Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.

  • [3]

    Bubeck, S., et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712. (Note: Microsoft's 150-plus-page evaluation report on GPT-4.)

    OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.

  • [4]

    Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems “function appropriately and do not pose unreasonable safety risk”.

  • [5]

    Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

P.S. The references marked in red are the papers I personally consider the most important. (April 21, 2023, 11:52 a.m.)
