Can Humanity Trust AI?

As the applications of artificial intelligence in both business and society become more ambitious, digital ethics has become a growing concern for individuals and governments alike. Ethics concerns a set of social conventions that develop “our capacity to think critically about moral values and direct our actions in terms of such values”.[i] What is the specificity of digital ethics, and how do these considerations differ from those of ethics in general? These concerns are more a question of degree than of kind, for each new generation of information technology introduces increasingly salient challenges for humanity. In this brief contribution, we survey the origins of four key questions that help frame the current debate.

Can AI understand human values?

Coming out of the Second World War, Norbert Wiener’s work on cybernetics focused on the question of how far humanity can trust automation. During his work on anti-missile communication systems, Wiener became intrigued by the interdependence between systems of communication and control in both humans and machines. Although he was a strong advocate of developing automation, his writing in The Human Use of Human Beings expressed growing concern about the dehumanization and subordination of our species. He outlined, in particular, the dangers of entrusting decisions to computer programs that cannot reason abstractly and are consequently highly unlikely to understand the nature of human values.

Does information technology create new ethical issues?

Thirty years later, Walter Maner, a medical professor and researcher, coined the term “computer ethics” to describe the ethical problems “aggravated, transformed or created by computer technology”. In focusing on the specificity of the ethical decisions that arise from the use of computer technology, he argued that computer applications were fundamentally different from previous technological innovations in that their design, complexity, and malleability allowed them to be applied in countless domains. He suggested that the resulting ethical decisions often had to be made in policy vacuums, whenever government and society fell behind technological innovation. He further argued that the nature and scope of ethical dilemmas were often distorted when discussing computer technology. He concluded that the involvement of computers in human conduct can create entirely new ethical issues.

Can technology be programmed to emulate humanity?

Is information technology by nature ethically neutral, or does its use inherently produce ethical consequences? In his seminal article “What is Computer Ethics?”, published four decades ago, James Moor summed up his thoughts on the ethical footprint of technology. He suggested that, at a minimum, computers are ethical-impact agents, for programs and algorithms challenge human nature whether this is intended or not. He foresaw that computers could be programmed to be implicit ethical agents; in other words, humanity could choose to regulate information technology to avoid unethical outcomes. He also evoked the more challenging possibility of creating computers as explicit ethical agents, using algorithms programmed to act ethically. Finally, he envisioned a world of full ethical agents in which machines would be capable of elucidating ethical choices compatible with humanity.

Are digital ethics today a proxy for moral conduct?

If artificial intelligence is all about context, how does context influence our views of ethics? Krystyna Górniak-Kocikowska’s major contribution to the ethics debate is referred to as the Górniak hypothesis: computer ethics is the future of applied ethics and will soon become the foundation of global ethical systems.[v] Górniak argued that ethical considerations have in the past reflected local experiences, histories, and customs, which in turn explains why ethical positions often vary from one culture to another. She suggested that computer ethics is not bound by such local constraints, i.e. computer logic constitutes a truly universal view of how humans and machines can interact. She concluded that the pervasiveness of automation would lead to “computer ethics” becoming a proxy for ethics as the foundation of moral conduct in the information age. Although the Górniak hypothesis has been contested by authors such as Deborah Johnson, computer ethics is undeniably tied to the evolution of global business, education, and legislation.

Framing the current debate over digital ethics

The discussion of digital ethics today takes each of these arguments one step further. The debate goes beyond the ethical consequences of computers and automation to the ethics of data-driven decision-making. The ethical challenges do not arise simply from the quality and relevance of the data itself, but from how human beings use data to perceive, predict, and evaluate courses of action. Each new generation of information technology enlarges the objectives and application areas of both human and artificial intelligence, and in doing so modifies the context in which ethical choices arise. The end goal of digital ethics is not to separate right from wrong, but to construct appropriate frameworks through which both developers and end-users of digital technologies can qualify acceptable data practices.

Dr. Lee Schlenker is a Professor of Business Analytics and Digital Transformation, and a Principal in the Business Analytics Institute http://baieurope.com. His LinkedIn profile can be viewed at www.linkedin.com/in/leeschlenker. You can follow us on Twitter at https://twitter.com/DSign4Analytics.

Interested in learning more about our thoughts on digital ethics? Our recent contributions include:

Addressing AI’s Hidden Agenda

The Ethics of Data Science

Data Science and the DPO

Identity, Trust, and Value(s): the future of Open Banking

What does human-centric AI mean to management?

— — — — — —

[i] Churchill, L.R., “Are We Professionals? A Critical Look at the Social Role of Bioethicists”. Daedalus, 1999, pp. 253–274.

[ii] Bynum, Terrell, “Computer and Information Ethics”, The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.).

Translated from: https://towardsdatascience.com/can-humanity-trust-ai-b1e0fa7b024d
