It's 11 PM. Do You Know Where Your AI Is and What It's Doing?

Deloitte’s State of AI in the Enterprise report highlights ethical and regulatory risks in artificial intelligence adoption.

As artificial intelligence becomes more and more pervasive throughout the world, enterprise tech leaders have moved beyond asking what they can do with this powerful new technology to asking how its use will affect their company and the other things they care about: individual privacy, workers' jobs, misuse by authoritarian governments, transparency, social responsibility, accountability, and even the future of work itself.

In short, some very human elements have now become part of the algorithm. That's perhaps the key finding of Deloitte's just-published third annual State of AI in the Enterprise, a survey of 2,737 IT and line-of-business executives in nine countries examining their sentiments and practices regarding AI technologies.

The study of enterprise AI adopters found that 95 percent of respondents have concerns about ethical risks of the technology and more than 56 percent agree that their organization is slowing adoption of AI technologies because of emerging risks.

The authors of the report write:

Despite strong enthusiasm for their AI efforts, adopters face reservations as well. In fact, they rank managing AI-related risks as the top challenge for their AI initiatives, tied with persistent difficulties of data management and integrating AI into their company's processes.

Additionally, a troubling preparedness gap exists for adopters across a wide range of these potential strategic, operational, and ethical risks. More than half of adopters report "major" or "extreme" concerns about these potential risks for their AI initiatives, while only four in 10 adopters rate their organization as "fully prepared" to address them.

The high level of fear about emerging risks appears to be inhibiting adoption of AI. Safety concerns were cited by a quarter of respondents as the single biggest ethical risk. Other concerns include lack of explainability and transparency in AI-derived decisions, the elimination of jobs due to AI-driven automation, and the use of AI to manipulate people's thinking and behavior.

Despite these worries, only about a third of adopters are actively addressing the risks — 36 percent are establishing policies or a board to guide AI ethics, and the same portion say they’re collaborating with external parties on leading practices.

You will probably not be surprised to learn that Deloitte is one of those external parties ready to lend a hand.

In addition to the new enterprise AI report, the firm has also recently unveiled the Deloitte AI Institute, to corral the best thinking and best practices on AI, as well as a new "Trustworthy AI" framework to guide organizations on how to apply AI responsibly and ethically within their businesses.

The framework is meant to help organizations manage common risks and challenges related to AI ethics and governance, including checks for fair and impartial use, transparency and explainable AI, responsibility and accountability, security, reliability, and privacy. Said Beena Ammanath, Deloitte AI Institute executive director:

"Organizations ready to embrace AI must start by putting trust at the center. We are devoted to not only helping our clients navigate AI ethics, but also in maintaining an ethical mindset within our own organization."
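
To make one of the framework's dimensions, the check for fair and impartial use, a little more concrete, here is a minimal, hypothetical sketch of one common fairness test: the demographic parity gap, i.e., the difference in positive-prediction rates between groups. This is not Deloitte's actual tooling; the function name, sample data, and any threshold for escalation are assumptions for illustration only.

```python
# Hypothetical "fair and impartial use" check: demographic parity gap.
# Not Deloitte's Trustworthy AI tooling; an illustrative sketch only.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (largest gap in positive-prediction rates, per-group rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]                   # hypothetical model outputs (1 = approve)
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]   # hypothetical demographic labels
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)         # {'A': 0.75, 'B': 0.25}
    print("gap =", gap)  # 0.5 -- a gap this large would warrant human review
```

In a governance process, a gap above an agreed threshold might route the model to a human ethics review before deployment; the metric itself is only one of many possible fairness checks.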

One company cited as getting ethical AI adoption right is Workday, the provider of cloud-based enterprise software for financial management and human capital management. It has committed to a set of principles to ensure that its AI-derived recommendations are impartial and that it is practicing good data stewardship. Workday is also embedding “ethics-by-design controls” into its product development process. Said Barbara Cosgrove, Chief Privacy Officer, Workday:

"Integrating 'ethics' into technology products can feel abstract for engineers and developers. While many technology companies are working independently on ways to do this in concrete and tangible ways, it is imperative that we break out of those silos and share best practices. By working collaboratively to learn from each other, we can raise the bar for the industry as a whole — and a good place to start is focusing on the things that earn trust."

Takeaway

Of all the modern dual-use technologies, it is probably fair to say that artificial intelligence has the most potential to do both good and evil. The same algorithms that run factory floors, automate tedious business processes, help farmers be more productive, support science and innovation, monitor extreme weather and climate change, improve health delivery, support safety, and power thousands of other useful tools can also be used to invade the privacy and track the behavior of private citizens. It is a dream tool for law enforcement and authoritarian regimes who want to keep their knees on the necks of their people. It is also biased in dangerous ways by assumptions that are built in, either accidentally or on purpose.

And it is everywhere. One of the funny/not-funny findings of the Deloitte report is that many organizations have no idea how much AI they are using, or where:

Knowing where AI exists is a prerequisite to managing its risks. One key step for mitigating risk is to keep a formal inventory of all of the organization's AI models, algorithms, and systems. It can be difficult for companies to track all uses of AI — one bank "made an inventory of all their models that use advanced or AI-powered algorithms and found a staggering total of 20,000."

That is truly frightening.

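The report does not prescribe what such an inventory should look like, but even a small structured registry makes the "where is our AI?" question answerable. Below is a minimal, hypothetical sketch of one way to record it; the field names and risk tiers are illustrative assumptions, not a standard schema or the bank's actual system.

```python
# Hypothetical sketch of a formal AI model inventory, as the report recommends.
# Field names and risk tiers are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelRecord:
    name: str            # human-readable identifier
    owner: str           # accountable team or person
    purpose: str         # decision or process the model supports
    data_sources: list   # datasets used for training/scoring
    risk_tier: str       # e.g. "low", "medium", "high"
    last_reviewed: str   # ISO date of the last ethics/risk review

inventory = [
    ModelRecord("credit-scoring-v3", "Retail Lending", "loan approval support",
                ["applications_2019", "bureau_feed"], "high", "2020-06-01"),
    ModelRecord("ticket-router-v1", "IT Operations", "helpdesk ticket triage",
                ["helpdesk_logs"], "low", "2020-03-15"),
]

# Export the registry so risk and compliance teams can audit it.
print(json.dumps([asdict(m) for m in inventory], indent=2))
```

Exporting the registry as JSON is just one option; the point is that every model has a named owner, a stated purpose, and a review date that risk teams can audit.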

Translated from: https://medium.com/datadriveninvestor/its-11-pm-do-you-where-your-ai-is-and-what-it-s-doing-a0ddc69bc1e6
