How to Build Trust in Artificial Intelligence Solutions

I interviewed Marisa Tschopp, an organizational psychologist conducting research on Artificial Intelligence from a humanities perspective, with a focus on psychological and ethical questions. She is also a corporate researcher at scip AG, a technology and cybersecurity company based in Zurich, and the Women in AI Ambassador for Switzerland.

Please describe who you are in 2–3 sentences.

Currently, I am focusing on trust in AI, Autonomous Weapons Systems, and our AIQ project, which is a psychometric method to measure the skills of digital assistants (conversational AI), like Siri or Alexa.

So, obviously, I’m a researcher, but I’m also a mother of two toddlers, a wife, a daughter, a sister, a volleyball player, hopefully, a fun friend to be with, an activist, an idealist, a collaborator, and a semi-professional Sherpa (I love hiking in the Swiss Alps and therefore have to carry my kids on my back!).

Let us start with understanding trust better. What is trust and why is it important, especially in the context of AI?

In the context of AI, there is a critical underlying assumption: “No trust, No Use”. Since AI holds great promises (as well as dangers), tech-companies and AI enthusiasts are especially concerned about how to build trust in AI to foster adoption or usage.

Trust seems like the lasting, kind of mysterious, competitive edge.

Without trust, there would be no family, no houses, no markets, no religion, no politics, no rocket science.

According to trust researcher Rachel Botsman,

Trust is the social glue that enables humankind to progress through interaction with each other and the environment, including technology.

Trust can be seen as a psychological mechanism to cope with uncertainty and is located somewhere between the known and the unknown.

[Image: Rachel Botsman]

Trust is deeply ingrained in our personality. We are basically born with a tendency to trust or distrust people (or animals, or other things).

Take, for example, this random picture of a woman: do you trust her?

[Image: Nannie Doss]

We humans have the unique capacity to tell at a glance whether we trust a person or not. We look at the facial expression, body posture, or the context (background, surroundings, etc.), and we compare it with memories or past experiences in split seconds, such as "she reminds me of my grandmother."

Generally speaking, what we know is that we tend to trust people who are more like ourselves. One reason is that it is easier for us to predict the future behavior or reactions of people who are similar to us, which lowers the emotional risk of being hurt.

What we do not know is how accurate our intuition is. Did you trust the woman above? Maybe yes, because she is smiling and relaxed. Maybe no, because you were already expecting some kind of trick here, as I am a psychologist.

This woman is not very trustworthy. She died several years ago in prison as one of the most famous female serial killers.

In the context of AI, if you ask whether we can trust AI as a technology, then compared to other technologies, it is decisive to understand that AI (for example Machine Learning, let's say image classification) often does not behave exactly the way it is intended, makes mistakes, or performs unethically. For example, when Black people are classified as gorillas or birds as missiles.

The processes and outcomes are hard to explain, sometimes not known at all, and hence not well predictable. Trusting this technology incorporates a far higher risk.
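
The following is a small illustrative sketch (my addition, not from the interview; the function and threshold are hypothetical) of why this unpredictability matters in practice: an image classifier returns probabilities rather than guarantees, and one common mitigation is to abstain and defer to a human reviewer when confidence is low.

```python
# Toy illustration (hypothetical, not a real model): a classifier returns a
# probability distribution, not a guarantee. A common mitigation is to abstain
# below a confidence threshold and route the case to a human reviewer.

def classify_with_abstention(probabilities: dict, threshold: float = 0.9) -> str:
    """Return the top label only if the model is confident enough."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "abstain (route to human review)"
    return label

# Low confidence: the system defers to a human.
print(classify_with_abstention({"bird": 0.55, "missile": 0.45}))
# High confidence: a label is returned, but the model can still be confidently
# wrong (e.g., a bird scored as a missile), so the threshold only reduces risk.
print(classify_with_abstention({"missile": 0.97, "bird": 0.03}))
```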

So far, research has agreed upon three main pillars that need to be addressed to build trust:

1.) Performance: Does it perform well? Is it safe? Is it built correctly?

2.) Process: Does it perform the way we intended? Can we predict the outcome?

3.) Purpose: Do I have a good feeling about the intent of the program and the provider? Does it adhere to ethical standards? Is it trustworthy?

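To make the pillars concrete, here is a minimal sketch (my own illustrative framing, not a published standard) of how a product team might track them as a release checklist:

```python
# Hypothetical sketch: the three trust pillars as a simple release checklist.
from dataclasses import dataclass, field

def _open_questions(*questions: str) -> dict:
    """Each question starts unanswered (False) until the team can affirm it."""
    return {q: False for q in questions}

@dataclass
class TrustChecklist:
    performance: dict = field(default_factory=lambda: _open_questions(
        "Does it perform well?", "Is it safe?", "Is it built correctly?"))
    process: dict = field(default_factory=lambda: _open_questions(
        "Does it perform the way we intended?", "Can we predict the outcome?"))
    purpose: dict = field(default_factory=lambda: _open_questions(
        "Is the intent of program and provider sound?",
        "Does it adhere to ethical standards?"))

    def ready(self) -> bool:
        """All three pillars must be satisfied, not just technical performance."""
        return all(all(pillar.values())
                   for pillar in (self.performance, self.process, self.purpose))

checklist = TrustChecklist()
print(checklist.ready())  # False until every question can be answered "yes"
```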

It is often said that AI positively transforms almost every sector from medicine to urban planning, but, very importantly, it also brings questionable or even dangerous implications with it, from super-precise hacking of data platforms to the surveillance state and the loss of privacy without opportunities for public consent. So, next to technical issues like a lack of predictability and explainability, the prospect of negative outcomes, hype, complexity, and disagreement over definitions and applications all lead to skepticism and distrust.

How should non-experts, business owners, and others approach this topic?

AI is already part of our daily lives, and it is increasingly being used in decision-making when it comes to education, policing, justice, recruitment, or health.

I do not have a tech background either; I am a psychologist, so I see things from a different perspective, and it may be easier for me to feel empathy with the majority of people, who have no idea how to code or what an algorithm is.

What fascinates me most and drives my research is the question of how trust is established in the first place. You don't really know the person or the product, its values or competencies. It is that first little leap of saying "yes, I'll go for it."

It is still a little mysterious how this trust develops in the first place.

How can we best cope with it? I think it is all about education, communication, and critical thinking. But there is something restricting these skills and our will to engage in discussions about AI.

From a psychological perspective, this is one of the big problems: we are lacking cognitive freedom of choice. What concerns me is that we are moving towards a do-or-die relationship with AI. It will be almost impossible to get away from AI, as much as we cannot get away from climate change.

The fact that we are forced or threatened, by threatening Terminator images or the constant news of man losing against machine, leads to resistance, denial, cynicism, and downplaying. This is called reactance, a psychological phenomenon. When reactance occurs, we choose these behaviors, even if they are totally irrational, simply to restore our cognitive freedom of choice and take back our sense of control.

This can be a big challenge, especially in consumer psychology, when you aim to convince customers to buy your products, whether it's a car or a robotic vacuum cleaner.

Consumers, like all human beings, want the freedom of choice, and we need to figure out ways to make people want to explore AI by themselves, not because they are forced to do so.

That is why management often applies bottom-up approaches within their company, rather than top-down decisions.

Through this participative way of decision-making, you aim to have everyone on board, sharing your vision and goals.

Right now, one of the key issues is to change the way we talk about AI. I think we massively have to change the tone of the conversation about AI. We must move away from hype, threat, and fear towards clear facts, a vision, and a "why," to create our own relationship with AI, and thus a new level of trust.

This is also my vision as the ambassador for the Women in AI network, a nonprofit working towards a gender-inclusive AI that benefits global society.

Imagine a company is building a Machine Learning based product and just started prototyping. What steps would you suggest from a trust-building perspective?

What I learned from a philosopher is to always ask why, from the beginning to the very end, and continuously at all milestones of the project.

Ask what the intended consequences are, and speculate about all possible unintended consequences.

From the design perspective, it is all about aligning your design to at least minimal ethical standards, to make sure you are building a trustworthy product. However, keep in mind that technical performance (quality), security, and safety are all indispensable prerequisites.

Going back to the beginning, it means having the three pillars, performance, process, and purpose, constantly in mind. Ethics in AI is all about integrity and authenticity.

In the end, the task is to build a great, safe, and ethically correct product. The focus naturally is on building a good product first; then come security and the ethical considerations.

It is natural to focus on the technical requirements first, whilst, counterintuitively, the latter should rather be looked at first. Two years ago, when we started our trust research, our idea was to have something like a proof of quality to signal to users or customers that this is a trustworthy product. That is why we invented the AIQ, a psychometric measurement method to state, compare, and track the skills of digital assistants. However, we were a bit too early, as the market is still in a development phase rather than actually improving existing conversational AI. We, too, focused on the technical skillsets at first, rather than the actual decisive soft factors of how trust is built and developed.
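
To give a flavor of the psychometric idea behind the AIQ, here is an illustrative sketch (my addition; it is not the actual AIQ instrument, and the item battery, domains, and names are all hypothetical): score an assistant's answers to a fixed battery of items, then aggregate per skill domain so results can be stated, compared, and tracked over time.

```python
# Illustrative sketch only -- NOT the actual AIQ instrument.
from statistics import mean

# Hypothetical item battery: (skill domain, question, expected answer fragment)
ITEMS = [
    ("knowledge", "What is the capital of Switzerland?", "bern"),
    ("arithmetic", "What is 12 times 8?", "96"),
    ("context", "And that divided by 4?", "24"),  # tests follow-up handling
]

def score_assistant(ask) -> dict:
    """`ask` is any callable mapping a question to the assistant's answer."""
    per_domain = {}
    for domain, question, expected in ITEMS:
        answer = ask(question).strip().lower()
        per_domain.setdefault(domain, []).append(1.0 if expected in answer else 0.0)
    # Mean score per skill domain: comparable across assistants and over time.
    return {domain: mean(scores) for domain, scores in per_domain.items()}

# Example with a stubbed assistant that cannot handle the follow-up question:
stub_answers = {
    "What is the capital of Switzerland?": "Bern",
    "What is 12 times 8?": "It is 96.",
    "And that divided by 4?": "Sorry, I don't understand.",
}
print(score_assistant(lambda q: stub_answers[q]))
# -> {'knowledge': 1.0, 'arithmetic': 1.0, 'context': 0.0}
```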

[Image: Swiss Cognitive]

Here is a podcast episode that talks about the topic in more detail.

Now we are stepping back and focusing on the less obvious factors that influence trust-building in the context of AI. These are the fine-grained influencing factors on a micro level of perception, from personality traits to bias, to past experiences, to socialization and upbringing. We are just gathering data to explore these antecedents of trust in AI through associations and qualitative and quantitative methods.

[Image: Marisa Tschopp]

Is it possible to change the image of AI or influence consumer behavior after product launch?

If you want to explore your trust image, you need to look at questions and definitions from various perspectives: you may want to look at the individual person (like characteristics of your target group or employees), and you can look at the process of building, maintaining, and developing trust from a consumer perspective, as well as destroying and regaining it. You have to be clear about the actors and roles (who is to be trusted?) and the situation: is it a high-risk situation, like self-driving cars, or are we talking about an AI-driven chatbot in customer service?

In the end, the answer is yes, although in both directions, for better or worse. We have to be very sensitive, neutral, or, as Hans Rosling says, "factful" when we talk about AI. Research is pretty clear on what to do to sustain a relationship, or how to act if you have broken a trust relationship, and I am not sure AI is any different from other technologies here. A breach is a breach, whether it is Facebook's data breach or a misguided missile.

If there was a trust breach, you must communicate instantly, directly, and clearly what happened; explain yourself without being defensive; be authentic and truthful; and ask for what is needed to get another chance.

What books and other resources would you recommend for a business owner or product manager to learn about trust-building in AI?

I would suggest checking the European High-Level Expert Group on AI. They just released a framework for building trustworthy AI. The framework has three main pieces, comprising lawful AI, ethical AI, and robust AI. The key points of the latter two are discussed in the report.

Another great set of comprehensive, crowd-sourced standards comes from the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which is called Ethically Aligned Design.

Rachel Botsman: Who Can You Trust? She writes about how trust is built, lost, and restored in the digital age, and she also has several highly recommended TED Talks.

This interview was done with Michael Burkhardt from Omdena — an innovation platform where AI engineers and domain experts collaborate to build solutions to real-world problems.

Translated from: https://towardsdatascience.com/how-to-build-trust-in-artificial-intelligence-solutions-83ca20c39f0
