Is Trust in AI Trustworthy?

In my last article on Trust in AI, I wrote about how building trust in AI needs to include both 1) the people and institutions behind the technology and AI (those selling, making, using it) and 2) the technology of AI systems and solutions. But before we run off and collectively open shop for the “trust” business or lay out a blueprint and start coding trust into our behavior or our technology, let’s take the time to understand trust.

The primary accountability and responsibility for trust should and always does lie with the first group, the people. Why? Because no matter what tools and methodologies we build into our technology, directly or indirectly, they are always a product of our goals. Take for example Microsoft’s recent spinoff of Xiaoice (or its earlier incarnation, Zo), a problematic chat bot with a teenage girl persona. Over five years were invested in developing this chat bot’s several incarnations. Why wasn’t anyone who was leading the charge on trustworthy AI in that ecosystem able to raise enough concerns about modeling a chat bot after teenage girls? These products take not only Microsoft but the entire chat bot industry — and humanity — further away from trust. Couldn’t they have innovated, shown off their brilliance, and even made money using a different, less problematic and opportunistic persona?

Why do we keep getting this wrong? Is it because we don’t understand trust? Is it because people making technology and business decisions are caught up in their insular world, unable to perceive anything beyond its shiny possibilities? Because accountability and responsibility, outside the legal domain, are not part of the technology-building ecosystem? Even as we inject every city, every home, every hiring decision, and the criminal justice system with more and more AI interactions and transactions, does it not occur to us to take the time to “listen” to the people we want to serve?

Photo by Ali Pazani from Pexels

According to Edelman’s 2020 Trust Barometer global survey:

  • 61% of people felt that the pace of change in technology is too fast and that governments do not understand new technologies well enough to regulate them effectively.
  • 66% worried that technology will make it impossible to know if what people are seeing or hearing is real.

With AI, we are innovating reality itself. The greatest risk of all is that we who innovate will lose the public’s trust forever — and at some point there will be no way for us to correct course. If we do not reconsider the necessity of trust in tech now, we may not get a second chance.

The Complex Nature of Trust

Trust is about meeting the expectations we set. So it’s about intention, communication, clarity, discipline, culture, habit. Lots of stuff that is hard to pinpoint or define. Trust is also like listening or understanding. We want it more than we are willing to give it, so it takes effort.

Trust is a conscious and subconscious calculation: We trust when the perceived cost of trust is less than the perceived cost of not trusting. We trust when the perceived value of trust is greater than the perceived gains from or value of not trusting. It’s a belief in the alignment of self-interests.

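As an illustration only, that calculation can be written out as an explicit rule. The function name, its inputs, and the choice to require both conditions at once are my assumptions for the sketch, not anything from a formal model; the point is that every input is a perception, not an objective measurement.

```python
# Illustrative sketch: the "conscious and subconscious calculation" of
# trust as an explicit decision rule. All names and numbers are invented.

def should_trust(perceived_cost_of_trust: float,
                 perceived_cost_of_not_trusting: float,
                 perceived_value_of_trust: float,
                 perceived_value_of_not_trusting: float) -> bool:
    """Trust when trusting looks both cheaper and more valuable than not trusting."""
    cheaper = perceived_cost_of_trust < perceived_cost_of_not_trusting
    more_valuable = perceived_value_of_trust > perceived_value_of_not_trusting
    return cheaper and more_valuable

# A hypothetical vendor: low perceived risk, high perceived upside.
print(should_trust(2.0, 5.0, 8.0, 3.0))  # True
```

Change any one perception and the outcome can flip, which is exactly what makes the calculation so unstable.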
And how exactly do we figure out these perceived costs and values? Try this exercise. Think about someone or something you greatly trust. What exactly made you trust them? What would make you lose your trust? Has your trust changed or evolved over time? Here is a list of characteristics, gathered from my observations and analysis, that make trust exciting, valuable, and tricky to navigate:

  1. Trust is a gamble. It requires us to guess, have faith, or believe.
  2. It’s uneven. We want to have it more than we want to give it.
  3. Trust takes time and attention. It can’t be rushed.
  4. It takes work. Thoughtful and mindful work.
  5. Trust evolves with time and interactions. It changes as we change or learn.
  6. It’s fragile. It’s easier to break than to repair.
  7. It’s not entirely in your control. It’s contextual and interdependent. Other parties have to be willing and ready.
  8. Trust has future and social implications — both for gain and cost.
  9. Trust is not everything. Sometimes excitement, a good deal, rewards, or survival matters more than trust.
  10. Trust is different than caring. You can care about someone and not trust them, or vice versa.
  11. Trust is about authenticity (someone’s alignment with values) more than honesty. Trust is about understanding; and, if needed, letting you keep your secrets.
  12. Trust is elusive. The more heavy-handed we get with it, the more it eludes us.

I have a separate deeper analysis of these characteristics, but for this discussion, we can group them into three key takeaways. Trust involves:

  1. Lack of certainty and a high level of variability.
  2. Need for self-awareness, and awareness of others and of future impact.
  3. Discipline with flexibility: a willingness to put in the effort and yet relinquish the desire to control.

That’s tricky work. Why do we even bother with trust? Because trust can be invaluable. It enables faster and less risky decision-making. It enables diverse groups with different self-interests and goals to collaborate toward more collective value and opportunity. Trust can generate novel ideas and form the ecosystem to put ideas into action and at scale.

When does trust matter? It matters when we make decisions in the middle of uncertainty or with groups of people or institutions we are uncertain about. Think about where you go for information and guidance about coronavirus, resilience at work, or homeschooling. We navigate the “unknown” elements based on what we know. Basically, we make a decision about the future based on what we can predict or deduce from the present and the past. Trust helps us make a calculated bet to navigate the risks and rewards. Which is what makes trust so valuable, scary, and exciting.

The key to fully understanding trust is the “perceived” value or cost. Remember those last two lines on the trust characteristics list? It’s hard for us to ever have a complete picture of the complex interconnections and every perspective of a situation. Our level of trust is based on what we can see and comprehend about our reality, with our limitations and biases. Our perceptions. That’s why we say hindsight is 20/20. That’s why we have buyer’s remorse after a big purchase, or realize later that what we perceived as a good deal, a good job, or a good partner wasn’t.

We don’t want to be made a fool of, but more than that, we don’t want everyone to find out that we made a fool of ourselves. That social perception of our vulnerability bothers us even more. We lose trust in our ability to trust. That is why the cost of broken trust is so high and hard to repair.

The Tricky Trickster

Most discussions I have with tech decision-makers seem to focus on:

  1. How do I separate reality from hype? That is, which tech (5G, Conversational AI, Differential Privacy) is ready for adoption, and for which use case?
  2. What companies and tools should I buy or invest in?
  3. What strategies can “get” consumers and enterprises to trust my products and “keep us out of trouble”?
  4. How do I separate myself from the “bad actors” or “mistakes” without slowing down growth?

All practical and fair questions. We need to answer them to make our day-to-day decisions. But here is another important set of questions that no one has asked me: Am I trustworthy? Should the users or the public trust me? When should or shouldn’t they trust me?

Let’s unpack this. When are we trustworthy? Basically, individual or collective alignment of self-interest is the biggest motivator for both trusting and being trustworthy. This self-interest could be intangible, like our values, social standing, brand, or reputation, or tangible, like a job, property, a settlement, or a business stake. But both costs and values are perceived. Trust further depends on how and with whom we populate this formula. Is it habitual and automatic, business as usual? Is it thoughtful and reflective, with multiple stakeholders and long-term impact factored in along with the short term? Right now, AI innovation is all about automation, adoption rates, and valuations. Where are the variables that can lead us to trust?

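As a hypothetical sketch of “populating the formula” with more than the habitual party, the same perceived values produce very different answers depending on whose perceptions are counted. All stakeholder names, values, and weights here are invented for illustration:

```python
# Hypothetical sketch: how the choice of stakeholders and weights changes
# the trust calculation. Every name and number below is made up.

def populate_formula(perceived_values: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted average of each stakeholder's perceived value of trusting."""
    total = sum(weights.values())
    return sum(weights[s] * perceived_values[s] for s in perceived_values) / total

values = {"vendor": 9.0, "users": 2.0, "public": 1.0}

# "Business as usual" counts only the vendor's perception; a reflective
# version also weighs users and the public, and the picture changes.
habitual = populate_formula(values, {"vendor": 1.0, "users": 0.0, "public": 0.0})
reflective = populate_formula(values, {"vendor": 1.0, "users": 1.0, "public": 1.0})
print(habitual, reflective)  # 9.0 4.0
```

The variables that could lead us to trust exist; they are simply weighted at zero in the habitual version.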
We need to stop using the Turing test as the goal for AI and as a way to grab news headlines. The ultimate desire for AI shouldn’t be its ability to dupe us. We should focus on its potential to assist us, understand us, and respond to our needs. In the Xiaoice and Zo teenage chat bot example, the focus seems to have been to show off a commercially viable chat bot that appears to be a person. A chat bot that writes poetry, holds art exhibitions, is “sassy,” wears a school uniform, and doesn’t mind adult men confessing love to her. In the cleverness of technology, the long-term cost got lost. Consider the misuse of the teenage girl persona and its inherent gender bias in a highly funded and publicized product that was over five years in the making. There was so much time to change or correct course. And we wonder why we don’t have more women and girls interested in tech.

How do we increase the cost of breaking this basic trust and decrease the value for products, technology, and businesses that target vulnerable groups or take the easy way out? Currently, they prosper because there is a group or demographic that is willing to pay top dollar to use these products. Until we solve this, how can we begin to trust the people behind the AI? In these hands, wouldn’t “Trustworthy AI” labels serve as convenient cover, create confusion, and be anything but trustworthy?

Are We Going about Trust All Wrong?

Photo by Bernard Hermant on Unsplash

We all know that giving a persuasion blueprint to a smart person without constraints is like handing them a how-to-manipulate guide. How do you think we all got “hooked” and “addicted” to technology in the first place? Concerns around trust have entered the marketplace. Products, consultancy services, technology tools, and leadership coaches are getting into the business and strategy of trust.

The fact is, trust without caring and empathy can lead to trust in self-interest, meaning we can trust that people will be guided by the instinct to self-protect even at others’ expense. Trust without respect leads to arrogance and manipulation. Trust without awareness can be dangerous. Trust without delight, boring.

Here is some collective feedback for NPS designers and some of the diversity and inclusion programs: most people get it. They get the difference between what they need to say in surveys and training (explicit culture) and how they can or should actually behave (implicit culture). Heck, we teach kids that. We say “don’t lie” and then lie about our age, our salary, and why we were late, right in front of them. How does trust work in such an environment?

Employees trust the implicit culture — they learn by watching what the leaders are doing: when to clap, and how to stay on the PR-coached talking points when interviewed or speaking in public. But the Edelman survey results show that the general public knows it, too. Eventually, we all figure it out. The question is what other options we have, and how much it takes to jolt us out of our habitual patterns to do something about it.

Remember, trust is about our or the other party’s perceived value and perceived cost. What we can understand and see, as our gain and loss. Sometimes it serves to go with the flow and follow the crowd. We can’t assume we have their trust. Have you heard someone say, “it came out of nowhere,” when you saw it coming well before they did? Very few things come out of nowhere — it just depends on who and what we were tracking.

What Does All This Mean?

If someone is selling you trust, run. Or slow down enough to understand the alignment of interests, because they likely have a conflict of interest. If someone is showing you how to trick people into trust, run. Unless you are a nomad and plan to make a quick buck and hide out. In that case, I’m not the person to advise you. But if you are really thinking about trust thoughtfully, then first start with the question: Do I trust myself? Why or when am I not trustworthy? Who or what can help figure out what’s missing? Practice your muscle of gauging trust with yourself.

The question and the answers may feel uncomfortable at first. But I can tell you from experience they will also give you a sense of relief. Or at least clarity. You don’t have to go confess it. Definitely don’t tweet it. But know your goals. And if you are in the business of making or buying or using technology, especially AI, then during the ideation or decision-making phase, ask these important questions. You have a choice. AI systems can be both complex and reliable, and use thoroughly tested, ethically collected, and accurate data. That said, if they are not designed to flag biases, anonymize data, or be transparent about the data source, the systems will not automatically point out these problems. Of course, that is not the end. AI systems are evolving. And even something that is built with great resources and attention to detail can be rebuilt or hacked or scrambled by other people with different systems and different goals.

We have scientific and architecture reviews and they are good models for reflection. But there is often not even cognitive diversity. I had some no-nonsense, tough critics review this article. I also had someone, who doesn’t come from the tech world, review it for clarity. Is that enough?

Trust is the result. It is a decision, a measure, a gauge. Rather than trying to build trust, what if we designed with responsibility and trust? Designing “AI with trust” means a consistent, responsible decision-making framework that keeps the people who are impacted in mind.

Instead of “How do we make people trust 5G or AI so they will adopt it faster?”, what if we asked:

  • Why don’t people trust the use of a certain technology — 5G, AI, Data Analytics, or Neurotech?
  • Who or which groups don’t trust it?
  • Why should they not trust it?
  • When should they not trust it?
  • What do they have to lose? What are their concerns?
  • Has that group, or its advocates, been consulted?
  • How can we innovate to address their concerns?

What Can We Do?

I often speak to motivated and concerned AI experts, business leaders, researchers, educators, engineers, and product managers who ask: But what can I do? They are ambitious, they want to do well financially and professionally. But they are tired of having to compromise their values or do things in ways they find fundamentally flawed. Why do we keep making the world worse, they ask me. They are looking for alternatives. They want to revisit our technology-making framework without a cost to their aspirations. They want their leaders to make this shift a priority. They want the metrics to change. They want others to change. I know, I have been and am, in many ways, one of those people.

This is what I tell myself: AI is expected to add more than $13 trillion to the global economy by 2030. That means it will touch every part of the human system and the environment, will have a long-lasting impact, and will generate enough revenue that we have no excuse not to invest in building with responsibility and trust. Regulatory oversight will trail innovation. And we haven’t even mentioned the nexus of funding that is at the root of so many conflicts of interest. Stop feeling guilty about driving accountability, as if we are somehow betraying the companies or the economy. We are helping them and us by going back to the fundamentals — our values. And to what all this is really for — the people.

This is what I tell the tech and business leaders: What worked for me was the awareness of what is missing, and reframing the problem statement. We are heading into a cognitive overload. We need trust to help us navigate our world. Whether it’s AI or another technology, if we don’t treat seeing and understanding the impact on all the stakeholders as essential, we are going to make terrible stuff. I tell them not to underestimate or overestimate the resources and power and skills they have. Whatever they know or have, use it. Learn from others or exchange ideas. Join a community of people with diverse ideas who share their values. It’s OK to care. It’s important to care. Let’s make it acceptable to care. Balance it with our needs. Figure out how to feed our professional and intellectual drive to succeed and innovate a balanced diet. Let’s become the people we want to become and make things for the kind of future that we want to make. Ask: What am I missing? What might make me more trustworthy? Who can help me figure that out?

This is what inspired me to start The Responsible Innovation Project and create a framework for product and technology ideation, development and assessment. But that is only a start. We can’t do this alone or in isolation. A shift from surviving to thriving must become tech culture’s norm. If we are going to make trustworthy technology or AI, we have to integrate trust into our processes and make it in a trustworthy way. But before we do, we have to take the time to understand trust and ask what we are missing. Take responsibility for it. That is the only shift we can trust.

Translated from: https://towardsdatascience.com/is-trust-in-ai-trustworthy-88e2eb2ae5d6
