3 problems you'll face while designing automation, and how to solve them

I have researched automation for some time now.

First, as part of my master's thesis in cognitive psychology, I researched how to design user interfaces that would make it easier for operators to monitor autonomous ships. Afterwards, I co-founded a company that is currently researching and developing a new, autonomous product.

Throughout this journey, I have developed a somewhat ambivalent relationship with automation.

On one hand, automation is a truly magnificent thing. It gives us possibilities that seemed like far-fetched dreams mere years and decades ago. It allows us to spend more time doing tasks we find meaningful. And it rarely fails.

However, it does fail eventually. And when it does, it can be quite dangerous. Onnasch (2014) called this “the Lumberjack effect”: the higher the tree, the farther it falls. In other words, when automation does fail, it does so spectacularly.

Therefore, while we aim to maximize the advantages of automation, we must design it with the utmost care to avoid potential disasters.

The following are the three major problems that I keep coming across while designing automation.

Problem 1: Our brains were not built for monitoring automation

Picture of a bored man. Photo by Siavash Ghanbari on Unsplash

Human beings are great at so many things. Walking, talking, tweeting, selfie-taking. However, monitoring is not one of those things.

In his classic experiment, Mackworth (1951) had participants monitor a clock. The clock would skip a second at random intervals. The participants were tasked with making a note every time a second was skipped.

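To make the setup concrete, here is a minimal simulation sketch of the clock task. Everything in it is hypothetical: the skip rate, the session length, and the exponential attention-decay curve are made-up parameters meant to mimic a vigilance decrement, not Mackworth's actual procedure or data.

```python
import random

# A toy version of Mackworth's clock task (hypothetical parameters): the
# clock ticks once per second for two hours, and each tick has a small
# chance of being a skipped second that the participant should report.
# Detection probability decays exponentially with time on task, halving
# roughly every 30 minutes, to mimic the vigilance decrement.

def simulate(duration_s=7200, skip_prob=0.005, half_life_s=1800, seed=42):
    rng = random.Random(seed)
    blocks = [[0, 0] for _ in range(duration_s // 1800)]  # [hits, misses] per 30 min
    for t in range(duration_s):
        if rng.random() < skip_prob:               # the clock skips a second
            p_detect = 0.95 * 0.5 ** (t / half_life_s)
            if rng.random() < p_detect:
                blocks[t // 1800][0] += 1          # hit: the skip was noticed
            else:
                blocks[t // 1800][1] += 1          # miss: it went unnoticed
    for i, (hits, misses) in enumerate(blocks):
        total = hits + misses
        rate = hits / total if total else float("nan")
        print(f"minutes {i * 30:3d}-{(i + 1) * 30:3d}: detection rate ~{rate:.0%}")

simulate()
```

In expectation, each successive half hour catches fewer of the skipped seconds than the last, which is the shape of the result described next.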

The participants started out well, but about half an hour into the experiment, their performance dropped significantly.

Essentially, as humans, we are only able to do slow, boring observations for short periods of time. After about 30 minutes, we stop paying attention.

Also, what I just described is a situation where the participants knew that errors were going to happen. Yet they still only managed to pay attention for about 30 minutes. In other words, this is a biological limitation; our cognition, our brain, is simply not built for more than that.

So, why is this a problem? Well, imagine those same humans monitoring their self-driving car. Which they do not expect to fail. But then it does.

The higher the tree, the farther it falls. When automation fails, it does so catastrophically.

However, our relationship with automation is affected by more than our biology. It is also massively influenced by psychology. In particular, by something we refer to as automation bias.

Problem 2: We put too much faith in automation

We tend to trust automation. A lot.

Imagine that you are driving home and want to take the highway. Your satnav, however, tells you that crossing the bridge would be quicker. You are pretty likely to trust your satnav. And why wouldn’t you? After all, the satnav seems quite advanced. It probably makes calculations based on tons of data.

The satnav could be correct. Or it could be incorrect. Either way, we are very likely to believe whatever an automated system tells us. And we are unlikely to notice if it gives us bad advice. In psychology, we call this automation bias.

Automation bias is the tendency to be overly reliant and/or complacent when interacting with automation (see Wiener & Curry, 1980; Parasuraman & Riley, 1997). Since automation seems to be working perfectly, we do not feel like we need to monitor it closely.

And most of the time, automation does work perfectly. However, every so often, an error will occur. And when it does, we likely will not notice.

This puts a lot of pressure on us, as designers. If we design something badly, it could be causing problems for a long time before anyone notices. And by the time they do, it could be costly to fix it.

Imagine that you design a running app that tracks how far people run. Suddenly, you realise that the GPS was a bit off, and the app has actually been overestimating running distance by 10%.

What are the chances your users manually checked the distances, to see if they truly ran 5k? Pretty slim.

What are the chances that your users will be angry when they eventually realise that their best 5k was actually 4.5k? Pretty high.

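As a back-of-the-envelope sketch of that arithmetic (the bias factor below is an assumption, chosen so that a reported 5k comes out to roughly 4.5k, matching the example above):

```python
# Toy illustration of the running-app example: if the GPS pipeline
# systematically overestimates distance, the actual distance can be
# backed out of the reported one. Assumes reported = actual * 1.11,
# i.e. roughly a 10% overestimate.

OVERESTIMATE_FACTOR = 1.11  # assumed systematic bias

def actual_distance_km(reported_km):
    """Back out the actual distance from a biased reading."""
    return reported_km / OVERESTIMATE_FACTOR

for reported_km in (5.0, 10.0, 21.1):
    print(f"app said {reported_km:5.1f} km -> actually ~{actual_distance_km(reported_km):.2f} km")
```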

This also leads us to the next topic: the myth that automation removes human error.

Problem 3: Automation does not eliminate human error

It is a common myth that automation eliminates human error. However, there are two main reasons why this is wrong.

First, most products tend to interact with humans at some point. And those humans can make errors when providing input.

The second reason, and perhaps the one that is easier to forget, is that automated products are created by humans. And the humans who created the product probably made some errors at some point. We call those human errors.

So what does automation give us, then?

Well, automation does eliminate “concurrent errors” or “operator errors”. That is, a car does not crash because a person confuses the gas and brake pedal. The automated system does what it was programmed to do.

However, it is impossible to predict every scenario the machine will encounter. Therefore, although the automated system is pretty smart, it probably won’t have an answer for absolutely every scenario. Especially freak scenarios.

Like a parachutist misjudging the wind and having to land on a highway. A human driver might be able to see what is happening and avoid them. A self-driving car, on the other hand, would struggle.

At a basic level, the car would probably lack the camera angle to notice the parachutist. But even if it did notice, the car would probably lack the fluid intelligence to understand the situation and come up with a solution.

Or, an even more bizarre example: how about an automatic sandwich-maker that fails to stop when it starts to attract aggressive seagulls at sea? I stole that example from a Norwegian advertisement:

Advertisement by REMA 1000

Therefore, human error still exists. Error, in the sense that the automation fails to deal appropriately with a given scenario, however bizarre.

This type of error is what we would call a “latent error” or a “designer error”.

Are automatic systems with designer errors better than manual systems with operator errors?

It depends.

Is it better to have only a few, but pretty major accidents? In that case, the automatic system is better.

Is it better to have many, smaller accidents? Then the manual system is the better option.

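One way to make “it depends” concrete is to compare expected yearly costs, a toy sketch with invented numbers:

```python
# Invented numbers: which error profile is "better" depends entirely on
# how accident frequency trades off against accident severity.

auto_accidents, auto_cost_each = 2, 1_000_000      # few, but major
manual_accidents, manual_cost_each = 200, 5_000    # many, but minor

print(f"automatic system: {auto_accidents * auto_cost_each:,} per year")
print(f"manual system:    {manual_accidents * manual_cost_each:,} per year")
```

With these particular numbers the manual system comes out cheaper, but nudge the severity of the small accidents upward and the conclusion flips. Hence: it depends.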

The solution

Well, this sounds rather hopeless. Should we just stop automating things?

No, absolutely not! This is not hopeless, it is simply a design challenge.

Well, how do you fix all this?

Through the magic of psychology and design.

There is no quick fix for our cognitive limitations. Our brains evolve quite slowly. However, how much we rely on machines psychologically depends on design.

One of the reasons we become so reliant on automation is that we do not really understand how it works.

Often, we give an automated product some input, and it gives us an answer. However, if we do not understand how it arrived at this conclusion, we cannot verify how accurate it is.

It is like a math test in school. Normally, just giving the answer is not enough to get full marks. You need to show your work, explaining how you arrived at your conclusion.

We can address this lack of understanding by designing automation to be transparent (Endsley, 2017). Transparency means that we design a product where the users can see how the system arrived at a conclusion.

This also helps to make it predictable. And once we have good transparency and predictability, we can add an option for a manual bypass. That is, letting the user skip results or alter the calculations so that the automation arrives at the correct result.

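As a rough sketch of what transparency plus a manual bypass can look like in a data model (the record type and field names here are hypothetical, not any real app's API):

```python
from dataclasses import dataclass

# Hypothetical run-tracking record illustrating the two principles.
# Transparency: the raw GPS track is kept, so the UI can show *how* the
# distance was computed. Manual bypass: the user may override the
# automated value, and the override takes precedence.

@dataclass
class RunRecord:
    gps_track: list[tuple[float, float]]   # raw (lat, lon) points the user can inspect
    auto_distance_km: float                # what the automation computed
    user_distance_km: float | None = None  # set only if the user corrects it

    @property
    def distance_km(self) -> float:
        """The value shown in the UI: the user's correction wins."""
        if self.user_distance_km is not None:
            return self.user_distance_km
        return self.auto_distance_km

run = RunRecord(gps_track=[(59.91, 10.75), (59.92, 10.76)], auto_distance_km=5.0)
run.user_distance_km = 4.5  # the user spotted the GPS error and fixed it
print(run.distance_km)      # -> 4.5
```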

The Nike Running app is a good example of successfully implementing these principles. After a run, the user is provided with a map of their run, so they can check that the app tracked them correctly (transparency). The user can also change details such as distance and speed manually (manual bypass).

Screenshot of the Nike Running app. Available at news.nike.com

This is where the interaction between psychology and design becomes apparent. By designing true transparency, we give users the opportunity to verify the result. This gives them the confidence to make changes if something is wrong.

This creates a sense of trust between the user and the app. Small errors are easier to forgive if you can easily notice and correct them yourself. This is how good automation design can translate into great user experience.

(If you are interested in learning more about these principles, feel free to read this article where I discuss them at length.)

Conclusion

Automation is a topic as old as time, and it is becoming ever more relevant as AI makes its way into new products. Still, even the world’s biggest tech companies, like Google, Spotify and Facebook, struggle to get it right.

That is also why this is such an exciting part of design. It is unknown territory. The principles I propose in this article are a good start, but they are not a definitive answer. We can be part of solving this unsolved challenge.

Whoever solves this design challenge successfully will become one of the most important and influential designers of this century.

Are you up for the challenge?

Translated from: https://uxdesign.cc/3-problems-youll-face-while-designing-automation-and-how-to-solve-them-d9ae2d440103
