Can Cheating Ethics Make Autonomous Vehicles Viable?

Effortless, automated, and personalized travel that is on demand and safe has been the dream of many drivers worldwide since the inception of mass-produced vehicles. The current automotive ecosystem, made up of incumbents and disruptive entrants, is advancing technology to realize that dream but has been slow to coalesce its offerings with broad artificial intelligence capabilities. For the last decade, pundits have cast the obstacles to autonomous driving as a perfunctory matter, proclaiming we will all be taking naps on the way to our destinations as our role in transit becomes that of a “permanent backseat driver”. As these predictions suffer inevitable delays, forecasts are revised and the AV remains a futuristic target. Before we can achieve a ubiquitous driverless, accident-free wonderland, we must overcome the persistent issues holding back adoption. The primary obstacles to an autonomous nirvana are: (1) enabling vehicles with acceptable decision-making capabilities and driving technologies, and (2) rationalizing the perennial issues surrounding the agency of artificial intelligence.

The theoretical nature of the trolley problem is one place to start when exploring AI operating in a human world, but is it enough?
A popular academic approach to exploring these challenges is the trolley problem, a thought experiment used in ethics and leveraged to present a no-win scenario that AVs might face in practice. The experiment often arises in creator circles, and more recently consumer circles, as a method of framing the ethics of autonomous vehicle design. As the promise of AV technology has become more realistic, the trolley problem and its variants have been called into service as a way to research the moral conundrums humans perceive in enabling AI as their driver. These methods are important because they often identify well-known sources of ethical theory and can be helpful in establishing new moral practices.

Practical experimentation with the trolley problem, while interesting, still isn’t enough to flesh out the most tangible issues facing autonomous vehicles.
Unfortunately, this approach can also limit productive discussion; no-win scenarios are just that: unwinnable and lacking a practical resolution. While the approach is important to constructive dialog, it can also incite uninitiated activists to boil an ocean of ethical issues instead of isolating the issues pertinent to mobility, limiting experimentation and iteration within AV.

Academic exercises like the trolley problem motivate us to question whether we should continue to limit ourselves exclusively to formal ethical models as ways to contemplate AVs and solve the issues discussed above. In his book A Theory of Justice, John Rawls posits that morality problems, like the one explored within the trolley problem, place us behind a “veil of ignorance” and limit the consideration set available to solve a moral problem. With the trolley problem, the decision maker has limited information about the potential victims affected by their choice. The handful of situational factors provided in a classic trolley problem may limit the stakeholders to a single person, the driver, and leaves out the other actors in the scenario as well as any outsiders who might provide input if they were able to weigh in.

The Problem With The Trolley Problem

The trolley problem is a tool for evaluating the ethics of AVs. The hypothetical dilemma is a thought exercise that presents a set of mutually conflicting yet dependent conditions around an autonomous vehicle, yielding a no-win scenario. There are no right answers per se, but the scenario can provoke rational and irrational responses from participants. In this exercise respondents are encouraged to find a solution and will often posit out-of-the-box conditions that “break” the simulation.
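To make the structure of the dilemma concrete, here is a minimal, hypothetical sketch in Python (the actions, outcomes, and casualty counts are invented for illustration and come from no real AV system): a scenario is “no-win” when every available action causes harm, and “breaking” the simulation amounts to adding an action the scenario never offered.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    casualties: int

# A classic trolley-style dilemma: every action available to the
# vehicle maps to some harm, which is what makes it "no-win".
SCENARIO = {
    "stay_in_lane": Outcome("continue toward the group ahead", casualties=5),
    "swerve": Outcome("divert toward a single bystander", casualties=1),
}

def is_no_win(scenario: dict) -> bool:
    # The dilemma holds only while every available action causes harm.
    return all(outcome.casualties > 0 for outcome in scenario.values())

def break_simulation(scenario: dict) -> dict:
    # Respondents who "break" the simulation effectively add an
    # out-of-the-box action with a harmless outcome.
    amended = dict(scenario)
    amended["emergency_brake"] = Outcome("stop short of everyone", casualties=0)
    return amended

print(is_no_win(SCENARIO))                    # True: the no-win framing holds
print(is_no_win(break_simulation(SCENARIO)))  # False: the dilemma is "broken"
```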

A similar, imaginary no-win exercise can be found in science fiction: the Kobayashi Maru. This problem is detailed in Star Trek lore via a training exercise and test of character for Starfleet officers. The simulation involves the rescue of a disabled Federation ship, the Kobayashi Maru, from a demilitarized area of space adjacent to a notorious enemy, the Klingons. The captain of the digital rescue ship has two choices: enter the neutral zone to attempt a rescue, triggering a treaty violation and guaranteeing deadly retaliation and interstellar war, or leave the shipwrecked crew to face certain death while avoiding war and guaranteeing the safety of the Starfleet crew. Captain James T. Kirk famously took the test three times, and in his last attempt secretly reprogrammed the simulation to open a narrow window to save the disabled ship and its crew.

Kirk cheated; he changed the variables so that a winning scenario could be achieved. As part of the story, Kirk is even awarded a commendation for altering the conditions of the test, lauded for “original thinking.” When criticized later for never having faced a no-win situation, Kirk expounds his philosophy: he doesn’t believe no-win scenarios are realistic, and a solution is always achievable. A counterargument is proffered by his friend Spock: the intent of the test is not to win, but to face the fear of failure and the possibility of a tragic outcome. As with the Kobayashi Maru, the trolley problem asserts the prospect of tragic loss of life at the hands of an impassable dilemma. There are lessons to learn in contemplating the trolley problem, but is it a realistic method for determining societal readiness for autonomous vehicles?

Ethical Sandboxes

My father once told me the reason for the sandbox in our backyard was to give me a place to play that isolated me from the vegetable garden, apparently a favorite place for me to dig holes as a young child. The isolation concept behind a child’s sandbox is also used in software development, where a virtual environment isolates the execution of software or programs and allows for independent evaluation, monitoring, or testing. Sandboxing has also been used to refine business practices, typically leveraged to create a builder’s space for analysis of new processes and concepts. A conceptual sandbox can easily include all of the tools needed to conduct any conceivable analysis, raising the question: can we use moral sandboxes to test, fail, and learn our way to successful AV products?
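As a loose software analogy, here is a minimal sketch of that isolation idea in Python (assuming a Unix-like system; the resource limits and toy snippets are illustrative, not a production design): untrusted code runs in a child process with capped CPU and memory, so a failed experiment stays contained.

```python
import multiprocessing
import resource

def _run_isolated(code: str) -> None:
    # Cap CPU time (1 second) and address space (1 GiB) inside the
    # child so a misbehaving experiment cannot harm the parent.
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))
    resource.setrlimit(resource.RLIMIT_AS, (2**30, 2**30))
    exec(code, {"__builtins__": {}})  # no builtins: a crude isolation step

def sandbox(code: str, timeout: float = 2.0) -> bool:
    # Execute in a child process; kill it if it overruns the timeout.
    proc = multiprocessing.Process(target=_run_isolated, args=(code,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return False
    return proc.exitcode == 0

if __name__ == "__main__":
    print(sandbox("x = 1 + 1"))         # True: harmless code completes
    print(sandbox("while True: pass"))  # False: runaway code is contained
```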

In a study conducted at Osnabrück University, Dr. Lasse Bergmann isolated several popular ethical dilemmas to explore public perception and provide a starting point for further discussion and experimentation on AVs and ethics. Dr. Bergmann posits, “Applied ethics is not solely a priori inquiry. Well-reasoned positions need to be developed and intuitions need to adapt to new circumstances.” When this approach was tested against the trolley problem, Dr. Bergmann’s results were germane to utilitarian and deontological theories and established political norms. Testing with alternative datasets, however, which allowed more choices such as self-sacrifice, demographic data on potential victims, and even alternatives to killing anyone, elicited choices that were more conducive to codifying a moral mapping in AV programming.
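As a hypothetical illustration of that direction (the scenario, option names, and responses below are invented, not drawn from the study), widening a dilemma beyond the binary trolley choice yields survey data that can be aggregated into a simple moral mapping for AV programming to consult:

```python
import json
from collections import Counter

# An illustrative survey scenario with more than two options,
# including self-sacrifice and a non-lethal alternative.
scenario = {
    "id": "unavoidable-obstacle-01",
    "options": [
        {"key": "protect_occupant", "harms": ["pedestrian"]},
        {"key": "self_sacrifice", "harms": ["occupant"]},
        {"key": "emergency_stop", "harms": []},
    ],
}

# Hypothetical respondent choices collected from a survey.
responses = ["emergency_stop", "self_sacrifice", "emergency_stop",
             "protect_occupant", "emergency_stop"]

# Aggregate into a simple "moral mapping": the preferred action per
# scenario, which could seed a rule consulted by AV planning code.
tally = Counter(responses)
mapping = {scenario["id"]: tally.most_common(1)[0][0]}
print(json.dumps(mapping))  # {"unavoidable-obstacle-01": "emergency_stop"}
```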

So, can we use a sandbox to isolate ethical dilemmas in a testing environment and cheat our way to a practical solution? A common theme emerges when considering the use of an ethical sandbox to advance AVs. The approach can provide an efficient safe zone for experimenting with concepts that present as impassable due to the perceived vastness of the ethics related to AV. A Hackernoon article explores this concept and poses, “The biggest challenge the engineering world will face — or rather, is facing — is to incorporate morality and ethical values while both designing an engineered product as well as while engineering a product from scratch.” Clearly there are models of, and appetite for, alternatives to strict ethical frameworks. Advocating the creation of sandboxes to allow safe testing of muddy moral and ethical issues within AV is a start, but how would it work?

What’s in the Sandbox?

Ethical issues must be tested and solved to raise the quality of AV performance to a viable level. Technology can execute driving features, and existing capabilities achieve partially driverless vehicles; what remains is our ability to coexist with AI and the enigmas it presents to our own agency. We are still an ocean away from AI being able to make serious autonomous decisions, much less drive a car. Ethical testing must change if we are to find a tangible AI solution that we can live with (pun intended).

Two effective approaches for entrepreneurs facing business challenges are zooming in and out of the scope of our focus. Zooming out can help us increase our scope to observe the needs of a larger-than-anticipated market, while zooming in narrows our focus to explore the unique needs of a niche market. As we think about AVs, a byproduct of AI, we begin to zoom in somewhat, conceptualizing a smaller landscape of mobility. Ultimately, we zoom out again to wrestle with the heady issues involved with AI, perhaps becoming lost in its scope. The conclusion offered here is that zooming in, playing with scenarios, and even breaking them has value in advancing tangible solutions alongside sharpening thought on macro issues. Sandboxing is a viable strategy for AI, AVs, and many other solutions that seem out of reach. As long as we don’t allow the scope of a sandbox to keep us from zooming in and out, we can find the same “original thinking” that Captain Kirk used to beat his no-win scenario.

Jeff Heinzelman is the founder of MostlyWest with 25+ years of experience in leadership, business process, customer experience and product innovation. I have led teams in many sectors, relying on a personal philosophy of people, process, and technology to deliver innovative products. I am an advocate of customer-focused product management connected to data-driven results. I am also a husband and father of two boys, and live in Austin, Texas where I enjoy Tex-Mex, BBQ, and football. Not necessarily in that order.

Translated from: https://medium.com/swlh/can-cheating-ethics-make-autonomous-vehicles-viable-20ed55aa4ca5
