Barriers to Autonomous AI in Healthcare

Introduction

Artificial Intelligence is everywhere in today’s society — it speaks to us through our phones, recommends new shows for us to watch, and it filters out content that is irrelevant to us. It’s so ubiquitous that most of us go about our daily routines without comprehending its role in our lives. But it hasn’t yet altered the face of the healthcare industry in the same ways it’s revolutionized our shopping. Why is that?

There’s more than one answer. As I see it, there are at least three good reasons why you don’t see widespread AI adoption in healthcare today. Given enough time, though, I believe we will overcome all of these barriers.

To be clear, the type of AI I am discussing in this article is the kind which acts in place of a healthcare professional. Passive Artificial Intelligence, which simply helps support providers in their decision-making process, is already being heavily researched and changing the way we approach healthcare.

Federal Regulations

Photo by Tingey Injury Law Firm on Unsplash

One of the largest hurdles that AI has to overcome to be relevant in the healthcare space is the multitude of federal regulations designed to protect consumers. While there are many governing bodies unique to different countries, I will narrow the scope of this topic to the U.S. FDA. According to the official FDA guidelines, there are several different classes of medical devices [1, 2].

Class I—This category of device is defined as minimal risk, which is to say, the designer of the product can easily demonstrate to the FDA that the device in question either poses no threat of harm to consumers or very closely resembles one that has already been approved by the FDA. Approximately 47% of medical devices are in this category.

Class II—This category of device is defined as moderate risk. About 43% of medical devices are in this category. You can think of this category as medical devices which resemble pre-existing products but have some unique feature to them that could potentially hurt the consumer. One example of this would be a powered wheelchair, as it closely resembles prior art (i.e. non-powered wheelchairs) but has electronic components which, if they malfunction, could harm the user.

Class III—This is reserved for the remaining 10% of medical devices that pose a high risk to consumers. These kinds of devices can kill people if they malfunction (e.g. pacemakers).

Autonomous Artificial Intelligence applications in healthcare mostly reside within Class III. A device that a nurse can use to identify melanoma without needing to consult an expert? An algorithm that automatically detects breast cancer? A neural network that prioritizes patients for doctors? All Class III.

While it could be argued that each of these examples might be used to assist medical personnel rather than replace experts, there is no way to guarantee that these devices won’t, in practice, override the judgment of healthcare professionals. Sure, the radiologist could manually look over patient imaging like she is supposed to, but when the tool seems to be right most of the time, she may grow complacent about exercising her own judgment, and that complacency can cost lives.

But that leads me to the next hurdle: even if the FDA approves these medical devices, will healthcare providers and their patients trust them?

Patient and Provider Trust

Photo by National Cancer Institute on Unsplash

Let’s start by walking through an example of a bad implementation of AI in healthcare. Imagine you are a doctor. Like the rest of your colleagues, you spent over a decade taking challenging university classes, struggling through your residencies, and otherwise working your fingers to the bone to succeed in your profession. After years of toil, you finally made it. You’re a respected healthcare professional at a reputable hospital, you frequently read up on the latest innovations in your field, and you know how to prioritize the needs of your patients.

Suddenly, Artificial Intelligence, which you are familiar with from the many scholarly journals you read, starts being used in your hospital. In this particular case, maybe it predicts the length of a patient’s stay so you can better plan for clinical trials. You notice that, from time to time, it is just dead wrong in its predictions. You even start to suspect the algorithm may have suffered one of the many issues which plague AI, such as model drift. You don’t trust it and begin to override its judgment in favor of your own — you’re the doctor, after all, and this is what you’ve studied for! It’s your job to give excellent care to your patients, and ultimately, you feel the algorithm doesn’t allow you to do that.

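To make that drift concern concrete, here is a minimal sketch of the kind of check a team could run on such a length-of-stay model: compare recent prediction errors against the errors observed at validation time using a two-sample Kolmogorov-Smirnov test. The data, names, and threshold below are purely illustrative, not part of any real deployment.

```python
# A minimal, hypothetical drift check for a deployed length-of-stay model.
# Idea: if recent prediction errors stop looking like the errors observed at
# validation time, the model may have drifted and deserves human review.
import numpy as np
from scipy.stats import ks_2samp


def abs_errors(predicted_days, actual_days):
    """Absolute prediction errors, e.g. |predicted stay - actual stay| in days."""
    return np.abs(np.asarray(predicted_days) - np.asarray(actual_days))


def check_for_drift(reference_errors, recent_errors, alpha=0.01):
    """Flag drift when the two error distributions differ significantly (KS test)."""
    result = ks_2samp(reference_errors, recent_errors)
    return {
        "ks_statistic": result.statistic,
        "p_value": result.pvalue,
        "drift_suspected": result.pvalue < alpha,
    }


# Illustrative synthetic numbers only.
rng = np.random.default_rng(0)
reference = abs_errors(rng.normal(5, 2, 500), rng.normal(5, 2.2, 500))
recent = abs_errors(rng.normal(5, 2, 200), rng.normal(7, 3, 200))  # care patterns shifted
print(check_for_drift(reference, recent))
```

A check like this doesn’t fix a drifting model, but it turns the doctor’s vague suspicion into something the care team and the developers can act on.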

What happens when Artificial Intelligence tries to do more of your job for you? Will you trust it?

The heart of the problem in the above scenario is the opaqueness of the model described. In that situation, the developer behind the algorithm didn’t consider that both doctors and patients want to know the why as well as the what. Even if the Artificial Intelligence implementation described above actually gave very accurate assessments of a patient’s length of stay, it never tried to back up the reasoning behind its prediction.

A more ideal model would incorporate something like SHAP values, which you can read more about in this excellent article by Dr. Dataman. Essentially, they allow the model to provide what’s called local feature importance, which in plain English means an estimation of why this particular case has the predicted outcome that it does [3]. Even though it doesn’t change the algorithm itself in any way, it gives both the provider and the patient insight into its judgment, and in an evidence-based industry like healthcare, that is invaluable.

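To give a feel for what that looks like in code, here is a minimal sketch using the shap library’s TreeExplainer on a toy length-of-stay regressor. The features, data, and model are synthetic stand-ins, not a real clinical system.

```python
# A minimal sketch of local (per-prediction) explanations with SHAP values.
# The features, data, and model below are synthetic stand-ins.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

feature_names = ["age", "num_prior_admissions", "creatinine", "heart_rate"]
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.random((200, 4)), columns=feature_names)
y = 3 + 5 * X["num_prior_admissions"] + 2 * X["creatinine"] + rng.normal(0, 0.5, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local feature importance for one patient: how much each feature pushed this
# particular prediction above or below the model's baseline (expected) output.
patient = 0
baseline = float(np.ravel(explainer.expected_value)[0])
for name, contribution in zip(feature_names, shap_values[patient]):
    print(f"{name}: {contribution:+.2f} days")
print(f"baseline prediction: {baseline:.2f} days")
```

In practice, these per-feature contributions are what you would surface alongside the prediction itself, so a clinician can sanity-check the model’s reasoning against her own.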

When Artificial Intelligence explains its decision-making process, it is easier for patients and providers to trust it.

Indeed, AI has vast capabilities to aid in clinical decision-making. It can pick up on complex patterns that only become apparent when patient data is viewed in aggregate, patterns we could never reasonably expect a human doctor to detect. And while it can’t take the place of a personal healthcare provider, it can provide decision support and help health workers see trends they wouldn’t have noticed otherwise. But this decision support is only possible when the algorithm explains itself.

The Ethics of AI-Driven Healthcare

Photo by Arnold Francisca on Unsplash

Artificial Intelligence is a complex topic, not just in its implementation but also in the ethical dilemmas it poses to us. In 2015, Google’s machine learning-driven photo tagger sparked controversy when it incorrectly labeled a black woman as a gorilla [4]. The social media backlash was justifiably enormous. A single misclassification was enough to draw the eyes of the world to the state of artificial intelligence. Who is responsible when things like this happen? How are we supposed to respond?

Unfortunately, issues like this are not uncommon even 5 years later. Just within the past month, MIT has had to take down the widely-used Tiny Images data set because of racist and offensive content that was discovered inside of it [5]. How many algorithms that are in use now learned from this data?

These issues may, on their face, seem irrelevant to AI in healthcare, but I bring them up for a couple reasons:

  1. They demonstrate that, despite our best intentions, biases can manifest themselves inside our models.

  2. These biases frequently become apparent only once the model has already been released into the world and has made a mistake.

Can we confidently say that this will not also happen in the healthcare space? I don’t believe we can. As researchers, we have more work to do to filter out the inherent biases in our data, in our pre-processing techniques, in our models, and in ourselves. The biggest barrier to AI in healthcare is the lack of any guarantee of a given model’s equitability, safety, and effectiveness across all potential use cases.

There are already great efforts to improve this situation. Libraries like Aequitas are making it easier than ever for developers to test the biases of their models and their data [6]. Along with this, researchers and developers alike are becoming more aware of the effects of model bias, which will lead to further development of tools, techniques, and best practices for detecting and handling model biases. AI may not be ready for prime time in healthcare today, but I, along with many others, will be working hard to get it there. Given the proper care and attention, I believe that AI has the power to change the face of healthcare as we know it.

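As a rough sketch of what such an audit involves, the snippet below hand-computes per-group error rates on a hypothetical screening model’s outputs; this is the kind of disparity check that a library like Aequitas formalizes and extends. The column names and data are made up for illustration.

```python
# A minimal, hand-rolled group-level audit of a hypothetical screening model's
# outputs; libraries like Aequitas formalize and extend this kind of check.
# Column names and data are made up for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],  # a protected attribute
    "label":      [  1,   0,   0,   1,   1,   0,   0,   1],  # ground truth: condition present?
    "prediction": [  1,   0,   1,   1,   0,   0,   1,   1],  # the model's screening decision
})

def group_error_rates(df):
    """Per-group false positive and false negative rates."""
    rows = {}
    for group, g in df.groupby("group"):
        negatives = g[g["label"] == 0]
        positives = g[g["label"] == 1]
        rows[group] = {
            "false_positive_rate": (negatives["prediction"] == 1).mean(),
            "false_negative_rate": (positives["prediction"] == 0).mean(),
        }
    return pd.DataFrame(rows).T

print(group_error_rates(results))
# Large gaps between groups, e.g. a much higher false negative rate for one
# group, are exactly the disparities an audit should surface before release.
```

A check like this, or its more rigorous library-backed equivalent, belongs in the release process rather than the post-mortem.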

About Me

My name is Josh Cardosi, and I am a Master’s student studying Artificial Intelligence applications in healthcare. You can read more about how I got here in this article. While the issues I talked about above are very real and will need to be addressed, I strongly believe that we will overcome them and, in doing so, change the state of healthcare for the better. I believe it will lead to better health service utilization, decreased patient mortality, and higher patient and provider confidence in treatment plans.

Feel free to connect with me on LinkedIn. I love reading your messages and chatting about AI in healthcare or machine learning in general.

[1] Classify Your Medical Device (2020), U.S. Food and Drug Administration

[2] What's the Difference Between the FDA Medical Device Classes? (2020), BMP Medical

[3] Dataman, Explain Your Model with the SHAP Values (2019), Towards Data Science

[4] J. Snow, Google Photos Still Has a Problem with Gorillas (2018), MIT Technology Review

[5] K. Johnson, MIT takes down 80 Million Tiny Images data set due to racist and offensive content (2020), VentureBeat

[6] The Bias and Fairness Audit Toolkit for Machine Learning (2018), Center for Data Science and Public Policy

Translated from: https://towardsdatascience.com/barriers-to-ai-in-healthcare-41892611c84a
