Weakly and Self-Supervised Learning, Part 2


FAU Lecture Notes on Deep Learning

These are the lecture notes for FAU’s YouTube Lecture “Deep Learning”. This is a full transcript of the lecture video & matching slides. We hope you enjoy these notes as much as the videos. Of course, this transcript was created largely automatically using deep learning techniques, and only minor manual modifications were performed. Try it yourself! If you spot mistakes, please let us know!


Navigation

Previous Lecture / Watch this Video / Top Level / Next Lecture


In-the-wild data can also be employed for training weakly-supervised 3-D systems. Image created using gifify. Source: YouTube.

Welcome back to deep learning! So today, we want to continue talking about weakly annotated examples. Today’s topic will particularly focus on 3-D annotations.


Image under CC BY 4.0 from the Deep Learning Lecture.

So welcome back to our lectures on weakly and self-supervised learning, the second part: From sparse annotations to dense segmentations. Now, what does dense mean here? “You dense …”? Probably not.


Image under CC BY 4.0 from the Deep Learning Lecture.

What we are actually interested in is a dense 3-D segmentation. Here, you have an example of a volumetric image, and you can see a couple of slices that we visualize on the left-hand side. Then, we can annotate each of these slices and use them for training, for example, of a 3-D U-net, to produce a full 3-D segmentation. As you might already have guessed, annotating all of the slices one after another, probably with different orientations in order to get rid of the bias that is introduced by the slice orientation, is extremely tedious. So, you don’t want to do that. What we will look into in the next couple of minutes is how to use sparsely sampled slices in order to get a fully automatic 3-D segmentation. This approach is also interesting because it allows for interactive correction.

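As a small sketch of what such sparse annotation could look like, the following example builds a label volume in which only a handful of orthogonal slices carry labels. Everything here is illustrative: the volume size and slice positions are arbitrary, and random masks stand in for real expert segmentations.

```python
import numpy as np

rng = np.random.default_rng(0)
vol_shape = (64, 64, 64)

labels = np.zeros(vol_shape, dtype=np.int64)   # per-voxel class (here: 0/1)
labeled = np.zeros(vol_shape, dtype=bool)      # True where an annotation exists

# Sparsely annotate a few axial slices (first axis fixed) and one sagittal
# slice (third axis fixed), so no single slice orientation dominates the
# supervision signal.
for z in (10, 30, 50):
    labels[z] = rng.integers(0, 2, vol_shape[1:])   # stand-in for an expert mask
    labeled[z] = True
labels[:, :, 20] = rng.integers(0, 2, vol_shape[:2])
labeled[:, :, 20] = True

frac = labeled.mean()  # only a few percent of all voxels carry a label
```

Despite covering only about six percent of the voxels, these slices are what the training described below operates on; all remaining voxels are simply masked out of the loss.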

Image under CC BY 4.0 from the Deep Learning Lecture.

Let’s look into this idea. You train with sparse labels. Typically, we have these one-hot labels, which are essentially one if a voxel is part of the segmentation mask. The mask is either true or false. Then, you get this cross-entropy loss that you essentially backpropagate with. You use exactly those labels that were annotated. But, of course, that is not possible for the non-annotated samples. What you can do is develop this further into something that is called a weighted cross-entropy loss. Here, you multiply the original cross-entropy with an additional weight w. w is set to zero if a voxel is not labeled; otherwise, you can assign a weight that is greater than zero. By the way, if you have this w, you can also extend the approach to be interactive by updating y and w(y). This means that you update the labels over the iterations together with users. If you do so, you can actually work with sparsely annotated 3-D volumes and train algorithms to produce complete 3-D segmentations.

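The weighted cross-entropy described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the exact implementation from any particular paper: the weight vector w zeroes out unlabeled voxels, so they contribute nothing to the loss or the gradient.

```python
import numpy as np

def weighted_cross_entropy(p, y, w, eps=1e-12):
    """Per-voxel weighted binary cross-entropy.

    p : predicted foreground probabilities, shape (N,)
    y : binary labels in {0, 1}, shape (N,)
    w : per-voxel weights; 0 for unlabeled voxels, > 0 otherwise
    """
    ce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # Normalize over the total weight, i.e., over the labeled voxels only.
    return np.sum(w * ce) / max(np.sum(w), eps)

# Three voxels: labeled foreground, labeled background, and one unlabeled.
p = np.array([0.9, 0.2, 0.5])
y = np.array([1.0, 0.0, 0.0])   # the unlabeled voxel's label value is irrelevant
w = np.array([1.0, 1.0, 0.0])   # zero weight masks the unlabeled voxel out

loss = weighted_cross_entropy(p, y, w)
```

Note that changing the prediction for the third, unlabeled voxel leaves the loss unchanged, which is exactly the behavior that lets you train on sparsely annotated volumes.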

Image under CC BY 4.0 from the Deep Learning Lecture.

Let’s look at some takeaway messages. Weakly supervised learning is essentially an approach to omit fine-grained labels because they are expensive to obtain. We try to get away with something that is much cheaper. The core idea is that the label has less detail than the target, and the methods essentially depend on prior knowledge: knowledge about the object, knowledge about the distribution, or even a prior algorithm that can help you refine the labels, as well as weak labels, which we called hints earlier.


We can also use audio cues for weak supervision. Image created using gifify. Source: YouTube.

Typically, this is inferior to fully supervised training, but it is highly relevant in practice because annotations are very costly. Don’t forget about transfer learning. This can also help you. We discussed this quite a bit in earlier lectures. What we have seen here is, of course, related to semi-supervised learning and self-supervised learning.


Image under CC BY 4.0 from the Deep Learning Lecture.

This is also the reason why we will talk next time about the topic of self-supervision and how these ideas have sparked quite a boost in the field over the last couple of years. So, thank you very much for listening, and I am looking forward to seeing you in the next video. Bye-bye!


However, audio may be ambiguous on occasion. Image created using gifify. Source: YouTube.

If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced. If you are interested in generating transcripts from video lectures, try AutoBlog.


Translated from: https://towardsdatascience.com/weakly-and-self-supervised-learning-part-1-ddfdf8377f1d
