Deep Stacked Convolutional Neural Networks

Introduction

“Hello, world!” The name’s Matthew, and this is my first entry in the world of Medium. I’m a data scientist who builds things with numbers and computers. No need to bother with my life story; none of you came here for that.

This article aims to hit two main points:

  1. Developing an intuition for stacking models. I have seen plenty of explain-it-like-I’m-five breakdowns of bagging and boosting, but seldom one of stacking.

  2. Providing a guided example to implement a stacking neural net using Keras’ subclassing technique.

Without further ado…

Intuition and Meta-learners

Alyssa, Bobby, and Calvin (Photo by Brooke Cagle on Unsplash)

Story time

Let’s take ourselves back to the days of schooling and imagine a nice, young man named Calvin. His take-home calculus exam is due tomorrow, and, although the teacher warned that working in groups will weaken his education as a whole, we all know what he’s going to do: later that day we’ll find him with his friends, Alyssa and Bobby, scrambling through the exam as one.

First, let’s shed some light on this group of friends. Alyssa is a stand-up student who gets A’s on every exam, regardless of the subject. She is, however, quite arrogant about her intellect and is unlikely to take someone else’s advice or opinion. Bobby is a bit more average, but he has a certain gift for series convergence, one of the topics covered on this particular test. And Calvin? Well, I hate to say it, but he would be flunking if it weren’t for his two buddies. Thankfully, they don’t mind helping him along.

Days later the test scores are handed back:

Alyssa: 92%

Bobby: 83%

Calvin: 94%

In a rage, Alyssa yells at him, “How is this possible!? You didn’t even learn the material, you numskull!” Chuckling under his breath, Calvin responds, “But I certainly learned you two.” He copied most of Alyssa’s answers, but when her series-related answers differed from Bobby’s, he sided with Bobby. Now he just has to hope there isn’t a pop quiz…

Meta-learning and Ensembles

Let’s start to generalize here: Model A = Alyssa, Model B = Bobby, and, drum-roll please… Model C = Calvin. Both Alyssa and Bobby learned the more traditional way, from textbooks and practice problems, but Calvin took a different approach to learning. Instead of learning the material, he learned the details and intricacies of how Alyssa and Bobby had learned and performed! This is precisely what meta-learning is: learning about learning.

Crossing the bridge to machine learning, we can utilize this concept in a direct way. When a data scientist is poking around the vast space of potential models to use for a classification problem, it can become a daunting task to find something suitable. Even experts who can pinpoint the better models will have several degrees of freedom to explore via hyper-parameters. At the end of the day, we’ll likely have a handful of different models that have varying degrees of performance. Now, how to choose?

Although it’s tempting to just pick the model that has performed the best, know that accuracy (or whatever metric you are using for “model goodness”) is not a full measure of what your entire set of models has learned. Yes, Alyssa may be better than Bobby overall, but Bobby has his strengths, and if we can figure out how to incorporate them together with Alyssa’s, we can potentially outshine both of their individual performances.
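
To make the analogy concrete, here is a minimal, dependency-free Python sketch of stacking. Everything in it is an invented illustration, not the article’s actual implementation: the toy exam data, the deterministic `alyssa` and `bobby` base models, and a simple lookup-table meta-learner standing in for Calvin.

```python
from collections import Counter

# Toy "exam": each question has a topic and a true answer (0 or 1).
# Data and models are illustrative only -- they just mirror the story above.
train = [("algebra", 0), ("algebra", 1), ("series", 0), ("series", 1)] * 5
test = [("algebra", i % 2) for i in range(8)] + [("series", 0), ("series", 1)]

def alyssa(topic, truth):
    # Strong generalist, but systematically wrong on series questions.
    return truth if topic != "series" else 1 - truth

def bobby(topic, truth):
    # Weak overall, but gifted at series convergence.
    return truth if topic == "series" else 1 - truth

def meta_features(topic, truth):
    # The meta-learner sees the base models' answers (plus the topic),
    # never the underlying "material".
    return (topic, alyssa(topic, truth), bobby(topic, truth))

# "Calvin": for each pattern of base predictions, memorize the majority
# true label observed on the training questions.
votes = {}
for topic, truth in train:
    votes.setdefault(meta_features(topic, truth), []).append(truth)
calvin = {k: Counter(v).most_common(1)[0][0] for k, v in votes.items()}

def accuracy(predict):
    return sum(predict(t, y) == y for t, y in test) / len(test)

print(accuracy(alyssa))                                    # 0.8
print(accuracy(bobby))                                     # 0.2
print(accuracy(lambda t, y: calvin[meta_features(t, y)]))  # 1.0
```

The stacked model outperforms both of its base learners because it exploits where each one is reliable. In practice the lookup table would be a real model (a logistic regression, or the subclassed Keras net this article builds toward), trained on out-of-fold base predictions so the meta-learner never sees labels its base models trained on.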
