Meta-Learning

Overview

1. Definition:
The conventional machine-learning paradigm is to collect a large dataset for a specific task and then train a model on that dataset from scratch. This is clearly far from how humans learn, drawing on prior experience to pick up new skills quickly from only a handful of examples.
Meta-learning, also called "learning to learn," aims to acquire the ability to learn itself: knowledge and experience gained on previous tasks are used to guide learning on new tasks.

2. Goal:
The goal of a meta-learner is to train a model over a variety of learning tasks so that new tasks can be solved with only a small number of examples. The challenge is that the model must combine its prior experience with the few examples available for the new task, while avoiding overfitting to that small amount of new data.
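To make the few-shot setting concrete, here is a minimal sketch of how a single N-way K-shot task (an "episode") might be sampled from a class-indexed dataset. The function name, the `data_by_class` layout, and the default sizes are illustrative assumptions, not the interface of any particular library.

```python
import numpy as np

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample one N-way K-shot episode: a small support set for adaptation
    and a query set for evaluating the adapted model.

    data_by_class: dict mapping class label -> NumPy array of examples.
    """
    rng = rng or np.random.default_rng()
    # Pick N classes for this episode.
    classes = rng.choice(list(data_by_class.keys()), size=n_way, replace=False)

    support_x, support_y, query_x, query_y = [], [], [], []
    for episode_label, cls in enumerate(classes):
        examples = data_by_class[cls]
        idx = rng.choice(len(examples), size=k_shot + n_query, replace=False)
        support_x.append(examples[idx[:k_shot]])     # K labeled examples per class
        support_y += [episode_label] * k_shot
        query_x.append(examples[idx[k_shot:]])       # held-out examples for evaluation
        query_y += [episode_label] * n_query

    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))
```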

3. Uses of meta-learning:
Meta-learning is commonly used for hyperparameter and neural-network optimization, discovering good network architectures, few-shot image recognition, and fast reinforcement learning.

4. Categories of meta-learning:
During meta-learning, a model is trained on the tasks in a meta-training set, with two optimizations at work: the learner, which learns each new task, and the meta-learner, which trains the learner (a minimal sketch of this two-level loop follows the list below).
Meta-learning methods are usually grouped into three categories:
(1) recurrent models
(2) metric learning
(3) learning optimizers
Of course, different criteria lead to different groupings.
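As a rough illustration of the learner/meta-learner interplay, the sketch below runs a Reptile-style first-order loop (in the spirit of "On First-Order Meta-Learning Algorithms", listed in the paper collection below) on a hypothetical family of linear-regression tasks. The task distribution, model, and hyperparameters are assumptions chosen for brevity; full MAML would additionally backpropagate through the inner-loop updates.

```python
import numpy as np

def loss_and_grad(w, x, y):
    """Squared-error loss and its gradient for a linear model y ≈ x @ w."""
    err = x @ w - y
    return 0.5 * np.mean(err ** 2), x.T @ err / len(y)

def sample_linear_task(rng, dim=3, n_points=10):
    """Hypothetical task family: linear regression with a random weight vector."""
    w_true = rng.normal(size=dim)
    x = rng.normal(size=(n_points, dim))
    return x, x @ w_true

def reptile(dim=3, meta_steps=2000, inner_steps=5,
            inner_lr=0.02, meta_lr=0.1, seed=0):
    """Two-level optimization: the learner adapts to each sampled task with a
    few SGD steps; the meta-learner nudges the shared initialization toward
    the adapted weights."""
    rng = np.random.default_rng(seed)
    w_meta = rng.normal(scale=0.1, size=dim)      # meta-parameters: shared init
    for _ in range(meta_steps):
        x, y = sample_linear_task(rng, dim)       # draw one training task
        w = w_meta.copy()
        for _ in range(inner_steps):              # learner: task-specific adaptation
            _, g = loss_and_grad(w, x, y)
            w -= inner_lr * g
        w_meta += meta_lr * (w - w_meta)          # meta-learner: move the init
    return w_meta

if __name__ == "__main__":
    w0 = reptile()
    # Adapt the meta-learned initialization to a brand-new task with a few steps.
    rng = np.random.default_rng(42)
    x, y = sample_linear_task(rng)
    w = w0.copy()
    for _ in range(5):
        _, g = loss_and_grad(w, x, y)
        w -= 0.02 * g
    print("loss after 5 adaptation steps:", loss_and_grad(w, x, y)[0])
```

The outer update `w_meta += meta_lr * (w - w_meta)` is what makes this a meta-learner: instead of optimizing for any single task, it moves the shared initialization so that a few inner steps suffice on new tasks.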

Summary

Meta-learning is currently an exciting research trend in machine learning; it tackles the problem of learning how to learn. It is now widely applied in computer vision, natural language processing, and other areas.

Meta-Learning: A Comprehensive Collection of Papers, Videos, and Book Resources

Classic papers and code:

Zero-Shot / One-Shot / Few-Shot Learning
Siamese Neural Networks for One-shot Image Recognition, (2015), Gregory Koch, Richard Zemel, Ruslan Salakhutdinov.
Prototypical Networks for Few-shot Learning, (2017), Jake Snell, Kevin Swersky, Richard S. Zemel.
Gaussian Prototypical Networks for Few-Shot Learning on Omniglot, (2017), Stanislav Fort.
Matching Networks for One Shot Learning, (2017), Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra.
Learning to Compare: Relation Network for Few-Shot Learning, (2017), Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, Timothy M. Hospedales.
One-shot Learning with Memory-Augmented Neural Networks, (2016), Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, Timothy Lillicrap.
Optimization as a Model for Few-Shot Learning, (2016), Sachin Ravi and Hugo Larochelle.
An embarrassingly simple approach to zero-shot learning, (2015), B Romera-Paredes, Philip H. S. Torr.
Low-shot Learning by Shrinking and Hallucinating Features, (2017), Bharath Hariharan, Ross Girshick.
Low-shot learning with large-scale diffusion, (2018), Matthijs Douze, Arthur Szlam, Bharath Hariharan, Hervé Jégou.
Low-Shot Learning with Imprinted Weights, (2018), Hang Qi, Matthew Brown, David G. Lowe.
One-Shot Video Object Segmentation, (2017), S. Caelles, K.K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, L. Van Gool.
One-Shot Learning for Semantic Segmentation, (2017), Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa, Byron Boots.
Few-Shot Segmentation Propagation with Guided Networks, (2018), Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alexei A. Efros, Sergey Levine.
Few-Shot Semantic Segmentation with Prototype Learning, (2018), Nanqing Dong and Eric P. Xing.
Dynamic Few-Shot Visual Learning without Forgetting, (2018), Spyros Gidaris, Nikos Komodakis.
Feature Generating Networks for Zero-Shot Learning, (2017), Yongqin Xian, Tobias Lorenz, Bernt Schiele, Zeynep Akata.
Meta-Learning Deep Visual Words for Fast Video Object Segmentation, (2019), Harkirat Singh Behl, Mohammad Najafi, Anurag Arnab, Philip H.S. Torr.

Model-Agnostic Meta-Learning (MAML)
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, (2017), Chelsea Finn, Pieter Abbeel, Sergey Levine.
Adversarial Meta-Learning, (2018), Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang.
On First-Order Meta-Learning Algorithms, (2018), Alex Nichol, Joshua Achiam, John Schulman.
Meta-SGD: Learning to Learn Quickly for Few-Shot Learning, (2017), Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li.
Gradient Agreement as an Optimization Objective for Meta-Learning, (2018), Amir Erfan Eshratifar, David Eigen, Massoud Pedram.
Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace, (2018), Yoonho Lee, Seungjin Choi.
A Simple Neural Attentive Meta-Learner, (2018), Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, Pieter Abbeel.
Personalizing Dialogue Agents via Meta-Learning, (2019), Zhaojiang Lin, Andrea Madotto, Chien-Sheng Wu, Pascale Fung.
How to train your MAML, (2019), Antreas Antoniou, Harrison Edwards, Amos Storkey.
Learning to learn by gradient descent by gradient descent, (2016), Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas.
Unsupervised Learning via Meta-Learning, (2019), Kyle Hsu, Sergey Levine, Chelsea Finn.
Few-Shot Image Recognition by Predicting Parameters from Activations, (2018), Siyuan Qiao, Chenxi Liu, Wei Shen, Alan Yuille.
One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning, (2018), Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Pieter Abbeel, Sergey Levine.
MetaGAN: An Adversarial Approach to Few-Shot Learning, (2018), Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, Yangqiu Song.
Fast Parameter Adaptation for Few-shot Image Captioning and Visual Question Answering, (2018), Xuanyi Dong, Linchao Zhu, De Zhang, Yi Yang, Fei Wu.
CAML: Fast Context Adaptation via Meta-Learning, (2019), Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson.
Meta-Learning for Low-resource Natural Language Generation in Task-oriented Dialogue Systems, (2019), Fei Mi, Minlie Huang, Jiyong Zhang, Boi Faltings.
MIND: Model Independent Neural Decoder, (2019), Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan.
Toward Multimodal Model-Agnostic Meta-Learning, (2018), Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim.
Alpha MAML: Adaptive Model-Agnostic Meta-Learning, (2019), Harkirat Singh Behl, Atılım Güneş Baydin, Philip H. S. Torr.
Online Meta-Learning, (2019), Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine.

Meta Reinforcement Learning
Generalizing Skills with Semi-Supervised Reinforcement Learning, (2017), Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine.
Guided Meta-Policy Search, (2019), Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn.
End-to-End Robotic Reinforcement Learning without Reward Engineering, (2019), Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, Sergey Levine.
Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, (2019), Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, Sergey Levine.
Task-Agnostic Dynamics Priors for Deep Reinforcement Learning, (2019), Yilun Du, Karthik Narasimhan.
Meta Reinforcement Learning with Task Embedding and Shared Policy, (2019), Lin Lan, Zhenguo Li, Xiaohong Guan, Pinghui Wang.
NoRML: No-Reward Meta Learning, (2019), Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn.
Actor-Critic Algorithms for Constrained Multi-agent Reinforcement Learning, (2019), Raghuram Bharadwaj Diddigi, Sai Koti Reddy Danda, Prabuchandran K. J., Shalabh Bhatnagar.
Adaptive Guidance and Integrated Navigation with Reinforcement Meta-Learning, (2019), Brian Gaudet, Richard Linares, Roberto Furfaro.
Watch, Try, Learn: Meta-Learning from Demonstrations and Reward, (2019), Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn.
Options as responses: Grounding behavioural hierarchies in multi-agent RL, (2019), Alexander Sasha Vezhnevets, Yuhuai Wu, Remi Leblond, Joel Z. Leibo.
Learning latent state representation for speeding up exploration, (2019), Giulia Vezzani, Abhishek Gupta, Lorenzo Natale, Pieter Abbeel.
Beyond Exponentially Discounted Sum: Automatic Learning of Return Function, (2019), Yufei Wang, Qiwei Ye, Tie-Yan Liu.
Learning Efficient and Effective Exploration Policies with Counterfactual Meta Policy, (2019), Ruihan Yang, Qiwei Ye, Tie-Yan Liu.
Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning, (2019), Georgios Papoudakis, Filippos Christianos, Arrasy Rahman, Stefano V. Albrecht.
Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning, (2019), Yufei Wang, Ziju Shen, Zichao Long, Bin Dong.

Books
Hands-On Meta Learning with Python: Meta learning using one-shot learning, MAML, Reptile, and Meta-SGD with TensorFlow, (2019), Sudharsan Ravichandiran.

Blogs
Berkeley Artificial Intelligence Research blog
Meta-Learning: Learning to Learn Fast
Meta-Reinforcement Learning
How to train your MAML: A step by step approach
An Introduction to Meta-Learning
From zero to research — An introduction to Meta-learning
What’s New in Deep Learning Research: Understanding Meta-Learning

Video tutorials
Chelsea Finn: Building Unsupervised Versatile Agents with Meta-Learning
Sam Ritter: Meta-Learning to Make Smart Inferences from Small Data
Model Agnostic Meta Learning by Siavash Khodadadeh
Meta Learning by Siraj Raval
Meta Learning by Hugo Larochelle
Meta Learning and One-Shot Learning
Datasets
The most commonly used datasets:
Omniglot
mini-ImageNet
ILSVRC
FGVC aircraft
Caltech-UCSD Birds-200-2011
Check several other datasets by Google here.

Link to the original collection of meta-learning papers, videos, and book resources:
https://zhuanlan.zhihu.com/p/70044607

