Paper Reading
Average article quality score: 91
A collection of my own paper-reading notes
wzc-run
Keep learning, or be replaced by AI
Paper Notes | Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Reading notes on the ICLR 2017 paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer", which introduces the sparsely-gated Mixture-of-Experts (MoE) layer, composed of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. Original post, 2024-01-08
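The core idea above — a gating network that activates only a few of many experts per example — can be sketched roughly as follows. This is a minimal NumPy illustration of top-k sparse gating, not the paper's full implementation (the noise term and load-balancing losses are omitted; all names here are hypothetical):

```python
import numpy as np

def top_k_gating(x, W_g, k=2):
    """Keep only the k largest gating logits, softmax over them, zero the rest.
    (Hypothetical sketch of noisy top-k gating with the noise term omitted.)"""
    logits = x @ W_g                        # one logit per expert
    topk = np.argsort(logits)[-k:]          # indices of the k largest logits
    gates = np.zeros_like(logits)
    exp = np.exp(logits[topk] - logits[topk].max())
    gates[topk] = exp / exp.sum()           # sparse softmax: sums to 1
    return gates

def moe_layer(x, W_g, experts, k=2):
    """Weighted sum of the selected experts' outputs; unselected experts
    are never evaluated, which is where the compute savings come from."""
    gates = top_k_gating(x, W_g, k)
    out = np.zeros_like(x, dtype=float)
    for i, g in enumerate(gates):
        if g > 0:                           # skip experts with zero gate
            out += g * experts[i](x)
    return out
```

Because only k of the (potentially thousands of) experts run per example, model capacity grows with the expert count while per-example compute stays roughly constant.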
Paper Notes | Not All Tasks Are Equally Difficult: Multi-Task Reinforcement Learning with Dynamic Depth Routing
Reading notes on "Not All Tasks Are Equally Difficult: Multi-Task Reinforcement Learning with Dynamic Depth Routing", which proposes a Dynamic Depth Routing (D2R) framework that learns to strategically skip certain intermediate modules, flexibly selecting a different number of modules for each task; it further introduces a ResRouting method to address the mismatch between the routing paths of the behavior policy and the target policy during off-policy training. Original post, 2024-01-14
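The depth-routing idea summarized above — a per-task router deciding which intermediate modules to execute and which to skip — can be sketched as follows. This is a hypothetical minimal illustration of the skip-or-apply mechanism only (the learned router, ResRouting, and RL training loop are not shown; all names are my own):

```python
import numpy as np

def route_depth(task_embedding, W_r, threshold=0.5):
    """Per-task binary keep/skip decisions over the intermediate modules:
    a sigmoid score per module, kept when it meets the threshold.
    (Hypothetical sketch; the paper learns this router end-to-end.)"""
    scores = 1.0 / (1.0 + np.exp(-(task_embedding @ W_r)))  # one score per module
    return scores >= threshold

def forward(x, modules, keep_mask):
    """Apply only the modules the router kept; skipped modules act as
    identity, so easy tasks traverse a shallower effective network."""
    for module, keep in zip(modules, keep_mask):
        if keep:
            x = module(x)
    return x
```

The point of the mask is that different tasks get different effective depths from one shared stack of modules, rather than every task paying for the full network.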