PhD Research
focus_clam
Common Engineering Notes (5)
Logits in neural networks (reference 1): logits are the outputs of the final fully connected layer, before softmax, i.e., the unnormalized scores. PyTorch pitfalls: the differences between assignment, shallow copy, and deep copy, and the gotchas of model.state_dict() and model.load_state_dict() (reference 2); plain-Python assignment, shallow copy, and deep copy are also worth understanding. view in Python vs. view and view_as in PyTorch (reference 3). Saving models in PyTorch. The Python ternary expression: i = 5 if a > 7 else …
Original · 2022-04-06 21:02:15 · 1269 reads · 0 comments
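The assignment / shallow-copy / deep-copy distinction mentioned above can be sketched in plain Python; values are made up for illustration, and the truncated ternary is completed with an illustrative else-branch:

```python
import copy

a = [[1, 2], [3, 4]]
b = a                 # plain assignment: b and a are the same object
c = copy.copy(a)      # shallow copy: new outer list, shared inner lists
d = copy.deepcopy(a)  # deep copy: fully independent nested structure

a[0][0] = 99
print(b[0][0])  # 99: assignment shares everything
print(c[0][0])  # 99: a shallow copy still shares nested objects
print(d[0][0])  # 1:  a deep copy does not

# Ternary expression (else-value is illustrative, since the excerpt is cut off):
x = 10
i = 5 if x > 7 else 0
print(i)  # 5
```

The same distinction underlies the model.state_dict() gotcha the post refers to: the returned dict holds references to the live parameter tensors, which is why a deep copy is commonly recommended when snapshotting a model's state.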
ICLR2021-Contextual Transformation Networks for Online Continual Learning
Introduction: the motivation is online continual learning with a fixed architecture, achieved by modeling task-specific features. Contextual Transformation Network (a base network and a controller). What is episodic memory? Episodic memory is used to train the base model. What is semantic …
Original · 2022-03-24 11:20:49 · 235 reads · 0 comments
Common Techniques Notes (4)
Attention, self-attention, and co-attention: attention and self-attention share the same computation; only the operands differ. Attention runs from source to target, while self-attention runs from source to source (reference 1). Co-attention vs. attention (reference 2). *args and **kwargs. Training ResNet-32 on CIFAR-100 (reference 3). Refer…
Original · 2022-03-24 11:17:29 · 2685 reads · 0 comments
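Since attention and self-attention share the same computation and differ only in where the query, keys, and values come from, a single scaled dot-product routine covers both. A pure-Python sketch; the vectors and names below are illustrative assumptions, not from the referenced materials:

```python
import math

def softmax(xs):
    m = max(xs)                        # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

src = [[1.0, 0.0], [0.0, 1.0]]  # source sequence
tgt = [0.5, 0.5]                # one target position

cross = attention(tgt, src, src)        # attention: query from target, keys/values from source
self_out = attention(src[0], src, src)  # self-attention: everything from the same sequence
```

The only difference between the two calls is which sequence supplies the query, which is exactly the point the excerpt makes.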
Code Implementation - CVPR2020 - Dynamic Convolution: Attention over Convolution Kernels
Code reference: link. Pay special attention to the issues section: parameter initialization matters a lot for dynamic convolution, and the bias initialization is wrong and must be fixed per the issue: self.bias = nn.Parameter(torch.zeros(K, out_planes)). Paper-understanding material: link. Core idea: instead of all samples sharing one fixed set of convolution parameters, each input sample generates its own set; during training there are K kernel parameter sets, the attention module produces K weights, and their weighted combination yields the per-sample kernel. Also, high-dimensional features are well suited as input to dynamic convolution, which needs tuning to the actual task…
Original · 2022-03-01 10:22:30 · 365 reads · 0 comments
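The core idea summarized above (a per-sample kernel assembled from K candidate kernels via attention weights) can be sketched without any framework; the flattened kernels and names below are made-up illustrations, not the repository's code:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dynamic_kernel(kernels, attn_logits):
    """Aggregate K candidate kernels into one kernel for the current sample.

    kernels: K flattened kernels (lists of floats, same length);
    attn_logits: the K scores the attention branch produced for this input.
    """
    pi = softmax(attn_logits)
    n = len(kernels[0])
    return [sum(p * k[i] for p, k in zip(pi, kernels)) for i in range(n)]

# Two candidate kernels; the attention branch favors the first for this sample.
kernels = [[1.0, 0.0], [0.0, 1.0]]
agg = dynamic_kernel(kernels, [2.0, 0.0])
```

With equal logits the aggregation degenerates to a plain average of the K kernels, which is one way to see why the initialization discussed in the issue matters: early in training the attention should stay close to uniform.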
Useful Research Tools (1)
Paper-reading tools: the ReadPaper platform for efficient paper reading (link); Mendeley.
Original · 2022-03-01 10:13:58 · 139 reads · 0 comments
NeurIPS2021-DualNet: Continual Learning, Fast and Slow
Abstract: builds on the neuroscience theory of complementary learning systems: the fast learner captures information specific to the current task (supervised), while the slow learner captures general-purpose representations (self-supervised). Introduction: the main focus of this study is exploring how the CLS theory can motivate a general continual learning framework with a better trade-off betwe…
Original · 2022-02-24 13:19:24 · 2350 reads · 0 comments
Paper Reading - TNNLS2021 - Elastic Knowledge Distillation by Learning from Recollection
Abstract: exploit the useful information in the model's training history to help learning; how recollections are built and used; different capacities and different training stages give rise to different recollections; a similarity-based elastic knowledge distillation algorithm. Introduction: the outputs from each previous training epoch (the training history), each corresponding to one recollection; how to build a suitable recollection; two observations: 1. ResNet-1…
Original · 2022-02-22 09:41:33 · 2300 reads · 0 comments
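The elastic, similarity-based weighting is specific to the paper, but the distillation term it builds on is the standard temperature-scaled KL between a softened teacher (here, a recollection) and the student. A minimal pure-Python sketch under that assumption:

```python
import math

def softmax_t(logits, T):
    """Temperature-softened softmax."""
    m = max(logits)
    es = [math.exp((x - m) / T) for x in logits]
    s = sum(es)
    return [e / s for e in es]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax_t(teacher_logits, T)
    q = softmax_t(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero loss; a mismatch gives a positive penalty.
same = kd_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])
diff = kd_loss([2.0, 1.0, 0.5], [1.0, 2.0, 0.5])
```

In the paper's setting, the per-recollection similarity would decide how strongly each such term is weighted; that elastic weighting is not reproduced here.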
Internet Job Hunting (Algorithm Roles): Writing a Chinese Résumé
Internet job hunting (algorithm roles): writing a Chinese résumé.
Original · 2022-02-15 23:31:29 · 514 reads · 0 comments
words and sentences
2021-01-04: mingle (to mix, to combine) the features of the current task with the features of all previous tasks; knowledge recitation (reciting from memory); omit (to leave out) the encoder and map an arbitrary sample ID to the corresponding feature map directly…
Original · 2021-01-04 15:20:12 · 254 reads · 0 comments
2020-12-28 Zotero (ZotFile + OneDrive) + Windows + iPad (Apple Pencil) + Mac: A Synced Literature-Reading Setup
After searching many tutorials online, this summarizes the experience and the tutorials that actually worked. Links are given below; the configuration that finally succeeded was tutorial 1's ZotFile combined with a sync drive (OneDrive was used). OneDrive gives more storage when registered with an education account. Tutorial 1; Tutorial 2…
Original · 2020-12-28 19:56:23 · 2171 reads · 0 comments
2020-12-17 Research Reflections and Blogs Shared by Senior Researchers
"Sleepless Last Night", Prof. Cheng Daizhan's blog; "Why I Fled Research"; ScienceNet blogs, the global Chinese science blog circle.
Original · 2020-12-17 21:30:51 · 145 reads · 0 comments
Continual Learning - Towards Reusable Network Components by Learning Compatible Representations - arXiv 2020
Abstract: this paper proposes a first step towards compatible, and hence reusable, network components: split a network into two parts, a feature extractor and a target-task head. Validated on three applications: unsupervised domain adaptation, transferring classifier…
Original · 2020-08-21 16:22:54 · 131 reads · 0 comments
Action Recognition - Regularization on Spatio-Temporally Smoothed Feature for Action Recognition - CVPR2020
Abstract: 3D convolution kernels overfit easily because of their large parameter count; a regularization method is proposed; the key idea of RMS is to randomly vary the magnitude of low-frequency components of the feature to regularize the model. Introduction: existing remedies for overfitting include perturbation-based regularization methods on the input sp…
Original · 2020-07-30 15:33:08 · 479 reads · 0 comments
VALSE 2020-07-22 - Transfer Learning: Stones from Other Hills Can Polish Jade
Video link.
Reposted · 2020-07-22 22:24:51 · 389 reads · 2 comments
Prof. Tian Qi's Talk - A Brief Look at Computer Vision: Past, Present, and Future
Reference blog. Challenges: 1) the semantic gap; 2) lack of common sense: computers do not make small mistakes, they make big ones; 3) from planar projection to 3D perception. Overview of fundamentals: algorithms, compute, and data; detection, recognition, retrieval, tracking, segmentation, enhancement, reconstruction, classification…
Original · 2020-07-22 21:28:40 · 441 reads · 0 comments
Continual Learning - Continual Unsupervised Representation Learning - NeurIPS2019
Abstract: unsupervised continual learning (learning representations without any knowledge of task identity). Introduction: gap-identifying framing: however, most of these techniques have focused on a sequence of tasks in which both the identity of the task (task label) and b…
Original · 2020-07-22 10:35:04 · 563 reads · 0 comments
Action Recognition - Temporal Attentive Alignment for Large-Scale Video Domain Adaptation - ICCV2019
Abstract: image-based domain adaptation; domain shift in videos; two large-scale DA datasets (UCF-HMDB_full, Kinetics-Gameplay). Introduction: videos can suffer from domain discrepancy along both the spatial and temporal directions. Two claimed contributions. Conclusion: th…
Original · 2020-07-17 17:55:19 · 5217 reads · 0 comments
Action Recognition - Deep Domain Adaptation in Action Space - BMVC2018
Abstract: the problem of domain shift in action videos. Introduction: a real-world motivating example: surveillance cameras are everywhere, be it city streets, marketplaces, buildings, or airports, and a massive amount of video data needs to be processed for autonomous understanding…
Original · 2020-07-17 17:52:58 · 234 reads · 0 comments
Continual Learning - Neural Topic Modeling with Continual Lifelong Learning - ICML2020
Abstract: continual learning + unsupervised topic modeling. "Lifelong machine learning for natural language processing", EMNLP2016; "Topic modeling using topics from many domains, lifelong learning and big data", ICML2014. Difficulty: data sparsity (in a small collection…
Original · 2020-07-14 15:26:59 · 541 reads · 0 comments
Continual Learning - Optimal Continual Learning has Perfect Memory and is NP-HARD - ICML2020
Abstract: the main finding is that such optimal continual learning algorithms generally solve an NP-hard problem and require perfect memory to do so. Introduction: methods are grouped into three categories: regularization-based, replay-based, and Bayesian/variationally Bayesian; a further line learns a separate set of parameters per task; e.g.…
Original · 2020-07-14 11:20:12 · 361 reads · 0 comments
Generative Models - NVAE: A Deep Hierarchical Variational Autoencoder - arXiv 2020.07
VAE improvements: 1) combining VAE with GANs (whose drawback is unstable training); 2) combining VAE with flow models; 3) VQ-VAE. NVAE (Nouveau VAE) combines a multi-scale architecture, separable convolutions, the Swish activation, flow models, and more; the autoregressive distribution groups the latent variables, with a multi-scale design across groups. Further performance tricks: improved normalization (switching BN to Instance Normalization or Weight Normalization); spectral regularization applied to every convolution layer; flow models to enrich the distribution; memory-saving techniques. Reference materials and extensions…
Original · 2020-07-14 10:03:21 · 3779 reads · 0 comments
The "Computer Vision" Book - Pages 3-25
Chapter 1: 1.2 A brief history; 1.3 Book overview.
Original · 2020-07-14 09:30:59 · 152 reads · 0 comments
Action Recognition - Multi-Modal Domain Adaptation for Fine-Grained Action Recognition - CVPR2020 Oral
Abstract: fine-grained action recognition datasets exhibit environmental bias, where multiple video sequences are captured from a limited number of environments. Exploiting the multi-modal nature of video, the proposed method combines multi-modal self-supervision with adversarial tra…
Original · 2020-07-06 22:11:53 · 1526 reads · 0 comments
Action Recognition - A Beginner's Introduction
Definition: action recognition looks like an extension of image classification to multiple frames, followed by aggregating the per-frame predictions. Background: the traditional pipeline is video input => feature extraction => feature fusion => feature classification => result; deep learning methods include single-stream, two-stream, skeleton-based feature extraction, and ROI-based representations. Traditional methods: the DT (Dense Trajectories) algorithm uses the optical-flow field to obtain trajectories in a video sequence, extracts trajectory-shape, HOF, HOG, and MBH features along the trajectories, encodes the features with BoF (Bag of Features), and finally trains an SVM classifier on the encodings. The iDT algorithm improves on DT in the following ways:…
Original · 2020-07-06 19:57:36 · 1917 reads · 0 comments
Continual Learning - Automatic Recall Machines: Internal Replay, Continual Learning and the Brain - arXiv 2020.06
Abstract: among replay-based methods, presents one where the auxiliary samples are generated on the fly (the motivation being reduced memory overhead), with neuroscience-inspired arguments added to strengthen the motivation. Introduction: the ability to learn from sequential or non-stationary data (humans compared with neural networks); surveys replay-style methods. The goal of this work, Aut…
Original · 2020-07-06 11:10:32 · 2388 reads · 0 comments
Continual Learning - "Selfless Sequential Learning" - ICLR2019
Abstract: sequential learning = lifelong learning = incremental learning = continual learning; considers the fixed-model-capacity scenario, where the learning process should account for future tasks to be added and thus leave enough capacity for them (not selfish…
Original · 2020-07-01 16:57:41 · 338 reads · 0 comments
Incremental Learning - "Insights from the Future for Continual Learning" - arXiv 2020.06
Abstract: proposes a new continual learning scenario, prescient continual learning (the model is tested not only on past and current classes but must also account for future classes). Inspired by zero-shot learning, proposes the Ghost Model: a generative model of the representation space plus careful fine-tuning of the loss function. Introduction: the future classes (no training samples); the authors argue this setting requires the model to know the class…
Original · 2020-06-29 12:21:09 · 354 reads · 2 comments
How to Read a Paper (in Computer Science)
Three passes. First pass: spend a few minutes skimming the abstract, introduction, the opening of each section, and the conclusion, and quickly scan the references; five criteria decide whether to read further. Second pass: focus first on the figures and tables, then on the paper's main points (skipping details such as proofs), and note down the key references. Third pass: carefully check every proof, assumption, and claim; at this step also list the paper's strengths and weaknesses as points of focus (a source of new ideas…). Reference blogs and materials.
Original · 2020-06-29 10:03:57 · 542 reads · 0 comments