Computer Vision
focus_clam
words and sentences
Vocabulary notes: mingle (to mix, combine) — "mingle the features of the current task with the features of all previous tasks"; knowledge recitation; omit (delete) — "omit the encoder and map an arbitrary sample ID to the corresponding feature map directly"... (Original, posted 2021-01-04)
Continual Learning-Towards reusable network components by learning compatible representations-arxiv2020
Abstract: this paper proposes a first step towards compatible, and hence reusable, network components: split a network into two parts, a feature extractor and a target-task head. Validated on three applications, including unsupervised domain adaptation and transferring classifier... (Original, posted 2020-08-21)
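The split described in the abstract, a shared feature extractor feeding a task-specific head, is the standard two-part decomposition; a minimal numpy sketch (the layer shapes and weights are placeholders, not the paper's architecture):

```python
import numpy as np

# Placeholder weights; shapes are illustrative only.
W_feat = np.ones((4, 3))
W_head = np.ones((3, 2))

def feature_extractor(x, W):
    """Shared, potentially reusable representation."""
    return np.maximum(0.0, x @ W)   # ReLU features

def task_head(h, W):
    """Task-specific layer consuming the shared features."""
    return h @ W

x = np.ones((1, 4))
logits = task_head(feature_extractor(x, W_feat), W_head)
print(logits.shape)  # (1, 2)
```

Reusability then amounts to keeping `feature_extractor` fixed (compatible representations) while swapping in different `task_head` weights per task.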
Action Recognition-Regularization on Spatio-Temporally Smoothed Feature for Action Recognition-CVPR2020
Abstract: 3D convolution kernels have many parameters and overfit easily; a regularization method is proposed. The key idea of RMS is to randomly vary the magnitude of low-frequency components of the feature to regularize the model. Introduction: approaches to overfitting include perturbation-based regularization methods on the input sp... (Original, posted 2020-07-30)
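The RMS idea noted above, randomly rescaling the magnitude of a feature's low-frequency components, can be sketched roughly with an FFT; the cutoff radius and scaling range below are assumptions, not the paper's actual hyperparameters:

```python
import numpy as np

def rms_regularize(feat, cutoff=2, low=0.5, high=1.5, rng=None):
    """Randomly rescale the low-frequency components of a 2-D
    feature map (a rough sketch of the RMS idea, not the paper's code)."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.fftshift(np.fft.fft2(feat))   # low frequencies at center
    h, w = feat.shape
    cy, cx = h // 2, w // 2
    scale = rng.uniform(low, high)              # random magnitude factor
    spec[cy - cutoff:cy + cutoff + 1, cx - cutoff:cx + cutoff + 1] *= scale
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

feat = np.random.rand(8, 8)
out = rms_regularize(feat)
print(out.shape)  # (8, 8)
```

With `low == high == 1.0` the transform reduces to the identity, which is a quick sanity check that only the chosen frequency band is being perturbed.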
VALSE 2020.7.22——Transfer Learning: stones from other hills may serve to polish jade (他山之石,可以攻玉)
Video link (Reposted, 2020-07-22)
Prof. Tian Qi's talk——A brief look at computer vision: past, present, and future
Reference blog post. Challenges: 1) the semantic gap; 2) lack of common sense: computers don't make small mistakes, they make big ones; 3) from planar projection to 3-D perception. Overview of fundamentals: algorithms, compute, and data; detection, recognition, retrieval, tracking, segmentation, enhancement, reconstruction, classification... (Original, posted 2020-07-22)
Continual Learning——Continual Unsupervised Representation Learning——NeurIPS2019
Abstract: unsupervised continual learning (learning representations without any knowledge of task identity). Introduction (gap-setting framing): however, most of these techniques have focused on a sequence of tasks in which both the identity of the task (task label) and b... (Original, posted 2020-07-22)
Action Recognition-Temporal Attentive Alignment for Large-Scale Video Domain Adaptation——ICCV2019
Abstract: image-based domain adaptation; domain shift in videos; two large-scale DA datasets are introduced (UCF-HMDB_full, Kinetics-Gameplay). Introduction: videos can suffer from domain discrepancy along both the spatial and temporal directions; two claimed contributions. Conclusion: th... (Original, posted 2020-07-17)
Action Recognition——Deep Domain Adaptation in Action Space——BMVC2018
Abstract: the problem of domain shift in action videos. Introduction, with a practical motivating example: surveillance cameras are everywhere, be it city streets, marketplaces, buildings, or airports, producing a massive amount of video data that needs to be processed for autonomous understanding... (Original, posted 2020-07-17)
Incremental Learning——Maintaining Discrimination and Fairness in Class Incremental Learning——CVPR2020
Abstract: knowledge distillation; a major cause of catastrophic forgetting is that the weights in the last fully connected layer are highly biased in class-incremental learning. Conclusion: maintain the discrimination via knowledge distillation and maintain the fairness via a... (Original, posted 2020-07-14)
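As a refresher on the distillation term mentioned above: the student is trained to match the old model's temperature-softened output distribution. A minimal numpy sketch (the temperature value is illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional for distillation."""
    p = softmax(teacher_logits, T)    # soft targets from the old model
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

# identical logits -> zero loss
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

In class-incremental training this term is added to the usual cross-entropy on the new classes, so the old model's behavior is preserved on the old classes.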
Incremental Learning——Incremental Learning in Online Scenario——CVPR2020
Abstract: two problems: 1) catastrophic forgetting; 2) as new observations of old classes come sequentially over time, the distribution may change in unforeseen ways, making performance degrade dramatically on future data, which is referred to as concept drift. A new online learning s... (Original, posted 2020-07-14)
Generative Models——NVAE: A Deep Hierarchical Variational Autoencoder——arxiv2020.07
Improvements on the VAE: 1) combining VAE with GAN (GANs train unstably); 2) combining VAE with flow models; 3) VQ-VAE. NVAE (Nouveau VAE) incorporates a multi-scale architecture, depthwise separable convolutions, the swish activation function, flow models, and an autoregressive distribution; latent variables are grouped, with a multi-scale design across groups. Other performance tricks: improved normalization (switching BN to Instance Normalization or Weight Normalization); spectral regularization applied to every convolution layer; flow models to enrich the distribution; memory-saving tricks. Reference material and extension... (Original, posted 2020-07-14)
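For reference, the swish activation mentioned above is simply x·sigmoid(x); a minimal numpy sketch:

```python
import numpy as np

def swish(x):
    """swish(x) = x * sigmoid(x): smooth and non-monotonic,
    behaving like ReLU for large x but differentiable at 0."""
    return x / (1.0 + np.exp(-x))

print(swish(np.array([0.0])))  # [0.]
```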
《Computer Vision》 book——pages 3-25
Chapter 1: 1.2 A brief history; 1.3 Book overview (Original, posted 2020-07-14)
Action Recognition——Multi-Modal Domain Adaptation for Fine-Grained Action Recognition——CVPR2020 oral
Abstract: fine-grained action recognition datasets exhibit environmental bias, where multiple video sequences are captured from a limited number of environments. Exploiting the multi-modal nature of video, the proposed method combines multi-modal self-supervision with adversarial tra... (Original, posted 2020-07-06)
Neural Architecture Search——《Practical Block-wise Neural Network Architecture Generation》 reading notes——CVPR2018
A block-wise network generation pipeline called BlockQNN, which automatically builds high-performance networks using the Q-learning paradigm with an epsilon-greedy exploration strategy. Inception and Res... (Original, posted 2018-05-07)
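The epsilon-greedy exploration mentioned above trades off random exploration against exploiting the best-known Q-value; a generic sketch (not BlockQNN's actual state/action encoding):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a uniformly random action,
    otherwise pick the action with the highest Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

q = [0.1, 0.9, 0.3]
print(epsilon_greedy(q, epsilon=0.0))  # 1  (pure exploitation)
```

In NAS-style search, epsilon typically starts near 1 (explore widely) and decays toward 0 as Q-value estimates become reliable.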
AutoML——《Learning to Teach》——ICLR2018
AutoML: through automation, the machine tries to learn an optimal learning strategy, avoiding the inefficient manual tuning done by machine-learning practitioners. Classic AutoML approaches: (1) Bayesian Optimization for hyperparameter tuning; (2) meta-learning / learning-to-learn techniques for tuning optimizers and network architectures. Whether for traditional machine-learning algorithms or recent AutoML methods, the focus is on how to make AI better... (Original, posted 2018-05-07)
图像合成——《Semi-parametric Image Synthesis》阅读笔记
根据大致的草图框架(语义布局法),深度神经网络现在可以直接合成真实效果的图片。参数模型(parametric models)的优点是具有高度的表现力,可进行端到端的训练。非参数模型(Non-parametric models)的优点是可以在测试时提取大型的真实图片数据集里的素材。集结这两种方法的优势,提出了SIMS(半参数模型)SIMS工作思路: (1)先用大型真实图像数据集先训练非参数...原创 2018-05-07 08:43:14 · 3289 阅读 · 0 评论 -
Caffe: training and testing on your own dataset with the ImageNet network architecture
Goal: use your own dataset to train and test with Caffe's bundled ImageNet network architecture. Official reference: http://caffe.berkeleyvision.org/gathered/examples/imagenet.html. My prepared dataset: http://pan.baidu.com/s/1o60802I. The dataset images are split into 10 classes, each with 100 training images (under the train folder, 1000 in total)... (Reposted, 2017-09-08)
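The Caffe ImageNet example expects a train.txt listing each image's relative path and an integer class label, which `convert_imageset` then packs into an LMDB. A hedged sketch that generates such a listing from a one-folder-per-class layout (the folder names and file layout here are assumptions about your dataset, not part of the official example):

```python
import os

def make_label_file(root, out_path):
    """Write '<class_dir>/<image> <class_index>' lines, one per image,
    in the format Caffe's convert_imageset tool expects."""
    classes = sorted(d for d in os.listdir(root)
                     if os.path.isdir(os.path.join(root, d)))
    with open(out_path, "w") as f:
        for idx, cls in enumerate(classes):
            for name in sorted(os.listdir(os.path.join(root, cls))):
                f.write(f"{cls}/{name} {idx}\n")
    return classes
```

With the listing written, the rest follows the official example: build the LMDB, compute the image mean, then launch training with the bundled solver.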