Papers of Interest

Feb 28

Deep Image Harmonization
https://arxiv.org/pdf/1702.08502.pdf, segmentation
Unsupervised Image-to-Image Translation Networks

Feb 23

Learning Chained Deep Features and Classifiers for Cascade in Object Detection, Xiaogang
ViP-CNN: A Visual Phrase Reasoning Convolutional Neural Network for Visual Relationship Detection, Xiaogang

Feb 22

Learning Deep Features via Congenerous Cosine Loss for Person Recognition, Yu Liu, Xiaogang Wang
Transferring Face Verification Nets to Pain and Expression Regression, Alan Yuille
Unsupervised Diverse Colorization via Generative Adversarial Networks, Colorization

Jan 11

Revisiting Deep Image Smoothing and Intrinsic Image Decomposition, Qingnan Fan, Baoquan Chen

Jan 10

Deep Feature Interpolation for Image Content Changes, manipulation

Dec 22

https://arxiv.org/pdf/1612.06890.pdf Justin
https://arxiv.org/pdf/1612.07182.pdf NLP
https://arxiv.org/pdf/1612.06933.pdf unsupervised place discovery for visual place classification
https://arxiv.org/pdf/1612.07086.pdf Recurrent Highway Networks with Language CNN for Image Captioning
https://arxiv.org/pdf/1612.07217.pdf Learning Motion Patterns in Videos
https://arxiv.org/pdf/1612.07310.pdf Beyond Holistic Object Recognition: Enriching Image Understanding with Part States

Dec 21

https://arxiv.org/pdf/1612.06851.pdf beyond skip connections (5)
https://arxiv.org/pdf/1612.06573.pdf detecting unexpected obstacles
https://arxiv.org/pdf/1612.06558.pdf semantic segmentation
https://arxiv.org/pdf/1612.06530.pdf grounded visual questions
https://arxiv.org/pdf/1612.06524.pdf 3d human pose estimation

Dec 20

https://arxiv.org/pdf/1612.06371.pdf action recognition (5)
https://arxiv.org/pdf/1612.06321.pdf image retrieval (5)
https://arxiv.org/pdf/1612.06152.pdf few-shot object recognition (5)
https://arxiv.org/pdf/1612.06053.pdf visual tracking
https://arxiv.org/pdf/1612.05877.pdf action recognition
https://arxiv.org/pdf/1612.05872.pdf 3d shape
https://arxiv.org/pdf/1612.05836.pdf EgoTransfer
https://arxiv.org/pdf/1612.05753.pdf Q-learning

Dec 19

https://arxiv.org/pdf/1612.05363.pdf learning residual images for face attribute manipulation (5)
https://arxiv.org/pdf/1612.05322.pdf face detection
https://arxiv.org/pdf/1612.05478.pdf video propagation networks
https://arxiv.org/pdf/1612.05424.pdf unsupervised pixel-level domain adaptation with GAN (5)
https://arxiv.org/pdf/1612.05400.pdf deep residual hashing
https://arxiv.org/pdf/1612.05386.pdf vqa-machine

Dec 16

https://arxiv.org/pdf/1612.05086.pdf Coupling Adaptive Batch Sizes with Learning Rates
https://arxiv.org/pdf/1612.04844.pdf The more you know
https://arxiv.org/pdf/1612.04884.pdf action recognition
https://arxiv.org/pdf/1612.04901.pdf zero-shot learning, CMU
https://arxiv.org/pdf/1612.04904.pdf regressing robust model
https://arxiv.org/pdf/1612.04949.pdf Recurrent Image
https://arxiv.org/pdf/1612.05079.pdf SceneNet
https://arxiv.org/pdf/1612.05234.pdf Visual Compiler

Dec 14, 15

https://arxiv.org/pdf/1612.04357.pdf stacked GAN
https://arxiv.org/pdf/1612.04337.pdf fast style transfer
https://arxiv.org/pdf/1612.04229.pdf recurrent generative model
https://arxiv.org/pdf/1612.03928.pdf more attention

Others

https://arxiv.org/pdf/1611.09969v1.pdf
https://arxiv.org/pdf/1612.00496v1.pdf
https://arxiv.org/pdf/1507.02379.pdf
https://arxiv.org/pdf/1611.08402.pdf
https://arxiv.org/pdf/1611.08303.pdf
https://arxiv.org/pdf/1611.08408.pdf
https://arxiv.org/pdf/1511.07125v1.pdf
https://arxiv.org/pdf/1611.08583.pdf
https://arxiv.org/pdf/1611.08986.pdf
https://arxiv.org/pdf/1611.09078.pdf
https://arxiv.org/pdf/1611.09325.pdf
https://arxiv.org/pdf/1611.09326.pdf
https://arxiv.org/pdf/1611.08588.pdf

Dec 13

https://arxiv.org/pdf/1612.03809.pdf
https://arxiv.org/pdf/1612.03236.pdf
https://arxiv.org/pdf/1612.03242.pdf
https://arxiv.org/pdf/1612.03268.pdf
https://arxiv.org/pdf/1612.03365.pdf
https://arxiv.org/pdf/1612.03550.pdf
https://arxiv.org/pdf/1612.03557.pdf
https://arxiv.org/pdf/1612.03628.pdf
https://arxiv.org/pdf/1612.03630.pdf
https://arxiv.org/pdf/1612.03663.pdf
https://arxiv.org/pdf/1612.03897.pdf

Dec 12

https://arxiv.org/pdf/1612.03144.pdf
https://arxiv.org/pdf/1612.03052.pdf
https://arxiv.org/pdf/1612.03129.pdf

Dec 7

https://arxiv.org/pdf/1612.02372.pdf
https://arxiv.org/pdf/1612.02297.pdf
https://arxiv.org/pdf/1612.02287.pdf
https://arxiv.org/pdf/1612.02177.pdf

Dec 6

https://arxiv.org/pdf/1612.01635.pdf
https://arxiv.org/pdf/1612.01895.pdf
https://arxiv.org/pdf/1612.01958.pdf
https://arxiv.org/pdf/1612.01981.pdf
https://arxiv.org/pdf/1612.01991.pdf
https://arxiv.org/pdf/1612.01887.pdf

Dec 5

https://arxiv.org/pdf/1612.01202.pdf
https://arxiv.org/pdf/1612.01465.pdf
https://arxiv.org/pdf/1612.01380.pdf
https://arxiv.org/pdf/1612.01079.pdf
https://arxiv.org/pdf/1612.01057.pdf
https://arxiv.org/pdf/1612.01051.pdf
https://arxiv.org/pdf/1612.00991.pdf
https://arxiv.org/pdf/1612.00901.pdf
https://arxiv.org/pdf/1612.01230.pdf
https://arxiv.org/pdf/1612.01294.pdf
https://arxiv.org/pdf/1612.01452.pdf
https://arxiv.org/pdf/1612.01479.pdf

Dec 4

  1. https://arxiv.org/pdf/1612.00835.pdf
  2. https://arxiv.org/pdf/1612.00500.pdf
  3. https://arxiv.org/pdf/1612.00522.pdf
  4. https://arxiv.org/pdf/1612.00593.pdf
  5. https://arxiv.org/pdf/1612.00606.pdf
  6. https://arxiv.org/pdf/1612.00686.pdf
  7. https://arxiv.org/pdf/1612.00523.pdf