eight_Jessen · network · average article quality score: 66
The Flax Deep Learning Framework
Today, while reading some code, I came across a new framework: Flax, developed by Google researchers and built on top of JAX (documentation link; GitHub). JAX is a collection of function transformations, such as just-in-time compilation and automatic differentiation, implemented as a thin wrapper over XLA with an API that is essentially a drop-in replacement for NumPy and SciPy. In fact, one way to get started with JAX is to think of it as an accelerator-backed NumPy. import jax.numpy as np # Will be seamlessly executed on an accelerator such as GPU/TP… (Original · posted 2021-01-11 15:56:30 · 3963 views · 0 comments)
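As a hedged sketch of the NumPy-style API plus function transformations the preview describes (the toy loss function and values below are illustrative, not from the article):

```python
# Illustrative only: a toy loss, differentiated and JIT-compiled with JAX.
import jax.numpy as jnp
from jax import grad, jit

def loss(w, x):
    # Sum of squared errors of a one-parameter linear model.
    return jnp.sum((w * x - 1.0) ** 2)

# grad() builds the derivative d(loss)/dw via automatic differentiation;
# jit() compiles the result with XLA so it can run on CPU/GPU/TPU.
dloss = jit(grad(loss))
g = dloss(2.0, jnp.array([1.0, 2.0]))  # analytic value: 2*(2-1)*1 + 2*(4-1)*2 = 14
print(g)
```

The same function would run unchanged on an accelerator; only the backing device changes.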
Paper Notes (VSR): MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution (ECCV 2020)
MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution. GitHub link. 1. Summary: The authors focus on exploiting the inter-frame and intra-frame information in the multi-frame input. To that end they propose a Temporal Multi-Correspondence Aggregation Module and a Cross-Scale Nonlocal-Correspondence Aggregation Module. Compared with previous video super-resolution work, the role of these two modules, in my view, … (Original · posted 2021-01-06 11:08:20 · 1147 views · 4 comments)
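A toy illustration of the idea behind temporal correspondence aggregation, not MuCAN's actual code: for a query patch in the current frame, find its best matches in a neighboring frame and average them (patch size, frame size, and the exhaustive search are invented for illustration).

```python
# Hypothetical sketch: aggregate the top-k most similar patches from a
# neighboring frame for one query patch. Not the paper's implementation.
import numpy as np

def best_matches(query, frame, k=2, p=3):
    # Slide a p-x-p window over `frame`, scoring by negative L2 distance.
    H, W = frame.shape
    scores, patches = [], []
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            patch = frame[i:i+p, j:j+p]
            scores.append(-np.sum((patch - query) ** 2))
            patches.append(patch)
    order = np.argsort(scores)[::-1][:k]                 # k best patch indices
    return np.mean([patches[o] for o in order], axis=0)  # aggregate by averaging

rng = np.random.default_rng(0)
prev = rng.standard_normal((8, 8))
query = prev[2:5, 3:6].copy()           # plant an exact match in the prev frame
agg = best_matches(query, prev, k=1)
print(np.allclose(agg, query))          # the exact correspondence is recovered
```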
Super-Resolution Paper Notes on Texture Transfer (CVPR 2019–2020): Image SR by Neural Texture Transfer & Learning Texture Transformer Network
1. Image Super-Resolution by Neural Texture Transfer. Code link. 1.1 Summary: With reference-based SR (RefSR), results are reasonably good when the reference image is very similar, but the reference strongly influences the output, and quality suffers when its similarity is low. The authors super-resolve based on texture detail and texture similarity, making the RefSR result less sensitive to how similar the reference is. Instead of matching only at the input as in prior work, they match at multiple feature levels; with multi-scale neural texture transfer, the model benefits more from semantically related Ref patches, and at the input the ref im… (Original · posted 2020-09-04 11:21:41 · 1302 views · 0 comments)
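The feature-level matching the summary describes can be sketched with cosine similarity: each LR feature vector picks its most similar reference feature. This is a minimal sketch with invented feature dimensions, not the paper's network.

```python
# Hypothetical sketch of matching at one feature level by cosine similarity,
# as in reference-based SR. Shapes and values are illustrative only.
import numpy as np

def cosine_match(lr_feats, ref_feats):
    # Normalize rows, then an inner product gives cosine similarity.
    lr = lr_feats / np.linalg.norm(lr_feats, axis=1, keepdims=True)
    ref = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = lr @ ref.T                 # (num_lr, num_ref) similarity matrix
    return sim.argmax(axis=1)        # best Ref match index per LR position

rng = np.random.default_rng(1)
ref = rng.standard_normal((5, 4))
lr = ref[[3, 0, 2]] * 2.0            # scaled copies: cosine ignores magnitude
print(cosine_match(lr, ref))         # -> [3 0 2]
```

Doing this at several feature levels, rather than on raw pixels, is what lets semantically related but photometrically different Ref patches still match.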
Paper Notes: Three Super-Resolution Papers from CVPR 2019: PASSRnet, NatSR, AdaFMNet
1. Learning Parallax Attention for Stereo Image Super-Resolution (CVPR 2019, PASSRnet). 1.1 Method. Input: a stereo image pair; the network super-resolves the left image. The overall architecture is shown in the figure. 1.1.1 Residual Atrous Spatial Pyramid Pooling (ASPP) Module: uses dense pix… (Original · posted 2020-09-02 11:07:29 · 1000 views · 0 comments)
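The core idea of parallax attention can be sketched in a few lines: every left-image position attends to all positions in the same row of the right image (rectified stereo keeps correspondences on the same row). Shapes here are toy values, not PASSRnet's layers.

```python
# Minimal sketch of attention along the horizontal (epipolar) direction.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

W, C = 6, 4                              # row width and channel count (toy)
rng = np.random.default_rng(2)
q = rng.standard_normal((W, C))          # left-image row features (queries)
k = rng.standard_normal((W, C))          # right-image row features (keys)
v = rng.standard_normal((W, C))          # right-image row features (values)

attn = softmax(q @ k.T)                  # (W, W): left pos -> right pos weights
out = attn @ v                           # parallax-aligned right features
print(out.shape, np.allclose(attn.sum(axis=1), 1.0))
```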
Paper Notes: Understanding the VGG Design
Very small (3 × 3) convolution filters. 1. Introduction: Used a smaller receptive window size and a smaller stride in the first convolutional layer. Trained and tested the networks densely over the whole image and over multiple scales. Addressed another impo… (Original · posted 2020-08-11 20:28:42 · 230 views · 0 comments)
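The point of stacking small 3 × 3 filters is that n stacked 3 × 3 convolutions (stride 1) cover the receptive field of one (2n+1) × (2n+1) filter, with fewer parameters and more nonlinearities in between. A quick check of that arithmetic:

```python
# Receptive field of a stack of k-x-k, stride-1 convolutions.
def receptive_field(num_layers, k=3, stride=1):
    r, jump = 1, 1                  # start from a single input pixel
    for _ in range(num_layers):
        r += (k - 1) * jump         # each layer widens the field
        jump *= stride              # stride 1 keeps the jump at 1
    return r

print([receptive_field(n) for n in (1, 2, 3)])  # -> [3, 5, 7]

# Parameter comparison for C input/output channels (ignoring biases):
C = 64
print(3 * (3 * 3 * C * C), 7 * 7 * C * C)       # three 3x3 layers vs one 7x7
```

Three 3 × 3 layers thus match a 7 × 7 receptive field at roughly half the parameter cost (27C² vs 49C²).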
Paper Notes: ResNet
1. Introduction. Problem: vanishing/exploding gradients. Addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging under stochastic gradient descent (SGD) wi… (Original · posted 2020-08-11 20:26:34 · 180 views · 0 comments)
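The core identity behind ResNet: a residual block learns F(x) and outputs F(x) + x, so the skip connection passes gradients through unchanged and the identity map is trivially representable (set F to zero). A toy sketch with a linear F; real blocks use conv–BN–ReLU stacks.

```python
# Illustrative residual block: output = F(x) + x.
import numpy as np

def residual_block(x, W):
    fx = np.maximum(0.0, W @ x)   # F(x): one linear layer + ReLU, for illustration
    return fx + x                 # the skip connection

x = np.array([1.0, -2.0, 3.0])
W = np.zeros((3, 3))              # with F == 0 the block is exactly the identity
print(residual_block(x, W))       # -> [ 1. -2.  3.]
```

This is why stacking many such blocks does not make optimization harder: a deeper network can always fall back to copying its input forward.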
Paper Notes: Understanding the GoogLeNet Architecture
2. Related Work. Starting with LeNet-5 [10], convolutional neural networks (CNNs) have typically had a standard structure: stacked convolutional layers (optionally followed by contrast normalization and max-pooling) followed by one or more fully connect… (Original · posted 2020-08-11 20:03:01 · 367 views · 0 comments)
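GoogLeNet departs from that single-stack structure with the Inception module: several filter sizes run in parallel on the same input and their outputs are concatenated along the channel axis. The "branches" below are stand-in functions, not GoogLeNet's actual layers.

```python
# Structural sketch of an Inception-style module: parallel branches,
# channel-wise concatenation. Branch contents are illustrative stand-ins.
import numpy as np

def inception_like(x, branches):
    # x: (C, H, W) feature map; each branch maps it to (C_i, H, W).
    return np.concatenate([b(x) for b in branches], axis=0)

x = np.ones((4, 5, 5))
branches = [
    lambda t: t[:2],                         # stand-in for a 1x1-conv branch
    lambda t: t * 0.5,                       # stand-in for a 3x3-conv branch
    lambda t: t.max(axis=0, keepdims=True),  # stand-in for the pooling branch
]
out = inception_like(x, branches)
print(out.shape)                             # -> (7, 5, 5): 2 + 4 + 1 channels
```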
Paper Notes: AlexNet
The Architecture. It contains eight learned layers: five convolutional and three fully-connected. ReLU nonlinearity: f(x) = max(0, x). Deep convolutional neural networks with ReLUs train several times faster than their equivalents… (Original · posted 2020-08-11 19:55:56 · 215 views · 0 comments)
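The ReLU from the paper, f(x) = max(0, x), next to a saturating nonlinearity like tanh: for large activations tanh's gradient vanishes while ReLU's stays 1, one reason ReLU networks train faster. The values below are just a quick numerical check.

```python
import numpy as np

def relu(x):
    # The AlexNet nonlinearity: f(x) = max(0, x).
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))                             # -> [0. 0. 3.]

# Gradients at a large activation, x = 5:
relu_grad = 1.0                            # slope of max(0, x) for x > 0
tanh_grad = 1.0 - np.tanh(5.0) ** 2        # nearly zero: tanh saturates
print(relu_grad, float(tanh_grad) < 1e-3)  # -> 1.0 True
```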