[Original] Category: Global-View
Fine-grained recognition requires recognizing highly localized attributes of objects while being invariant to their pose and location in the image. Part-based models construct representations by loca…
2017-11-20 09:58:46 · 495 reads
[Original] Highly Efficient Forward and Backward Propagation of Convolutional Neural Networks for Pixelwise Cla…
Contributions: eliminates all the redundant computation in convolution and pooling on images by introducing novel d-regularly sparse kernels. It generates exactly the same results as those by patch-by-pa…
2017-11-06 16:24:24 · 504 reads
[Original] SQUEEZENET
Compelling advantages: smaller CNNs require less communication across servers during distributed training; smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car.
2017-11-05 10:32:17 · 548 reads
[Original] DenseNet--Densely Connected Convolutional Networks
Compelling advantages: alleviates the vanishing-gradient problem; strengthens feature propagation; encourages feature reuse; substantially reduces the number of parameters; no need to…
2017-11-03 22:16:33 · 415 reads
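The dense-connectivity pattern summarized above can be sketched in NumPy. This is a hypothetical simplification: each "layer" here is a random linear map over channels plus ReLU, standing in for the real BN-ReLU-conv composite, but the wiring (every layer sees the concatenation of all earlier feature maps) is the DenseNet idea.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Dense connectivity sketch: each layer receives the channel-wise
    concatenation of ALL preceding feature maps and appends its own
    `growth_rate` new channels; earlier features are reused, never replaced."""
    features = [x]  # list of (H, W, C_i) maps
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)       # reuse all earlier features
        w = rng.standard_normal((inp.shape[-1], growth_rate))
        out = np.maximum(inp @ w, 0.0)                # stand-in layer: linear map + ReLU
        features.append(out)
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
y = dense_block(x, num_layers=4, growth_rate=12, rng=rng)
print(y.shape)  # channels grow linearly: 16 + 4 * 12 = 64
```

The linear channel growth is why DenseNets get away with few parameters per layer: each layer only has to produce a small number of new feature maps.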
[Original] ResNet--Deep Residual Learning for Image Recognition
Key questions: vanishing/exploding gradients hamper convergence from the beginning as the network becomes deeper; with the network depth increasing, accuracy gets saturated (which might be unsurpri…
2017-11-03 20:57:57 · 326 reads
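The residual-learning answer to that degradation question can be sketched as follows. This is a minimal stand-in (plain matrix products for a two-layer F(x); real ResNets use convolutions and batch norm), but it shows why extra depth cannot easily hurt: with F(x) ≈ 0 the block falls back to (the ReLU of) the identity.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Residual learning sketch: the block learns F(x) and outputs
    ReLU(F(x) + x), so the identity mapping is the easy default."""
    h = np.maximum(x @ w1, 0.0)      # first stand-in layer (ReLU)
    fx = h @ w2                      # second stand-in layer
    return np.maximum(fx + x, 0.0)   # skip connection: add the input back

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))
# With zero weights F(x) = 0, so the block reduces to ReLU(x):
w1 = np.zeros((32, 32)); w2 = np.zeros((32, 32))
out = residual_block(x, w1, w2)
assert np.allclose(out, np.maximum(x, 0.0))
```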
[Original] VGG--VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION
Key points: we fix the other parameters of the architecture and steadily increase the depth of the network by adding more convolutional layers, which is feasible due to the use of very small (3 × 3) convol…
2017-11-03 19:47:13 · 271 reads
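The parameter-count argument behind those small 3 × 3 filters is easy to check numerically: two stacked 3 × 3 layers cover the same 5 × 5 receptive field as one 5 × 5 layer, with fewer weights (assuming C channels in and out, biases ignored).

```python
def stacked_3x3_params(C, n_layers):
    """Parameters of n stacked 3x3 conv layers, C channels in and out each."""
    return n_layers * (3 * 3 * C * C)

def single_kxk_params(C, k):
    """Parameters of one kxk conv layer, C channels in and out."""
    return k * k * C * C

C = 64
print(stacked_3x3_params(C, 2), single_kxk_params(C, 5))  # 73728 vs 102400
# Likewise, three 3x3 layers match one 7x7 receptive field: 27*C^2 vs 49*C^2,
# while also inserting extra ReLU nonlinearities between the layers.
```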
[Original] Maxout Networks
Motivation: in multiple dimensions, a maxout unit can approximate arbitrary convex functions. Contributions: maxout is cross-channel pooling; maxout enhances dropout's abilities as a model-averaging techniq…
2017-11-03 19:35:28 · 419 reads
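The convex-approximation claim can be illustrated in a few lines: a maxout unit takes the pointwise max over k affine pieces, and a max of affine functions is always convex. For example, |x| = max(x, −x) is exactly a two-piece maxout unit.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: h(x) = max_j (x @ w_j + b_j), a pointwise max over k
    affine pieces, i.e. cross-channel pooling over k linear feature maps."""
    return np.max(x @ W + b, axis=-1)

W = np.array([[1.0, -1.0]])   # two pieces: x and -x
b = np.zeros(2)
xs = np.linspace(-2, 2, 9).reshape(-1, 1)
assert np.allclose(maxout(xs, W, b), np.abs(xs).ravel())
```

With more pieces, the unit traces a finer piecewise-linear convex hull, which is the sense in which it approximates arbitrary convex activation functions.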
[Original] Network in Network
Key problems: CNN implicitly makes the assumption that the latent concepts are linearly separable; the data for the same concept often live on a nonlinear manifold, therefore the representations that cap…
2017-11-03 17:17:07 · 267 reads
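NIN's answer, the mlpconv layer, can be sketched as stacked 1 × 1 "convolutions": a 1 × 1 convolution is just a shared MLP applied across channels at every spatial position, giving each patch a nonlinear abstraction instead of a single linear filter. Shapes and weights below are illustrative.

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution over an (H, W, C_in) map is a per-pixel linear map
    across channels, (H, W, C_in) @ (C_in, C_out), followed here by ReLU."""
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
w1 = rng.standard_normal((16, 32))
w2 = rng.standard_normal((32, 8))
y = conv1x1(conv1x1(x, w1), w2)   # two stacked 1x1 layers = per-pixel MLP
print(y.shape)                    # spatial size unchanged: (8, 8, 8)
```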
[Original] AlexNet--ImageNet Classification with Deep Convolutional Neural Networks
LRN (Local Response Normalization): applied after the ReLU nonlinearity in certain layers. Overlapping pooling: we generally observe during training that models with overlapping…
2017-11-03 16:48:27 · 334 reads
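The LRN step mentioned above normalizes each post-ReLU activation by a window of n neighbouring channels at the same spatial position: b_c = a_c / (k + α · Σ a_{c'}²)^β, with AlexNet's k = 2, n = 5, α = 1e-4, β = 0.75. A direct (unoptimized) NumPy rendering:

```python
import numpy as np

def local_response_norm(x, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """AlexNet-style LRN over the channel axis of an (H, W, C) map:
    each channel is divided by (k + alpha * sum of squares over a window
    of n neighbouring channels) raised to beta."""
    H, W, C = x.shape
    out = np.empty_like(x)
    half = n // 2
    for c in range(C):
        lo, hi = max(0, c - half), min(C, c + half + 1)  # clamp at the edges
        denom = (k + alpha * np.sum(x[:, :, lo:hi] ** 2, axis=-1)) ** beta
        out[:, :, c] = x[:, :, c] / denom
    return out

rng = np.random.default_rng(0)
a = np.maximum(rng.standard_normal((4, 4, 8)), 0.0)  # post-ReLU activations
b = local_response_norm(a)
print(b.shape)  # (4, 4, 8)
```

Since the denominator is always greater than 1 here, strongly co-activated neighbouring channels suppress each other, the "lateral inhibition" effect the paper describes.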
Missing OpenCV files
2017-07-25
Red-Black Tree (红黑树)
2017-02-13