
Influencing and changing things through ideas, knowledge, and music

  • Blog (13)
  • Resources (6)
  • Favorites
  • Following

Repost: Common graph embedding methods

Reference code: https://github.com/keras-team/keras/blob/master/examples/pretrained_word_embeddings.py Reposted from: https://blog.csdn.net/k284213498/article/details/83474972 keras.layers.embeddings.Embedding(input_dim,...
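As a rough illustration of the Embedding layer mentioned above (a minimal sketch, not taken from the linked code; vocab_size, embedding_dim, max_seq_len and embedding_matrix are placeholder names), loading pretrained vectors into a frozen Keras Embedding layer looks roughly like this:

    from keras.layers import Embedding

    # embedding_matrix: a (vocab_size, embedding_dim) array of pretrained vectors
    embedding_layer = Embedding(input_dim=vocab_size,
                                output_dim=embedding_dim,
                                weights=[embedding_matrix],
                                input_length=max_seq_len,
                                trainable=False)  # keep the pretrained vectors frozen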

2018-06-08 17:38:09 6975

Repost: FM for feature embedding representation and the CTR loss function

FM is a classic algorithm for CTR prediction; its strength is that it learns cross features automatically, which is why FM far outperforms LR on CTR prediction. Note: as the FM formula shows, FM learns crosses automatically by learning a vector representation v_i for each feature x_i. For example, field A may have 1,000,000 possible values; with one-hot encoding, each feature then needs a 1,000,000-dimensional representation. After training FM with, say, 10-dimensional vectors v_i, each fea...
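For reference, the FM prediction the summary alludes to is usually written as (standard formulation, not quoted from the post):

    \hat{y}(x) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle v_i, v_j \rangle \, x_i x_j

so a feature that would need a 1,000,000-dimensional one-hot code is summarised by a single k-dimensional vector v_i (k = 10 in the example above), and every pairwise cross is scored through a dot product <v_i, v_j>.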

2018-06-08 11:49:58 3186

Original: An overview of the tensorflow nmt source code structure

nmt.py: main() -> run_main(train_fn, inference_fn), where train_fn is train() in train.py. In run_main, the flag flags.inference_input_file decides whether to take the train path or the infer path. For infer, the most recent checkpoint is picked up and inference_fn is executed; for train, it follows the train...
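A simplified sketch of that dispatch (my own restatement; the real tensorflow/nmt run_main takes more arguments and does more bookkeeping):

    # simplified view of nmt.py's dispatch between training and inference
    def run_main(flags, default_hparams, train_fn, inference_fn):
        if flags.inference_input_file:
            # infer path: restore the most recent checkpoint and decode the input file
            ckpt = tf.train.latest_checkpoint(flags.out_dir)
            inference_fn(ckpt, flags.inference_input_file,
                         flags.inference_output_file, default_hparams)
        else:
            # train path: train.train() builds the graph and runs the training loop
            train_fn(default_hparams)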

2018-06-05 20:22:01 1451

Repost: The padding operation in Tensorflow convolutions

https://www.jianshu.com/p/05c4f1621c7e I had only a vague understanding of padding in tensorflow until I read the Get2dOutputSizeVerbose function in tensorflow/core/kernels/ops_util.cc, and then it finally made sense; the details are below. The official tensorflow API actually documents this, although the page seems unreachable without a proxy. According to tensorflow's conv2...
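The rule that function implements can be stated in a few lines (my own restatement, not the post's code): with 'SAME' padding the output size depends only on the stride, while 'VALID' applies no padding at all.

    import math

    def conv_output_size(in_size, filter_size, stride, padding):
        # TensorFlow's rule for one spatial dimension
        if padding == 'SAME':
            return math.ceil(in_size / stride)
        if padding == 'VALID':
            return math.ceil((in_size - filter_size + 1) / stride)

    print(conv_output_size(28, 5, 2, 'SAME'))   # 14
    print(conv_output_size(28, 5, 2, 'VALID'))  # 12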

2018-06-29 16:18:40 326

Original: Why a conventional CNN needs a fixed input size while an FCN doesn't

The last few layers of a conventional CNN are fixed-size dense layers, which require fixed-size inputs; otherwise the dense layer .... Meanwhile, an FCN does not have a fixed-size dense layer, so
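A tiny sketch of the difference (my own illustration with made-up sizes, not the post's code): a Flatten + Dense head hard-codes the flattened length, while a 1x1 convolutional head does not care about the spatial size.

    from keras.layers import Dense, Flatten, Conv2D

    # CNN-style head: the Dense weight matrix has shape (H*W*C, num_classes),
    # so the spatial size H x W must already be fixed when the model is built.
    cnn_head = [Flatten(), Dense(10, activation='softmax')]

    # FCN-style head: a 1x1 convolution has weights of shape (1, 1, C, num_classes),
    # independent of H and W, so inputs of any size produce an output map.
    fcn_head = [Conv2D(10, kernel_size=1, activation='softmax')]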

2018-06-27 10:57:06 188

Repost: Deconvolution, UpSampling, and UnPooling

Preface: while reading papers on semantic image segmentation, I noticed that decoder structures sometimes use deconvolution and sometimes unpooling or upsampling. After checking the references I found that the three are in fact different, so I record the differences here. Illustration: three figures make the point. Figure (a) shows UnPooling, whose key feature is that the positions of the maxima are recorded during max pooling; the unpooling stage then uses this information to expand the feature map, filling every position other than the maxima with 0...
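A toy numpy illustration of the distinction in figure (a) (my own example, not from the post): unpooling writes each value back at the position recorded during max pooling and fills the rest with zeros, while plain upsampling simply repeats values.

    import numpy as np

    x = np.array([[1., 6., 3., 8.],
                  [2., 5., 4., 7.]])
    pooled  = np.array([[6., 8.]])     # 2x2 max pooling of x
    indices = [(0, 1), (0, 3)]         # where the maxima came from

    # UnPooling: restore each value to its recorded position, zeros elsewhere
    unpooled = np.zeros_like(x)
    for val, (r, c) in zip(pooled.ravel(), indices):
        unpooled[r, c] = val
    # [[0. 6. 0. 8.]
    #  [0. 0. 0. 0.]]

    # UpSampling (nearest neighbour): just repeat every value
    upsampled = pooled.repeat(2, axis=0).repeat(2, axis=1)
    # [[6. 6. 8. 8.]
    #  [6. 6. 8. 8.]]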

2018-06-21 15:01:01 5581 2

Repost: Fully Convolutional Networks (FCN) explained in detail

Background: CNNs can classify images, but recognising a specific object within an image was still an open problem before 2015. Then Jonathan Long published "Fully Convolutional Networks for Semantic Segmentation", opening up semantic segmentation as a research direction that countless people have pursued since. Fully Convolutional Networks: CNN vs. FCN...

2018-06-21 15:00:21 636

Repost: Transposed Convolution, Fractionally Strided Convolution, or Deconvolution

https://buptldy.github.io/2016/10/29/2016-10-29-deconv/ The concept of deconvolution first appeared in Zeiler's 2010 paper "Deconvolutional networks", although that paper did not give it this name; the term deconvolution was used formally in his later work (Adaptive deconvolutional networks for ...
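For reference, the usual output-size relation for a transposed convolution (standard formula, not quoted from the post) is

    o = s (i - 1) + k - 2p

where i is the input size, k the kernel size, s the stride and p the padding. For example, i = 4, k = 3, s = 2, p = 1 gives o = 2*3 + 3 - 2 = 7, inverting the ordinary convolution that maps a size-7 input to a size-4 output with the same k, s, p.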

2018-06-20 20:47:16 432

Repost: A concrete explanation of the strides parameter in convolution

conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1,1,1,1], padding='SAME') is a common convolution operation, where strides=[1,1,1,1] means a sliding stride of 1 and padding='SAME' means zero padding. When we want a stride of 2 we set strides=[1,2,2,1]. Many readers may not understand what these four numbers stand for; after looking up the official func...
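A small sketch of the stride-2 case (my own example with made-up shapes): in the NHWC layout the four entries of strides are [batch, height, width, channels], and normally only the middle two are changed.

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 28, 28, 1])   # NHWC input
    w = tf.get_variable('w', [5, 5, 1, 32])             # 5x5 kernel, 1 -> 32 channels

    # strides = [batch, height, width, channels]; stride 2 along height and width
    conv = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='SAME')
    print(conv.shape)   # (?, 14, 14, 32) -- spatial size halved by the stride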

2018-06-20 20:18:51 2514 2

Repost: A summary of CTR prediction algorithms for computational advertising

Source: http://hacker.duanshishi.com Preface: most people have at least heard of CTR, especially in online advertising. In short, it is the probability that an ad pushed to the user of some web service will be clicked. The problem can be as easy as a street fortune-teller casually telling you whether today is a good day for a wedding or for moving house, or as elaborate as bringing out the tortoise shells, copper coins and the rest of the kit, bathing, changing robes and burning incense before delivering a prediction that turns out to be complete nonsense and earns you a beating; worse still, in the old days it could even concern the survival of the natio...

2018-06-08 20:13:56 3302

Repost: Understanding tf.clip_by_global_norm

Refer to: https://blog.csdn.net/u013713117/article/details/56281715 Gradient clipping was introduced to deal with gradient explosion or vanishing gradients. When the weight updates in one iteration are too aggressive, the loss can easily diverge. The intuitive effect of gradient clipping is to ...
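A typical usage pattern (a minimal TF1-style sketch, assuming a loss tensor and the hyperparameters max_grad_norm and learning_rate are defined elsewhere):

    import tensorflow as tf

    params = tf.trainable_variables()
    grads = tf.gradients(loss, params)

    # rescale all gradients together so their global norm is at most max_grad_norm
    clipped_grads, global_norm = tf.clip_by_global_norm(grads, max_grad_norm)

    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.apply_gradients(zip(clipped_grads, params))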

2018-06-08 17:38:54 438

Repost: A simple seq2seq implementation - tensorflow

https://github.com/NELSONZHAO/zhihu/blob/master/basic_seq2seq/Seq2seq_char.ipynb Seq2Seq: this notebook implements a basic Seq2Seq model; given a word (a sequence of letters) as input, the model returns the "word" with its letters sorted. A basic Seq2Seq has three main parts: the Encoder, the hidden state vector (connecting the Encoder and Decoder), and the Decoder. See Te...
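The three parts listed above can be sketched roughly like this (my own Keras-style sketch of the same idea, not the notebook's tensorflow code; vocabulary size and dimensions are made up):

    from keras.models import Model
    from keras.layers import Input, LSTM, Dense, Embedding

    num_tokens, latent_dim = 30, 128   # letters plus special symbols; hidden size

    # Encoder: read the input letter sequence and keep only its final state
    enc_in = Input(shape=(None,))
    enc_emb = Embedding(num_tokens, latent_dim)(enc_in)
    _, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)

    # Decoder: start from the encoder state and predict the sorted letters
    dec_in = Input(shape=(None,))
    dec_emb = Embedding(num_tokens, latent_dim)(dec_in)
    dec_seq = LSTM(latent_dim, return_sequences=True)(dec_emb,
                                                      initial_state=[state_h, state_c])
    dec_out = Dense(num_tokens, activation='softmax')(dec_seq)

    model = Model([enc_in, dec_in], dec_out)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')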

2018-06-01 17:14:09 737 5

Repost: Understanding the general structure of convolutional neural networks through AlexNet

From: https://blog.csdn.net/chaipp0607/article/details/72847422/ In 2012 AlexNet won the ImageNet competition and opened the era of deep learning. Although many convolutional network architectures faster and more accurate than AlexNet have appeared since, AlexNet, as the pioneer, still offers plenty worth studying; it set the tone for later CNNs and even for R-CNN and other networks, so below we will...

2018-06-01 11:07:11 421

httpclient tutorial - HttpClient guide

An HttpClient guide with detailed usage and common code samples. The Hyper-Text Transfer Protocol (HTTP) is perhaps the most significant protocol used on the Internet today. Web services, network-enabled appliances and the growth of network computing continue to expand the role of the HTTP protocol beyond user-driven web browsers, while increasing the number of applications that require HTTP support. Although the java.net package provides basic functionality for accessing resources via HTTP, it doesn't provide the full flexibility or functionality needed by many applications. HttpClient seeks to fill this void by providing an efficient, up-to-date, and feature-rich package implementing the client side of the most recent HTTP standards and recommendations. Designed for extension while providing robust support for the base HTTP protocol, HttpClient may be of interest to anyone building HTTP-aware client applications such as web browsers, web service clients, or systems that leverage or extend the HTTP protocol for distributed communication.

2018-03-08

Mask R-CNN paper

We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.

2018-03-07

Applying Deep Learning To Answer Selection

Applying Deep Learning To Answer Selection- A Study And An Open Task

2018-03-07

Learning Phrase Representations using RNN Encoder–Decoder

Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation

2018-03-07

BPTT BackPropagation Through Time.pdf

BPTT paper. This report provides a detailed description and the necessary derivations for the BackPropagation Through Time (BPTT) algorithm. BPTT is often used to train recurrent neural networks (RNNs). Unlike feed-forward neural networks, an RNN can encode information from further back in time, which makes it well suited to sequential models. BPTT extends the ordinary BP algorithm to the recurrent architecture.
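The core result can be summarised in one line (standard BPTT decomposition, not quoted from the report): with hidden states h_t = f(W, h_{t-1}, x_t) and per-step losses L_t, the gradient with respect to the shared weights accumulates contributions from every earlier time step,

    \frac{\partial L}{\partial W}
      = \sum_t \frac{\partial L_t}{\partial W}
      = \sum_t \sum_{k \le t} \frac{\partial L_t}{\partial h_t}
        \left( \prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}} \right)
        \frac{\partial^{+} h_k}{\partial W},

where \partial^{+} h_k / \partial W denotes the immediate partial derivative of h_k with respect to W; this is exactly ordinary backpropagation applied to the network unrolled over time.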

2018-03-07
