AI Resources Digest: Issue 65 (20170805)

1.【Quora】Rank the most important factors during a PhD, which will increase your probability of finding a good faculty position?

Summary:

A2A: It's hard to make a simple rank-ordered list here because these factors combine in complicated, non-linear ways. So let me just say what we end up discussing in our faculty-hiring process in various departments in CMU SCS. Perhaps that will give you some idea about what you should work on.

Original link: https://www.quora.com/Rank-the-most-important-factors-during-a-PhD-which-will-increase-your-probability-of-finding-a-good-faculty-position/answer/Scott-E-Fahlman?ref=fb_page


2.【Blog】Faces recreated from monkey brain signals

Summary:


The brains of primates can resolve different faces with remarkable speed and reliability, but the underlying mechanisms are not fully understood.

The researchers showed pictures of human faces to macaques and then recorded patterns of brain activity.

The work could inspire new facial recognition algorithms, they report.

In earlier investigations, Professor Doris Tsao from the California Institute of Technology (Caltech) and colleagues had used functional magnetic resonance imaging (fMRI) in humans and other primates to work out which areas of the brain were responsible for identifying faces.

Six areas were found to be involved, all of which are located in part of the brain known as the inferior temporal (IT) cortex. The researchers described these six areas as "face patches".

Original link: http://www.bbc.com/news/science-environment-40131242


3.【Paper】A Fast Unified Model for Parsing and Sentence Understanding

Summary:

Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they usually operate on parsed sentences and they do not directly support batched computation. We address these issues by introducing the Stack-augmented Parser-Interpreter Neural Network (SPINN), which combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shift-reduce parser. Our model supports batched computation for a speedup of up to 25× over other tree-structured models, and its integrated parser can operate on unparsed data with little loss in accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models.
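
To make the tree-sequence idea concrete, here is a minimal conceptual sketch of the shift-reduce loop the abstract describes. The names and the `compose` function are assumptions for illustration only; the actual SPINN model uses a learned TreeLSTM-style composition function and a trained transition classifier.

```python
def shift_reduce_encode(word_vectors, transitions, compose):
    """Encode a sentence by interleaving SHIFT and REDUCE transitions.

    word_vectors : token embeddings in sentence order
    transitions  : sequence of 'SHIFT' / 'REDUCE' actions, taken from a parse
                   or predicted on the fly for unparsed input
    compose      : function mapping (left_child, right_child) -> parent vector
    """
    buffer = list(word_vectors)  # tokens still waiting to be shifted
    stack = []                   # representations of partially built subtrees
    for action in transitions:
        if action == 'SHIFT':
            stack.append(buffer.pop(0))         # push the next token embedding
        else:  # 'REDUCE'
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))  # merge the top two subtrees
    return stack[-1]             # a single vector for the whole sentence
```

Running the tree computation as one flat sequence of transitions is what allows many sentences to be processed in a single batch, which is where the reported speedup over other tree-structured models comes from.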

Original link: https://arxiv.org/pdf/1603.06021.pdf


4.【Blog】Hierarchical Softmax

Summary:

Hierarchical softmax is an alternative to the softmax in which the probability of any one outcome depends on a number of model parameters that is only logarithmic in the total number of outcomes. In “vanilla” softmax, on the other hand, the number of such parameters is linear in the total number of outcomes. In a case where there are many outcomes (e.g. in language modelling) this can be a huge difference. The consequence is that models using hierarchical softmax are significantly faster to train with stochastic gradient descent, since only the parameters upon which the current training example depends need to be updated, and fewer updates mean we can move on to the next training example sooner. At evaluation time, hierarchical softmax models allow faster calculation of individual outcomes, again because they depend on fewer parameters (and because the calculation using the parameters is just as straightforward as in the softmax case). So hierarchical softmax is very interesting from a computational point of view. By explaining it here, I hope to convince you that it is also interesting conceptually. To keep things concrete, I’ll illustrate using the CBOW learning task from word2vec (and fasttext, and others).
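
As a rough sketch of the computation (the names below are simplified assumptions; the real word2vec implementation builds a Huffman tree over the vocabulary and also derives the corresponding gradient updates), the probability of a single outcome is a product of sigmoids along its root-to-leaf path:

```python
import numpy as np

def hierarchical_softmax_prob(context_vec, path_nodes, path_signs, node_vectors):
    """Probability of one outcome (e.g. the target word in CBOW).

    context_vec  : averaged context embedding, shape (d,)
    path_nodes   : indices of the internal tree nodes on the root-to-leaf path
    path_signs   : +1 / -1 per node, encoding a left or right turn at that node
    node_vectors : one d-dimensional parameter vector per internal node
    """
    prob = 1.0
    for node, sign in zip(path_nodes, path_signs):
        # one sigmoid per internal node on the path: about log2(V) of them,
        # versus the V output vectors a flat softmax would touch
        prob *= 1.0 / (1.0 + np.exp(-sign * node_vectors[node] @ context_vec))
    return prob
```

With a balanced tree over a vocabulary of, say, one million words, each training example touches only about 20 node vectors instead of a million output vectors.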

Original link: http://building-babylon.net/2017/08/01/hierarchical-softmax/


5.【Blog】How to Visualize Your Recurrent Neural Network with Attention in Keras

Summary:


Neural networks are taking over every part of our lives. In particular, thanks to deep learning, Siri can fetch you a taxi using your voice; and Google can enhance and organize your photos automagically. Here at Datalogue, we use deep learning to structurally and semantically understand data, allowing us to prepare it for use automatically.

Neural networks are massively successful in the domain of computer vision. Specifically, convolutional neural networks (CNNs) take images and extract relevant features from them by using small windows that travel over the image. This understanding can be leveraged to identify objects from your camera (Google Lens) and, in the future, even drive your car (NVIDIA).
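
As a tiny illustration of the "small windows" idea (the shapes and layer sizes are assumptions, not taken from the post), a Keras convolutional layer slides 3×3 filters across the image to extract local features:

```python
import tensorflow as tf

# A 3x3 window (kernel) travels over a 64x64 RGB image and produces 32 feature maps.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu',
                           input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),          # summarize each feature map
    tf.keras.layers.Dense(10, activation='softmax'),   # e.g. 10 object classes
])
model.summary()
```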

Original link: https://medium.com/datalogue/attention-in-keras-1892773a4f22


Reposted from: https://my.oschina.net/u/3579120/blog/1533426
