Artificial Intelligence Resource Digest: Issue 57 (20170525)

Copyright notice: This is an original post by the author; reproduction without permission is prohibited. https://blog.csdn.net/CoderPai/article/details/80344409

Author: chen_h
WeChat & QQ: 862251340
WeChat public account: coderpai


1. [Blog] How to Build a Recurrent Neural Network in TensorFlow

Summary:

In this tutorial I’ll explain how to build a simple working Recurrent Neural Network in TensorFlow. This is the first in a series of seven parts where various aspects and techniques of building Recurrent Neural Networks in TensorFlow are covered. A short introduction to TensorFlow is available here. For now, let’s get started with the RNN!
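The tutorial builds its RNN in TensorFlow; as a rough illustration of the recurrence it implements (not the tutorial's own code), here is a minimal pure-Python sketch of a vanilla RNN step, h_t = tanh(W_xh·x_t + W_hh·h_{t-1} + b), unrolled over a sequence. All weight values in the usage example are made up for demonstration.

```python
import math

def rnn_step(x, h, W_xh, W_hh, b):
    """One vanilla RNN step: h_new[i] = tanh(W_xh[i]·x + W_hh[i]·h + b[i])."""
    return [
        math.tanh(
            sum(W_xh[i][j] * x[j] for j in range(len(x)))
            + sum(W_hh[i][k] * h[k] for k in range(len(h)))
            + b[i]
        )
        for i in range(len(h))
    ]

def rnn_forward(xs, h0, W_xh, W_hh, b):
    """Unroll the RNN over an input sequence; return all hidden states."""
    h, states = h0, []
    for x in xs:
        h = rnn_step(x, h, W_xh, W_hh, b)
        states.append(h)
    return states
```

The key point the tutorial elaborates is that the same weights (W_xh, W_hh, b) are reused at every time step, which is what lets the network handle sequences of arbitrary length.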

Original link: http://www.kdnuggets.com/2017/04/build-recurrent-neural-network-tensorflow.html


2. [Blog] Overview of Artificial Intelligence and Role of Natural Language Processing in Big Data

Summary:

Natural Language Processing (NLP) is the "ability of machines to understand and interpret human language the way it is written or spoken."
The objective of NLP is to make computers and machines as intelligent as human beings at understanding language.
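The first step of almost any NLP pipeline of the kind the post surveys is turning raw text into tokens and counting them. A minimal sketch (illustrative only; the post itself discusses NLP at a conceptual level and contains no code):

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def word_frequencies(text):
    """Token counts: the simplest statistical view of a document."""
    return Counter(tokenize(text))
```

Everything more sophisticated (part-of-speech tagging, parsing, semantics) builds on a tokenization step like this one.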

Original link: https://www.xenonstack.com/blog/overview-of-artificial-intelligence-and-role-of-natural-language-processing-in-big-data


3. [Code] Deep Image Analogy

Summary:

Deep Image Analogy is a technique to find semantically meaningful dense correspondences between two input images. It adapts the notion of image analogy with features extracted from a deep convolutional neural network.
Deep Image Analogy was initially described in a SIGGRAPH 2017 paper.
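At its core, a dense correspondence maps each location of one image to the most similar location of the other, with similarity measured in CNN feature space rather than in raw pixels. The sketch below is a deliberately simplified illustration of that matching step: the real method uses VGG features, bidirectional constraints, and a PatchMatch-style search, none of which this toy brute-force nearest-neighbor captures.

```python
def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def dense_correspondence(feats_a, feats_b):
    """For each feature vector of image A, return the index of the
    most similar feature vector of image B (brute-force search)."""
    return [max(range(len(feats_b)), key=lambda j: cosine(fa, feats_b[j]))
            for fa in feats_a]
```

Matching in feature space is what makes the correspondences "semantically meaningful": two patches match because they depict the same kind of content, not because their pixels agree.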

Original link: https://github.com/msracver/Deep-Image-Analogy


4. [Blog] Convolutional Methods for Text

Summary:

  • RNNs work great for text, but convolutions can do it faster.
  • Any part of a sentence can influence the semantics of a word. For that reason we want our network to see the entire input at once.
  • Getting that big a receptive field can make gradients vanish and our networks fail.
  • We can solve the vanishing gradient problem with DenseNets or dilated convolutions.
  • Sometimes we need to generate text. We can use "deconvolutions" to generate arbitrarily long outputs.
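The dilated convolutions mentioned above widen the receptive field by having each kernel tap skip dilation − 1 positions, so stacking layers with dilations 1, 2, 4, … grows the receptive field exponentially with depth. A minimal 1-D sketch (pure Python, "valid" padding; the post discusses the idea in the context of deep learning frameworks rather than giving this code):

```python
def dilated_conv1d(seq, kernel, dilation=1):
    """1-D convolution with holes: tap k of the kernel reads the input
    at offset k * dilation, so the output covers a span of
    (len(kernel) - 1) * dilation + 1 input positions."""
    span = (len(kernel) - 1) * dilation
    return [sum(kernel[k] * seq[i + k * dilation] for k in range(len(kernel)))
            for i in range(len(seq) - span)]
```

With dilation 2, a 2-tap kernel combines inputs two positions apart; two such layers already see 5 consecutive inputs, which is how a shallow stack can cover an entire sentence.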

Original link: https://medium.com/@TalPerry/convolutional-methods-for-text-d5260fd5675f


5. [Blog] Using Long Short-Term Memory Networks and TensorFlow for Image Captioning

Summary:

From this blog post, you will learn how to enable a machine to describe what is shown in an image and generate a caption for it, using long short-term memory networks and TensorFlow. You will also find out how to make use of TensorBoard for visualizing graphs, better understand what’s under the hood, and debug the performance of a model if necessary.
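The captioning model in the post is built with TensorFlow's LSTM layers; as a rough illustration of what a single LSTM cell computes (not the post's code), here is a scalar-state sketch of the gating equations. The weight layout (`wx`, `wh`, `b` per gate) is an assumption made for compactness.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM step for scalar input and state.
    W maps each gate name ('i' input, 'f' forget, 'o' output,
    'g' candidate) to a (wx, wh, b) weight triple."""
    pre = {name: wx * x + wh * h + b for name, (wx, wh, b) in W.items()}
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = math.tanh(pre["g"])
    c_new = f * c + i * g          # forget old memory, write new candidate
    h_new = o * math.tanh(c_new)   # expose gated memory as the hidden state
    return h_new, c_new
```

The gated cell state `c` is what lets captions stay coherent over many words: unlike the plain RNN hidden state, it can carry information across long time spans without being squashed at every step.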

Original link: https://becominghuman.ai/using-long-short-term-memory-networks-and-tensorflow-for-image-captioning-3dab5f86d976

