What are some good books/papers for learning deep learning?

What's the most effective way to get started with deep learning?
 
 
 
29 Answers
 
Yoshua Bengio 
 

Yes, if you have prior machine learning experience it should be easy to get started with deep learning. The Deep Learning book should help you, for sure, and so would the deep learning summer school. See my other answer: Yoshua Bengio's answer to How can one get started with machine learning?

 
 
Karlijn Willems
 
 

I’d first start off by working through this checklist: Karlijn Willems' answer to How does a total beginner start to learn machine learning if they have some knowledge of programming languages?, which is concerned with learning Machine Learning.

The following seven steps (+ resources!), listed below, are included in the answer above.

  1. Assess, refresh and learn math and stats.
  2. Don’t be scared of investing in “theory”.
  3. Get hands-on.
  4. Practice.
  5. Don’t be scared of projects.
  6. Don’t stop.
  7. Make use of all the material that is out there.

It’s clear from step 6 that you never stop learning, and since deep learning is a subfield of Machine Learning (a set of algorithms inspired by the structure and function of the brain), it also falls under this.

The steps that I outlined will still stay the same, but you’ll probably want to make use of Deep Learning-specific material:

Tutorials

Courses

Books

Sites

Video

Cheat sheets, infographics, …

Relevant posts, papers, …

Blogs

Podcasts

This is just a small overview; there is much more material out there!

 
Ashish Bakshi
 

First, I would recommend going through this video, which will give you a concise overall description of Deep Learning:

 

If you have no prior experience with Machine Learning, you may find it difficult to start with Deep Learning, as Deep Learning is a subset of Machine Learning. But don’t worry: if you follow a proper structure, you will be able to do it comfortably. Now, let me give you the structure that I believe one should follow in order to get started with Deep Learning:

  • Basic Mathematics - Many Deep Learning concepts involve maths if you are interested in knowing things inside and out. Therefore, before diving deep into Deep Learning, you should have basic knowledge of statistics, probability, linear algebra and machine learning algorithms.
  • Programming Tool/Library: There are many tools and libraries out there that provide all the required built-in functions and data structures needed for implementing different types of Deep Neural Networks. You need to choose one of them based on the latest trends and on how familiar you are with the programming language it uses. I would recommend going ahead with the TensorFlow library, which is based on Python, was developed by Google and is quite popular nowadays. The following TensorFlow Tutorial video explains all the basics first and then takes you through various use cases:
 
  • Perceptron and Neural Network - A perceptron is the fundamental unit of a neural network and mimics the functionality of a biological neuron. You need to understand how it works, i.e. how the output of one layer in a neural network is propagated forward to the next layer so as to learn the intrinsic features of the input data set.
  • Gradient Descent and Backpropagation - Once you are familiar with the standard neural network, you need to understand the mechanism by which these networks are trained to obtain the correct results. In fact, you need to understand this part quite well so that you can choose the correct variant of gradient descent or other optimization technique that suits your use case (see the sketch after this list).
  • Hands-on Experience in Training Neural Networks - Although it is quite evident, I still think it is necessary to highlight the importance of hands-on experience. You need to work on different projects that are already out there on the internet as well as devise some of your own. In fact, you need to experiment a lot by tweaking different parameters and playing around with the different APIs provided by the framework that you have chosen for Deep Learning.
  • Advanced Neural Network Types: Once you are familiar with neural networks and have gained enough experience training them, you can move ahead and explore more advanced topics such as CNNs (Convolutional Neural Networks), RNNs (Recurrent Neural Networks), RBMs (Restricted Boltzmann Machines) and Auto-Encoders.
  1. CNNs are a special case of Deep Neural Networks that have been successfully applied to analyzing visual imagery.
  2. RNNs are another type of deep neural model that is good at processing arbitrary sequences of inputs and is therefore quite effective for speech recognition or sequences with temporal dependencies.
  3. RBMs, also a special case of Artificial Neural Networks, are used for dimensionality reduction, classification, collaborative filtering, feature learning and topic modelling.
  4. An auto-encoder is an artificial neural network used for unsupervised learning of efficient codings.
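
To make the perceptron and gradient-descent points above concrete, here is a minimal sketch in plain Python/NumPy of a single sigmoid unit trained with gradient descent on a made-up toy dataset; it is only an illustration of the mechanism, not code from any of the tutorials mentioned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: 2 inputs, label is 1 when x0 + x1 > 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # weights of the single sigmoid "perceptron"
b = 0.0           # bias
lr = 0.5          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    p = sigmoid(X @ w + b)            # forward pass: propagate inputs to output
    grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # gradient-descent update
    b -= lr * grad_b

print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```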

You may also start with this introductory blog on Deep Learning, which explains the idea behind deep learning in a nutshell, or go through the following Deep Learning Tutorial video:

Finally, I want to remind you again about the importance of hands-on experience, because this is the only way to gain real insight into the workings of a neural network. Hope it helps!

 
Akhil Anil
 
 
Here are some resources to start learning Deep Learning:

Free Online Books

  1. Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville (05/07/2015)
  2. Neural Networks and Deep Learning by Michael Nielsen (Dec 2014)
  3. Deep Learning by Microsoft Research (2013)
  4. Deep Learning Tutorial by LISA lab, University of Montreal (Jan 6 2015)
  5. neuraltalk by Andrej Karpathy : numpy-based RNN/LSTM implementation
  6. An introduction to genetic algorithms
  7. Artificial Intelligence: A Modern Approach
  8. Deep Learning in Neural Networks: An Overview

Courses

  1. Machine Learning - Stanford by Andrew Ng in Coursera (2010-2014)
  2. Machine Learning - Caltech by Yaser Abu-Mostafa (2012-2014)
  3. Machine Learning - Carnegie Mellon by Tom Mitchell (Spring 2011)
  4. Neural Networks for Machine Learning by Geoffrey Hinton in Coursera (2012)
  5. Neural networks class by Hugo Larochelle from Université de Sherbrooke (2013)
  6. Deep Learning Course by CILVR lab @ NYU (2014)
  7. A.I - Berkeley by Dan Klein and Pieter Abbeel (2013)
  8. A.I - MIT by Patrick Henry Winston (2010)
  9. Vision and learning - computers and brains by Shimon Ullman, Tomaso Poggio, Ethan Meyers @ MIT (2013)
  10. Convolutional Neural Networks for Visual Recognition - Stanford by Fei-Fei Li, Andrej Karpathy (2015)
  11. Deep Learning for Natural Language Processing - Stanford
  12. Neural Networks - usherbrooke
  13. Machine Learning - Oxford (2014-2015)
  14. Deep Learning - Nvidia (2015)

Videos and Lectures

  1. How To Create A Mind By Ray Kurzweil
  2. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning By Andrew Ng
  3. Recent Developments in Deep Learning By Geoff Hinton
  4. The Unreasonable Effectiveness of Deep Learning by Yann LeCun
  5. Deep Learning of Representations by Yoshua Bengio
  6. Principles of Hierarchical Temporal Memory by Jeff Hawkins
  7. Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab by Adam Coates
  8. Making Sense of the World with Deep Learning By Adam Coates
  9. Demystifying Unsupervised Feature Learning By Adam Coates
  10. Visual Perception with Deep Learning By Yann LeCun
  11. The Next Generation of Neural Networks By Geoffrey Hinton at GoogleTechTalks
  12. The wonderful and terrifying implications of computers that can learn By Jeremy Howard at TEDxBrussels
  13. Unsupervised Deep Learning - Stanford by Andrew Ng in Stanford (2011)
  14. Natural Language Processing By Chris Manning in Stanford

Papers

  1. ImageNet Classification with Deep Convolutional Neural Networks
  2. Using Very Deep Autoencoders for Content Based Image Retrieval
  3. Learning Deep Architectures for AI
  4. CMU’s list of papers
  5. Neural Networks for Named Entity Recognition zip
  6. Training tricks by YB
  7. Geoff Hinton's reading list (all papers)
  8. Supervised Sequence Labelling with Recurrent Neural Networks
  9. Statistical Language Models based on Neural Networks
  10. Training Recurrent Neural Networks
  11. Recursive Deep Learning for Natural Language Processing and Computer Vision
  12. Bi-directional RNN
  13. LSTM
  14. GRU - Gated Recurrent Unit
  15. GFRNN
  16. LSTM: A Search Space Odyssey
  17. A Critical Review of Recurrent Neural Networks for Sequence Learning
  18. Visualizing and Understanding Recurrent Networks
  19. Wojciech Zaremba, Ilya Sutskever, An Empirical Exploration of Recurrent Network Architectures
  20. Recurrent Neural Network based Language Model
  21. Extensions of Recurrent Neural Network Language Model
  22. Recurrent Neural Network based Language Modeling in Meeting Recognition
  23. Deep Neural Networks for Acoustic Modeling in Speech Recognition
  24. Speech Recognition with Deep Recurrent Neural Networks
  25. Reinforcement Learning Neural Turing Machines
  26. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
  27. Google - Sequence to Sequence Learning with Neural Networks
  28. Memory Networks
  29. Policy Learning with Continuous Memory States for Partially Observed Robotic Control
  30. Microsoft - Jointly Modeling Embedding and Translation to Bridge Video and Language
  31. Neural Turing Machines
  32. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

Tutorials

  1. UFLDL Tutorial 1
  2. UFLDL Tutorial 2
  3. Deep Learning for NLP (without Magic)
  4. A Deep Learning Tutorial: From Perceptrons to Deep Networks
  5. Deep Learning from the Bottom up
  6. Theano Tutorial
  7. Neural Networks for Matlab
  8. Using convolutional neural nets to detect facial keypoints tutorial
  9. Torch7 Tutorials
  10. The Best Machine Learning Tutorials On The Web
  11. VGG Convolutional Neural Networks Practical

Datasets

  1. MNIST Handwritten digits
  2. Google House Numbers from street view
  3. CIFAR-10 and CIFAR-100
  4. IMAGENET
  5. Tiny Images - 80 million tiny images
  6. Flickr Data 100 Million Yahoo dataset
  7. Berkeley Segmentation Dataset 500
  8. UC Irvine Machine Learning Repository
  9. Flickr 8k
  10. Flickr 30k
  11. Microsoft COCO
  12. VQA
  13. Image QA
  14. AT&T Laboratories Cambridge face database
  15. AVHRR Pathfinder
  16. Air Freight - The Air Freight data set is a ray-traced image sequence along with ground truth segmentation based on textural characteristics. (455 images + GT, each 160x120 pixels). (Formats: PNG)
  17. Amsterdam Library of Object Images - ALOI is a color image collection of one-thousand small objects, recorded for scientific purposes. In order to capture the sensory variation in object recordings, we systematically varied viewing angle, illumination angle, and illumination color for each object, and additionally captured wide-baseline stereo images. We recorded over a hundred images of each object, yielding a total of 110,250 images for the collection. (Formats: png)
  18. Annotated face, hand, cardiac & meat images - Most images & annotations are supplemented by various ASM/AAM analyses using the AAM-API. (Formats: bmp,asf)
  19. Image Analysis and Computer Graphics
  20. Brown University Stimuli - A variety of datasets including geons, objects, and "greebles". Good for testing recognition algorithms. (Formats: pict)
  21. CAVIAR video sequences of mall and public space behavior - 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification (Formats: MPEG2 & JPEG)
  22. Machine Vision Unit
  23. CCITT Fax standard images - 8 images (Formats: gif)
  24. CMU CIL's Stereo Data with Ground Truth - 3 sets of 11 images, including color tiff images with spectroradiometry (Formats: gif, tiff)
  25. CMU PIE Database - A database of 41,368 face images of 68 people captured under 13 poses, 43 illumination conditions, and with 4 different expressions.
  26. CMU VASC Image Database - Images, sequences, stereo pairs (thousands of images) (Formats: Sun Rasterimage)
  27. Caltech Image Database - about 20 images - mostly top-down views of small objects and toys. (Formats: GIF)
  28. Columbia-Utrecht Reflectance and Texture Database - Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. (Formats: bmp)
  29. Computational Colour Constancy Data - A dataset oriented towards computational color constancy, but useful for computer vision in general. It includes synthetic data, camera sensor data, and over 700 images. (Formats: tiff)
  30. Computational Vision Lab
  31. Content-based image retrieval database - 11 sets of color images for testing algorithms for content-based retrieval. Most sets have a description file with names of objects in each image. (Formats: jpg)
  32. Efficient Content-based Retrieval Group
  33. Densely Sampled View Spheres - Densely sampled view spheres - upper half of the view sphere of two toy objects with 2500 images each. (Formats: tiff)
  34. Computer Science VII (Graphical Systems)
  35. Digital Embryos - Digital embryos are novel objects which may be used to develop and test object recognition systems. They have an organic appearance. (Formats: various formats are available on request)
  36. University of Minnesota Vision Lab
  37. El Salvador Atlas of Gastrointestinal VideoEndoscopy - High-resolution images and videos of studies taken from gastrointestinal video endoscopy. (Formats: jpg, mpg, gif)
  38. FG-NET Facial Aging Database - Database contains 1002 face images showing subjects at different ages. (Formats: jpg)
  39. FVC2000 Fingerprint Databases - FVC2000 is the First International Competition for Fingerprint Verification Algorithms. Four fingerprint databases constitute the FVC2000 benchmark (3520 fingerprints in all).
  40. Biometric Systems Lab - University of Bologna
  41. Face and Gesture images and image sequences - Several image datasets of faces and gestures that are ground truth annotated for benchmarking
  42. German Fingerspelling Database - The database contains 35 gestures and consists of 1400 image sequences that contain gestures of 20 different persons recorded under non-uniform daylight lighting conditions. (Formats: mpg,jpg)
  43. Language Processing and Pattern Recognition
  44. Groningen Natural Image Database - 4000+ 1536x1024 (16 bit) calibrated outdoor images (Formats: homebrew)
  45. ICG Testhouse sequence - 2 turntable sequences from different viewing heights, 36 images each, resolution 1000x750, color (Formats: PPM)
  46. Institute of Computer Graphics and Vision
  47. IEN Image Library - 1000+ images, mostly outdoor sequences (Formats: raw, ppm)
  48. INRIA's Syntim images database - 15 color image of simple objects (Formats: gif)
  49. INRIA
  50. INRIA's Syntim stereo databases - 34 calibrated color stereo pairs (Formats: gif)
  51. Image Analysis Laboratory - Images obtained from a variety of imaging modalities -- raw CFA images, range images and a host of "medical images". (Formats: homebrew)
  52. Image Analysis Laboratory
  53. Image Database - An image database including some textures
  54. JAFFE Facial Expression Image Database - The JAFFE database consists of 213 images of Japanese female subjects posing 6 basic facial expressions as well as a neutral pose. Ratings on emotion adjectives are also available, free of charge, for research purposes. (Formats: TIFF Grayscale images.)
  55. ATR Research, Kyoto, Japan
  56. JISCT Stereo Evaluation - 44 image pairs. These data have been used in an evaluation of stereo analysis, as described in the April 1993 ARPA Image Understanding Workshop paper "The JISCT Stereo Evaluation" by R.C. Bolles, H.H. Baker, and M.J. Hannah, pp. 263-274. (Formats: SSI)
  57. MIT Vision Texture - Image archive (100+ images) (Formats: ppm)
  58. MIT face images and more - hundreds of images (Formats: homebrew)
  59. Machine Vision - Images from the textbook by Jain, Kasturi, Schunck (20+ images) (Formats: GIF TIFF)
  60. Mammography Image Databases - 100 or more images of mammograms with ground truth. Additional images available by request, and links to several other mammography databases are provided. (Formats: homebrew)
  61. ftp://ftp.cps.msu.edu/pub/prip - many images (Formats: unknown)
  62. Middlebury Stereo Data Sets with Ground Truth - Six multi-frame stereo data sets of scenes containing planar regions. Each data set contains 9 color images and subpixel-accuracy ground-truth data. (Formats: ppm)
  63. Middlebury Stereo Vision Research Page - Middlebury College
  64. Modis Airborne simulator, Gallery and data set - High Altitude Imagery from around the world for environmental modeling in support of NASA EOS program (Formats: JPG and HDF)
  65. NIST Fingerprint and handwriting - datasets - thousands of images (Formats: unknown)
  66. NIST Fingerprint data - compressed multipart uuencoded tar file
  67. NLM HyperDoc Visible Human Project - Color, CAT and MRI image samples - over 30 images (Formats: jpeg)
  68. National Design Repository - Over 55,000 3D CAD and solid models of (mostly) mechanical/machined engineering designs. (Formats: gif,vrml,wrl,stp,sat)
  69. Geometric & Intelligent Computing Laboratory
  70. OSU (MSU) 3D Object Model Database - several sets of 3D object models collected over several years to use in object recognition research (Formats: homebrew, vrml)
  71. OSU (MSU/WSU) Range Image Database - Hundreds of real and synthetic images (Formats: gif, homebrew)
  72. OSU/SAMPL Database: Range Images, 3D Models, Stills, Motion Sequences - Over 1000 range images, 3D object models, still images and motion sequences (Formats: gif, ppm, vrml, homebrew)
  73. Signal Analysis and Machine Perception Laboratory
  74. Otago Optical Flow Evaluation Sequences - Synthetic and real sequences with machine-readable ground truth optical flow fields, plus tools to generate ground truth for new sequences. (Formats: ppm,tif,homebrew)
  75. Vision Research Group
  76. ftp://ftp.limsi.fr/pub/quenot/op... - Real and synthetic image sequences used for testing a Particle Image Velocimetry application. These images may be used for the test of optical flow and image matching algorithms. (Formats: pgm (raw))
  77. LIMSI-CNRS/CHM/IMM/vision
  78. LIMSI-CNRS
  79. Photometric 3D Surface Texture Database - This is the first 3D texture database which provides both full real surface rotations and registered photometric stereo data (30 textures, 1680 images). (Formats: TIFF)
  80. SEQUENCES FOR OPTICAL FLOW ANALYSIS (SOFA) - 9 synthetic sequences designed for testing motion analysis applications, including full ground truth of motion and camera parameters. (Formats: gif)
  81. Computer Vision Group
  82. Sequences for Flow Based Reconstruction - synthetic sequence for testing structure from motion algorithms (Formats: pgm)
  83. Stereo Images with Ground Truth Disparity and Occlusion - a small set of synthetic images of a hallway with varying amounts of noise added. Use these images to benchmark your stereo algorithm. (Formats: raw, viff (khoros), or tiff)
  84. Stuttgart Range Image Database - A collection of synthetic range images taken from high-resolution polygonal models available on the web (Formats: homebrew)
  85. Department Image Understanding
  86. The AR Face Database - Contains over 4,000 color images corresponding to 126 people's faces (70 men and 56 women). Frontal views with variations in facial expressions, illumination, and occlusions. (Formats: RAW (RGB 24-bit))
  87. Purdue Robot Vision Lab
  88. The MIT-CSAIL Database of Objects and Scenes - Database for testing multiclass object detection and scene recognition algorithms. Over 72,000 images with 2873 annotated frames. More than 50 annotated object classes. (Formats: jpg)
  89. The RVL SPEC-DB (SPECularity DataBase) - A collection of over 300 real images of 100 objects taken under three different illumination conditions (Diffuse/Ambient/Directed). Use these images to test algorithms for detecting and compensating for specular highlights in color images. (Formats: TIFF)
  90. Robot Vision Laboratory
  91. The Xm2vts database - The XM2VTSDB contains four digital recordings of 295 people taken over a period of four months. This database contains both image and video data of faces.
  92. Centre for Vision, Speech and Signal Processing
  93. Traffic Image Sequences and 'Marbled Block' Sequence - thousands of frames of digitized traffic image sequences as well as the 'Marbled Block' sequence (grayscale images) (Formats: GIF)
  94. IAKS/KOGS
  95. U Bern Face images - hundreds of images (Formats: Sun rasterfile)
  96. U Michigan textures (Formats: compressed raw)
  97. U Oulu wood and knots database - Includes classifications - 1000+ color images (Formats: ppm)
  98. UCID - an Uncompressed Colour Image Database - a benchmark database for image retrieval with predefined ground truth. (Formats: tiff)
  99. UMass Vision Image Archive - Large image database with aerial, space, stereo, medical images and more. (Formats: homebrew)
  100. UNC's 3D image database - many images (Formats: GIF)
  101. USF Range Image Data with Segmentation Ground Truth - 80 image sets (Formats: Sun rasterimage)
  102. University of Oulu Physics-based Face Database - contains color images of faces under different illuminants and camera calibration conditions as well as skin spectral reflectance measurements of each person.
  103. Machine Vision and Media Processing Unit
  104. University of Oulu Texture Database - Database of 320 surface textures, each captured under three illuminants, six spatial resolutions and nine rotation angles. A set of test suites is also provided so that texture segmentation, classification, and retrieval algorithms can be tested in a standard manner. (Formats: bmp, ras, xv)
  105. Machine Vision Group
  106. Usenix face database - Thousands of face images from many different sites (circa 1994)
  107. View Sphere Database - Images of 8 objects seen from many different view points. The view sphere is sampled using a geodesic with 172 images/sphere. Two sets for training and testing are available. (Formats: ppm)
  108. PRIMA, GRAVIR
  109. Vision-list Imagery Archive - Many images, many formats
  110. Wiry Object Recognition Database - Thousands of images of a cart, ladder, stool, bicycle, chairs, and cluttered scenes with ground truth labelings of edges and regions. (Formats: jpg)
  111. 3D Vision Group
  112. Yale Face Database - 165 images (15 individuals) with different lighting, expression, and occlusion configurations.
  113. Yale Face Database B - 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions). (Formats: PGM)
  114. Center for Computational Vision and Control

Frameworks

  1. Caffe
  2. Torch7
  3. Theano
  4. cuda-convnet
  5. ConvNetJS
  6. Ccv
  7. NuPIC
  8. DeepLearning4J
  9. Brain
  10. DeepLearnToolbox
  11. Deepnet
  12. Deeppy
  13. JavaNN
  14. hebel
  15. Mocha.jl
  16. OpenDL
  17. cuDNN
  18. MGL
  19. KUnet.jl
  20. Nvidia DIGITS - a web app based on Caffe
  21. Neon - Python based Deep Learning Framework
  22. Keras - Theano based Deep Learning Library
  23. Chainer - A flexible framework of neural networks for deep learning
  24. RNNLM Toolkit
  25. RNNLIB - A recurrent neural network library
  26. char-rnn
  27. MatConvNet: CNNs for MATLAB
  28. Minerva - a fast and flexible tool for deep learning on multi-GPU

Miscellaneous

  1. Google Plus - Deep Learning Community
  2. Caffe Webinar
  3. 100 Best Github Resources in Github for DL
  4. Word2Vec
  5. Caffe DockerFile
  6. TorontoDeepLearning convnet
  7. gfx.js
  8. Torch7 Cheat sheet
  9. Misc from MIT's 'Advanced Natural Language Processing' course
  10. Misc from MIT's 'Machine Learning' course
  11. Misc from MIT's 'Networks for Learning: Regression and Classification' course
  12. Misc from MIT's 'Neural Coding and Perception of Sound' course
  13. Implementing a Distributed Deep Learning Network over Spark
  14. A chess AI that learns to play chess using deep learning.
  15. Reproducing the results of "Playing Atari with Deep Reinforcement Learning" by DeepMind
  16. Wiki2Vec. Getting Word2vec vectors for entities and word from Wikipedia Dumps
  17. The original code from the DeepMind article + tweaks
  18. Google deepdream - Neural Network art
  19. An efficient, batched LSTM.
  20. A recurrent neural network designed to generate classical music.
 
Parag K Mital
 
 

We’ve just relaunched a new course on Tensorflow: Creative Applications of Deep Learning with TensorFlow | Kadenze

Unlike other courses, this is an application-led course, teaching you the fundamentals of Tensorflow as well as state-of-the-art algorithms by encouraging exploration through the development of creative thinking and creative applications of deep neural networks. We’ve already built a very strong community with an active forum and Slack, where students are able to ask each other questions and learn from each other's approaches to the homework. I highly encourage you to try this course. There are plenty of *GREAT* resources for learning Deep Learning and Tensorflow. But this is the only comprehensive online course that will both teach you how to use Tensorflow and develop your creative potential for understanding how to apply the techniques in creating Neural Networks. The feedback has been overwhelmingly positive. Please have a look!

Course Information:

This course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.

SCHEDULE

Session 1: Introduction To Tensorflow
We'll cover the importance of data with machine and deep learning algorithms, the basics of creating a dataset, how to preprocess datasets, then jump into Tensorflow, a library for creating computational graphs built by Google Research. We'll learn the basic components of Tensorflow and see how to use it to filter images.

Session 2: Training A Network W/ Tensorflow
We'll see how neural networks work, how they are "trained", and see the basic components of training a neural network. We'll then build our first neural network and use it for a fun application of teaching a neural network how to paint an image, and explore how such a network can be extended to produce different aesthetics.

Session 3: Unsupervised And Supervised Learning
We explore deep neural networks capable of encoding a large dataset, and see how we can use this encoding to explore "latent" dimensions of a dataset or for generating entirely new content. We'll see what this means, how "autoencoders" can be built, and learn a lot of state-of-the-art extensions that make them incredibly powerful. We'll also learn about another type of model that performs discriminative learning and see how this can be used to predict labels of an image.

Session 4: Visualizing And Hallucinating Representations
This session works with state-of-the-art networks and shows how to understand what "representations" they learn. We'll see how this process actually allows us to perform some really fun visualizations, including "Deep Dream", which can produce infinite generative fractals, or "Style Net", which allows us to combine the content of one image and the style of another to produce widely different painterly aesthetics automatically.

Session 5: Generative Models
The last session offers a teaser into some of the future directions of generative modeling, including some state of the art models such as the "generative adversarial network", and its implementation within a "variational autoencoder", which allows for some of the best encodings and generative modeling of datasets that currently exist. We also see how to begin to model time, and give neural networks memory by creating "recurrent neural networks" and see how to use such networks to create entirely generative text.

 
Keshav Dhandhania
 
 
I agree with Tudor Achim's answer that you should first look into neural networks and back-propagation. Although that is not what deep learning is all about (there are also RBMs, auto-encoders and many other things), it is indeed a good starting point.

In terms of resources, I think the best available resource out there is definitely Coursera - Neural Networks for Machine Learning by Geoffrey Hinton. The first half of the material gives a very good picture of the things you need to know when starting out, and the second half covers a lot of the more advanced material.

I also think it is a good idea to start implementing things as soon as possible. Kaggle contests, in particular Digit Recognizer, might be a good place to start with that.

Reading research papers might make more sense after you do the above two. Geoffrey Hinton, Andrew Ng, and the people they frequently cite are a good place to start off.

Hope you have a good time making machines smarter. :)
 
Emily Larson
 
 

What's the most effective way to get started with deep learning?

There has been a great deal of discussion recently about how our educational systems should focus more on deep learning to encourage students to understand subject matter, as opposed to simply memorizing the key terms and basic facts of a subject. Deep learning is the key to developing students' abilities to assimilate and apply what they learn long after they complete a course.

I recommend this site: http://bit.ly/DeepLearningOnline. These days it is one of the most effective ways to get started with deep learning, and there you can find excellent resources to begin with.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times.

Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can provide different amounts of abstraction.
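
To picture the layering described above, here is a tiny NumPy sketch (with made-up sizes and random weights) of a signal passing through two layers; it is only an illustration of the forward pass, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(0.0, z)

x = rng.random(4)   # input signal (4 hypothetical features)

# Layer 1: transforms the 4 inputs into 3 intermediate features.
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)
h = relu(W1 @ x + b1)

# Layer 2: transforms those 3 features into a single output.
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)
out = W2 @ h + b2

print("input:", x, "-> hidden:", h, "-> output:", out)
```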

Learning deeply is all about repetition. It is one formula that never fails you. If you have been given an audio lesson, make sure you have listened to it with full concentration until you have learnt it completely. Yes, even if you need to do it a hundred times. Although it may sound tedious, deep learning can actually be a lot of fun if you do it the right way.

 
Girish Kumar
 
 
Get started with Stanford's Deep Learning Tutorial which gives a very beginner-friendly introduction to some deep learning methods (convnets, stacked-autoencoders, etc) whilst also covering some machine learning basics: Unsupervised Feature Learning and Deep Learning Tutorial

Then, you could read through this paper written by Hinton: A Fast Learning Algorithm for Deep Belief Nets. This paper is one of the most important papers in Deep Learning. If you would like to get a more rigorous introduction to Deep Learning, you could also read through Yoshua Bengio's manuscript: Page on iro.umontreal.ca. This might not be very beginner-friendly though and can be a pretty tough read.

Thereafter, you should get your hands dirty and start implementing some of these deep learning methods. You could follow this tutorial set here to do that: Deep Learning Tutorials. These tutorials also give very beginner-friendly introductions to many deep learning methods. You'll have to use Theano, a Python library, and though it might be pretty unintuitive at times, it's pretty awesome (it performs automatic differentiation).
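
As a taste of the automatic differentiation mentioned above, here is a minimal Theano sketch, essentially the classic introductory example, that differentiates x squared symbolically (assuming Theano is installed):

```python
import theano
import theano.tensor as T

x = T.dscalar('x')        # a symbolic double-precision scalar
y = x ** 2                # a symbolic expression built from x
dy_dx = T.grad(y, x)      # Theano derives the gradient expression for us

f = theano.function([x], dy_dx)   # compile the symbolic graph into a callable
print(f(3.0))                     # prints 6.0, i.e. d(x^2)/dx at x = 3
```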

Deep Learning is a relatively new field and new methods are being devised at an incredibly fast pace. You have got to keep up, and the best way is to continually read research papers from conferences such as NIPS. A reading list that could be useful is this: Reading List « Deep Learning

Good Luck!
 
Chomba Bupe
 
 

There are already amazing answers here, but what has not been mentioned is that learning never stops; you need to commit to a long learning process. You need to build side projects, like building a basic deep learning library from scratch.

Implement backpropagation, stochastic gradient descent (SGD), convolutional neural networks (convNets) and long short-term memory (LSTM) networks, and run them on well-known datasets like MNIST. Implement mini versions from scratch using your mini DL library.
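
As an example of the kind of from-scratch exercise this suggests, here is a minimal NumPy sketch of a two-layer network trained with backpropagation and mini-batch SGD on a made-up toy problem (not MNIST); treat it as an illustration of the idea rather than a finished mini-library:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy regression problem: learn y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(512, 1))
Y = np.sin(X)

# Two-layer network: 1 -> 16 (tanh) -> 1.
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)
lr, batch = 0.05, 32

for epoch in range(500):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):      # mini-batch SGD
        xb, yb = X[idx[start:start + batch]], Y[idx[start:start + batch]]

        # Forward pass.
        h = np.tanh(xb @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - yb                        # dLoss/dpred for 0.5 * MSE

        # Backward pass (backpropagation of the error).
        dW2 = h.T @ err / len(xb)
        db2 = err.mean(axis=0)
        dh = err @ W2.T * (1 - h ** 2)         # tanh'(z) = 1 - tanh(z)^2
        dW1 = xb.T @ dh / len(xb)
        db1 = dh.mean(axis=0)

        # SGD update.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```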

Trust me, DL is not so easy to work with in practice. The theory may ignore the fact that you won't know the right number of layers or the right number of neurons per layer beforehand. Heck, you won't even know when bugs are messing things up. Building practical systems can be frustrating yet rewarding, and you learn a great deal of stuff from that experience.

The debugging process, even though cumbersome, can give you a lot of insight into the working principles of DL architectures. When it finally works, you will transition into a euphoric state; it is worth the hassle.

As you do that, make sure to explore any shortcomings by implementing your own novel approaches in your mini library. That way you will know these things inside out, unlike with reading alone. Without practice you will quickly forget the stuff you are reading; practicing is a good way to retain knowledge. Another good technique for knowledge retention is to answer questions about the stuff you are reading, and one of the best places to do just that is on Quora, because the audience is diverse and comes from all backgrounds.

The mind is like a library: a library has a lot of books on a variety of literature, and that literature needs to be easy to find, otherwise the library becomes useless. Reading is like stocking up books in the library; without sorting them in a searchable manner, you risk being unable to find what you are looking for in the future. In short, you will forget easily without practice.

Answering questions and practicing is like rearranging the books in the mental library so that you can relate that knowledge more effortlessly and efficiently to the problems you encounter in the field. Your knowledge becomes indexed like Google search when you practice and answer questions.

Of course, don't just jump into implementation. Get an overview of deep learning, which is essentially about stacking many processing layers one atop the other, by reading some introductory material, as most answers have pointed out. It is even easier if you have a background in machine learning (ML), because DL architectures are just neural networks, so it is not such a big deal for someone who knows ML.

You can even quickly implement them by treating them as black boxes, using well-known libraries like TensorFlow. In fact, once you have built your mini-library and have a more in-depth understanding of DL architectures, it is fine to implement them with such high-level libraries and treat them as black boxes, because your focus at that point is a high-level understanding of a wide range of DL architectures. A minimal black-box sketch follows below.
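
For instance, here is a minimal sketch of that black-box style of use, written with the Keras API bundled in TensorFlow (the layer sizes and epoch count are arbitrary choices for illustration):

```python
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network, used purely as a black box.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=3, batch_size=64)
print(model.evaluate(x_test, y_test))
```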

And finally try to apply DL to a real-world problem. Find something you want to solve in which DL can be a module or can solve the whole problem altogether. Start that side project as a GitHub open-source project or a personal project.

For example, I am trying to build a sign language interpreter on mobile devices using convNets + LSTMs + my own ideas. It is such a challenging project that it has pushed me to the extreme; it motivates me to read new research papers, implement things and see how they work in real life, and to actively pursue a long-term goal, which makes me happy in the end.

You can also get into competitions such as Kaggle, for the sake of practicing and not just winning.

Hope this helps.

 
Jagadeesh Dondeti
 
 

Baby Steps for Deep Learning with Python:

  1. Be comfortable with Python. For this, you can take a course on Code Academy and complete it. After completing it, you will be very familiar with the basics of object-oriented programming.
  2. Theano: It’s another requirement for Deep Learning, as you will be working with data represented in the form of tensors. For Theano, you can go to Deep Learning, where there are enough tutorials to make you familiar with its syntax and structure.
  3. One of the main prerequisites for working in this field is a Machine Learning (ML) background. If you don’t know ML, you can do the following.
  4. Brush up your skills in probability and statistics before taking any course on Machine Learning, as it involves probability. After that, you can take the Coursera course “Machine Learning” by Andrew Ng, from Stanford University. Do all the assignments available in the course in Python.
  5. By now, you are familiar with neural networks, which are the building blocks of Deep Learning. Now, I’d recommend taking the CS231n course (Convolutional Neural Networks for Visual Recognition) by Dr. Fei-Fei Li, Dr. Andrej Karpathy and Justin Johnson from Stanford University. The above-mentioned course covers a lot about Deep Learning.
  6. It’s time to apply your skill set to the MNIST, CIFAR-10 and CIFAR-100 data sets (see the loading sketch after this list). If possible, you can work with the ImageNet data set as well.
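
As a quick way to get your hands on those data sets, here is a minimal sketch using the dataset helpers bundled with the Keras library (assuming Keras is installed; the normalization shown is just one common preprocessing choice):

```python
from keras.datasets import mnist, cifar10, cifar100

# Each loader downloads the data on first use and returns (train, test) tuples.
(mn_x, mn_y), (mn_xt, mn_yt) = mnist.load_data()             # 28x28 grayscale digits
(c10_x, c10_y), (c10_xt, c10_yt) = cifar10.load_data()       # 32x32 color, 10 classes
(c100_x, c100_y), (c100_xt, c100_yt) = cifar100.load_data()  # 32x32 color, 100 classes

print("MNIST:", mn_x.shape, "CIFAR-10:", c10_x.shape, "CIFAR-100:", c100_x.shape)

# A common preprocessing step: scale pixel values to [0, 1] as floats.
mn_x = mn_x.astype("float32") / 255.0
```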
 
Ankit
 
 

You can learn Deep Learning online. There are various online courses; I will suggest the best Deep Learning online courses.

Deep Learning A-Z™: Hands-On Artificial Neural Networks [BEST]

*** As seen on Kickstarter ***

Artificial intelligence is growing exponentially. There is no doubt about that. Self-driving cars are clocking up millions of miles, IBM Watson is diagnosing patients better than armies of doctors and Google Deepmind's AlphaGo beat the World champion at Go - a game where intuition plays a key role.

But the further AI advances, the more complex become the problems it needs to solve. And only Deep Learning can solve such complex problems and that's why it's at the heart of Artificial intelligence.

--- Why Deep Learning A-Z? ---

Here are five reasons we think Deep Learning A-Z™ really is different, and stands out from the crowd of other training programs out there:

1. ROBUST STRUCTURE

The first and most important thing we focused on is giving the course a robust structure. Deep Learning is very broad and complex and to navigate this maze you need a clear and global vision of it.

That's why we grouped the tutorials into two volumes, representing the two fundamental branches of Deep Learning: Supervised Deep Learning and Unsupervised Deep Learning. With each volume focusing on three distinct algorithms, we found that this is the best structure for mastering Deep Learning.

2. INTUITION TUTORIALS

So many courses and books just bombard you with the theory, and math, and coding... But they forget to explain, perhaps, the most important part: why you are doing what you are doing. And that's why this course is so different. We focus on developing an intuitive *feel* for the concepts behind Deep Learning algorithms.

With our intuition tutorials you will be confident that you understand all the techniques on an instinctive level. And once you proceed to the hands-on coding exercises you will see for yourself how much more meaningful your experience will be. This is a game-changer.

3. EXCITING PROJECTS

Are you tired of courses based on over-used, outdated data sets?

Yes? Well then you're in for a treat.

Inside this class we will work on Real-World datasets, to solve Real-World business problems. (Definitely not the boring iris or digit classification datasets that we see in every course). In this course we will solve six real-world challenges:

  • Artificial Neural Networks to solve a Customer Churn problem
  • Convolutional Neural Networks for Image Recognition
  • Recurrent Neural Networks to predict Stock Prices
  • Self-Organizing Maps to investigate Fraud
  • Boltzmann Machines to create a Recommender System
  • Stacked Autoencoders* to take on the challenge for the Netflix $1 Million prize

*Stacked Autoencoders is a brand new technique in Deep Learning which didn't even exist a couple of years ago. We haven't seen this method explained anywhere else in sufficient depth.


==> Deep Learning Specialization by Andrew Ng, Co-founder of Coursera

If you want to break into AI, this Specialization will help you do so. Deep Learning is one of the most highly sought after skills in tech. We will help you become good at Deep Learning.

In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.

You will also hear from many top leaders in Deep Learning, who will share with you their personal stories and give you career advice.

AI is transforming multiple industries. After finishing this specialization, you will likely find creative ways to apply it to your work.

We will help you master Deep Learning, understand how to apply it, and build a career in AI.

Relevant Courses

1. Zero to Deep Learning™ with Python and Keras

2. Data Science: Deep Learning in Python

All the best.

 
Tudor Achim
 
 
If you're already familiar with neural networks and know the basics of gradient descent, go through and implement all the exercises in UFLDL Tutorial - Ufldl. If you want to know more after that, read most of the deep learning papers from Andrew Ng's group.
 
Jan Bussieck
 
 

I think the best way to get started with Deep Learning is to fully understand the problems it arose to solve. The most important point is that Deep Learning is hierarchical feature learning: deep learning models are able to automatically extract features from raw data, otherwise known as feature learning.

Yoshua Bengio describes deep learning in terms of its capability to uncover and learn good representations:

Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features

This quote is from his 2012 paper titled “Deep Learning of Representations for Unsupervised and Transfer Learning”, which I would recommend as a starting point to anyone with a background in machine learning.

If you are looking for a full curriculum, take a look at the open-source Deep Learning curriculum.
As soon as you are familiar with the core concepts, dive into a project where you are forced to implement the algorithms. Machine Learning Mastery is a great place to start: How to Run Your First Classifier in Weka - Machine Learning Mastery

Happy learning and hacking.

 
Dipan Pal
 
 
Here's the course website for a reading group at CMU on Deep Learning. The earlier papers span the humble beginnings of the neural network to the start of the deep learning paradigm. This course is still underway and thus more papers are on the way:
10-805 DEEP LEARNING

The following link has a very comprehensive reading list for deep learning papers. Someone familiar with machine learning fundamentals will gain much from a selected reading off the page:
Reading List « Deep Learning

Also, it is best to spread out your readings to include different prominent groups such as Bengio, Hinton, Ng, LeCun and others. This way you will develop a much broader understanding of the subject.
 
Adam Gibson
 
 
For Deep Learning specifically, you should look into the Page on deeplearning.net for a somewhat decent baseline.


The following techniques are for feed-forward nets and baseline classifiers such as deep belief networks.

Deep Learning is composed of both Unsupervised and Supervised Learning.

The first thing to understand with respect to the fundamentals is how a neural network learns.

Restricted Boltzmann machines and denoising autoencoders are generative models that learn, with repeated Gibbs sampling of corrupted input data, to come up with a good approximation of the data.

If we think about images, this would be learning good representations of images, such as different parts of a face or lines in MNIST.

With text, this would be context, or how the word is used.

From here, these are stacked such that the output of one goes into another.

Eventually the network learns features that are good enough to feed a classifier. This, in combination with logistic regression as an output layer, creates a deep belief network used for classification.
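
To make the Gibbs-sampling-based learning described above a little more concrete, here is a minimal NumPy sketch of one contrastive-divergence (CD-1) update for a tiny restricted Boltzmann machine; the sizes and the toy input are made up, and this is only an illustration of the idea, not production training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy RBM: 6 visible units, 3 hidden units (hypothetical sizes).
n_visible, n_hidden = 6, 3
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b_vis = np.zeros(n_visible)
b_hid = np.zeros(n_hidden)

def cd1_update(v0, lr=0.1):
    """One contrastive-divergence step: up, sample, one Gibbs step down, up again."""
    global W, b_vis, b_hid
    h0_prob = sigmoid(v0 @ W + b_hid)                        # positive phase
    h0_sample = (rng.random(n_hidden) < h0_prob).astype(float)
    v1_prob = sigmoid(h0_sample @ W.T + b_vis)               # reconstruction
    h1_prob = sigmoid(v1_prob @ W + b_hid)                   # negative phase
    W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
    b_vis += lr * (v0 - v1_prob)
    b_hid += lr * (h0_prob - h1_prob)
    return np.mean((v0 - v1_prob) ** 2)                      # reconstruction error

v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
for step in range(100):
    err = cd1_update(v)
print("reconstruction error:", err)
```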

From here, it comes down to the different knobs you can turn to generalize better, such as adaptive learning rates to help with or speed up learning, among other things.



Convolutional Restricted Boltzmann Machines are basically this but with a moving window over various subsets of something like an image to learn good features.

It also learns by repeated Gibbs sampling over what we call different slices of the data.

Convolutional nets are a bit harder to understand because they require tensors, which are multi-dimensional matrices whose sections are known as slices.


When you get into recurrent and recursive nets, you can handle sequential data.

That's a somewhat decent overview though.
 
Shishu Raj
 
 

Get started with deep learning through this course. It is one of the best-selling courses on Deep Learning on the Internet. Let’s see what you will learn through this course:

What Will You Learn?

  • Understand the intuition behind Artificial Neural Networks
  • Apply Artificial Neural Networks in practice
  • Understand the intuition behind Convolutional Neural Networks
  • Apply Convolutional Neural Networks in practice
  • Understand the intuition behind Recurrent Neural Networks
  • Apply Recurrent Neural Networks in practice
  • Understand the intuition behind Self-Organizing Maps
  • Apply Self-Organizing Maps in practice
  • Understand the intuition behind Boltzmann Machines
  • Apply Boltzmann Machines in practice
  • Understand the intuition behind AutoEncoders
  • Apply AutoEncoders in practice

Course Link-Deep Learning A-Z™: Hands-On Artificial Neural Networks | Learn to create Deep Learning Algorithms in Python

Learn to create Deep Learning Algorithms in Python from two Machine Learning & Data Science experts. Templates included.

Course Description By Course Instructor-

Artificial intelligence is growing exponentially. There is no doubt about that. Self-driving cars are clocking up millions of miles, IBM Watson is diagnosing patients better than armies of doctors and Google Deepmind's AlphaGo beat the World champion at Go - a game where intuition plays a key role.

But the further AI advances, the more complex become the problems it needs to solve. And only Deep Learning can solve such complex problems and that's why it's at the heart of Artificial intelligence.

--- Why Deep Learning A-Z? ---

Here are five reasons we think Deep Learning A-Z™ really is different, and stands out from the crowd of other training programs out there:

1. ROBUST STRUCTURE

The first and most important thing we focused on is giving the course a robust structure. Deep Learning is very broad and complex and to navigate this maze you need a clear and global vision of it.

That's why we grouped the tutorials into two volumes, representing the two fundamental branches of Deep Learning: Supervised Deep Learning and Unsupervised Deep Learning. With each volume focusing on three distinct algorithms, we found that this is the best structure for mastering Deep Learning.

2. INTUITION TUTORIALS

So many courses and books just bombard you with the theory, and math, and coding... But they forget to explain, perhaps, the most important part: why you are doing what you are doing. And that's why this course is so different. We focus on developing an intuitive *feel* for the concepts behind Deep Learning algorithms.

With our intuition tutorials you will be confident that you understand all the techniques on an instinctive level. And once you proceed to the hands-on coding exercises you will see for yourself how much more meaningful your experience will be. This is a game-changer.

3. EXCITING PROJECTS

Are you tired of courses based on over-used, outdated data sets?

Yes? Well then you're in for a treat.

Inside this class we will work on Real-World datasets, to solve Real-World business problems. (Definitely not the boring iris or digit classification datasets that we see in every course). In this course we will solve six real-world challenges:

  • Artificial Neural Networks to solve a Customer Churn problem
  • Convolutional Neural Networks for Image Recognition
  • Recurrent Neural Networks to predict Stock Prices
  • Self-Organizing Maps to investigate Fraud
  • Boltzmann Machines to create a Recommender System
  • Stacked Autoencoders* to take on the challenge for the Netflix $1 Million prize

*Stacked Autoencoders is a brand new technique in Deep Learning which didn't even exist a couple of years ago. We haven't seen this method explained anywhere else in sufficient depth.

4. HANDS-ON CODING

In Deep Learning A-Z™ we code together with you. Every practical tutorial starts with a blank page and we write up the code from scratch. This way you can follow along and understand exactly how the code comes together and what each line means.

In addition, we will purposefully structure the code in such a way so that you can download it and apply it in your own projects. Moreover, we explain step-by-step where and how to modify the code to insert YOUR dataset, to tailor the algorithm to your needs, to get the output that you are after.

This is a course which naturally extends into your career.

5. IN-COURSE SUPPORT

Have you ever taken a course or read a book where you have questions but cannot reach the author?

Well, this course is different. We are fully committed to making this the most disruptive and powerful Deep Learning course on the planet. With that comes a responsibility to constantly be there when you need our help.

In fact, since we physically also need to eat and sleep we have put together a team of professional Data Scientists to help us out. Whenever you ask a question you will get a response from us within 48 hours maximum.

No matter how complex your query, we will be there. The bottom line is we want you to succeed.

--- The Tools ---

Tensorflow and Pytorch are the two most popular open-source libraries for Deep Learning. In this course you will learn both!

TensorFlow was developed by Google and is used in their speech recognition system, in the new google photos product, gmail, google search and much more. Companies using Tensorflow include AirBnb, Airbus, Ebay, Intel, Uber and dozens more.

PyTorch is just as powerful and is being developed by researchers at Nvidia and leading universities: Stanford, Oxford, ParisTech. Companies using PyTorch include Twitter, Salesforce and Facebook.

So which is better and for what?

Well, in this course you will have an opportunity to work with both and understand when Tensorflow is better and when PyTorch is the way to go. Throughout the tutorials we compare the two and give you tips and ideas on which could work best in certain circumstances.

The interesting thing is that both these libraries are barely over 1 year old. That's what we mean when we say that in this course we teach you the most cutting edge Deep Learning models and techniques.

--- More Tools ---

Theano is another open source deep learning library. It's very similar to Tensorflow in its functionality, but nevertheless we will still cover it.

Keras is an incredible library to implement Deep Learning models. It acts as a wrapper for Theano and Tensorflow. Thanks to Keras we can create powerful and complex Deep Learning models with only a few lines of code. This is what will allow you to have a global vision of what you are creating. Everything you make will look so clear and structured thanks to this library, that you will really get the intuition and understanding of what you are doing.

--- Even More Tools ---

Scikit-learn is the most practical Machine Learning library. We will mainly use it (a minimal sketch follows the list below):

  • to evaluate the performance of our models with the most relevant technique, k-Fold Cross Validation
  • to improve our models with effective Parameter Tuning
  • to preprocess our data, so that our models can learn in the best conditions
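
As a rough illustration of those three uses of scikit-learn (the model, synthetic dataset and parameter grid below are arbitrary stand-ins, not the course's actual code):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A synthetic dataset standing in for a real business problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Preprocessing + model combined in one pipeline.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 1) k-Fold Cross Validation to evaluate performance.
scores = cross_val_score(model, X, y, cv=10)
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# 2) Parameter tuning with a small grid search over the regularization strength.
grid = GridSearchCV(model, {"logisticregression__C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_, "best CV score: %.3f" % grid.best_score_)
```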

And of course, we have to mention the usual suspects. This whole course is based on Python and in every single section you will be getting hours and hours of invaluable hands-on practical coding experience.

Plus, throughout the course we will be using Numpy to do heavy computations and manipulate high-dimensional arrays, Matplotlib to plot insightful charts and Pandas to import and manipulate datasets most efficiently.

--- Who Is This Course For? ---

As you can see, there are lots of different tools in the space of Deep Learning and in this course we make sure to show you the most important and most progressive ones so that when you're done with Deep Learning A-Z™ your skills are on the cutting edge of today's technology.

If you are just starting out in Deep Learning, then you will find this course extremely useful. Deep Learning A-Z™ is structured around special coding blueprint approaches, meaning that you won't get bogged down in unnecessary programming or mathematical complexities; instead you will be applying Deep Learning techniques from very early on in the course. You will build your knowledge from the ground up and you will see how with every tutorial you are getting more and more confident.

If you already have experience with Deep Learning, you will find this course refreshing, inspiring and very practical. Inside Deep Learning A-Z™ you will master some of the most cutting-edge Deep Learning algorithms and techniques (some of which didn't even exist a year ago) and through this course you will gain an immense amount of valuable hands-on experience with real-world business challenges. Plus, inside you will find inspiration to explore new Deep Learning skills and applications.

--- Real-World Case Studies ---

Mastering Deep Learning is not just about knowing the intuition and tools, it's also about being able to apply these models to real-world scenarios and derive actual measurable results for the business or project. That's why in this course we are introducing six exciting challenges:

#1 Churn Modelling Problem

In this part you will be solving a data analytics challenge for a bank. You will be given a dataset with a large sample of the bank's customers. To make this dataset, the bank gathered information such as customer ID, credit score, gender, age, tenure, balance, whether the customer is active, whether they have a credit card, etc. Over a period of 6 months, the bank observed whether these customers left or stayed with the bank.

Your goal is to make an Artificial Neural Network that can predict, based on the geo-demographical and transactional information given above, whether any individual customer will leave the bank or stay (customer churn). In addition, you are asked to rank all the customers of the bank based on their probability of leaving. To do that, you will need to use the right Deep Learning model, one that is based on a probabilistic approach.

If you succeed in this project, you will create significant added value for the bank. By applying your Deep Learning model, the bank may significantly reduce customer churn.
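
A hedged sketch of the kind of probabilistic classifier this challenge calls for, written with Keras; it assumes the customer features have already been encoded and scaled into a placeholder matrix X_train with 11 columns and labels y_train (1 = left, 0 = stayed), which are illustrative names rather than the course's exact preprocessing.

```python
from keras.models import Sequential
from keras.layers import Dense

classifier = Sequential()
classifier.add(Dense(units=6, activation='relu', input_dim=11))  # hidden layer 1
classifier.add(Dense(units=6, activation='relu'))                # hidden layer 2
classifier.add(Dense(units=1, activation='sigmoid'))             # probability of leaving
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# classifier.fit(X_train, y_train, batch_size=10, epochs=100)
# churn_probability = classifier.predict(X_test)   # rank customers by this value
```

Because the output layer is a sigmoid, the prediction comes out as a probability, which is exactly what the ranking task above needs.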

#2 Image Recognition

In this part, you will create a Convolutional Neural Network that is able to detect various objects in images. We will implement this Deep Learning model to recognize a cat or a dog in a set of pictures. However, this model can be reused to detect anything else, and we will show you how to do it - by simply changing the pictures in the input folder.

For example, you will be able to train the same model on a set of brain images to detect whether they contain a tumor. But if you want to keep it fitted to cats and dogs, then you will literally be able to take a picture of your cat or your dog, and your model will predict which pet you have. We even tested it out on Hadelin’s dog!
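
A hedged sketch of a small cat-vs-dog CNN in Keras; the image size, layer sizes and the dataset/training_set folder layout are assumptions for illustration, not the course's exact setup.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

cnn = Sequential()
cnn.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))  # feature detectors
cnn.add(MaxPooling2D(pool_size=(2, 2)))                                  # downsample
cnn.add(Flatten())
cnn.add(Dense(units=128, activation='relu'))
cnn.add(Dense(units=1, activation='sigmoid'))                            # cat vs dog
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Swapping the images in this (assumed) folder layout - one sub-folder per class -
# is what lets the same model learn something else, e.g. tumor vs no-tumor scans:
# train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
#     'dataset/training_set', target_size=(64, 64), batch_size=32, class_mode='binary')
# cnn.fit_generator(train_gen, steps_per_epoch=250, epochs=25)
```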

#3 Stock Price Prediction

In this part, you will create one of the most powerful Deep Learning models. We will even go as far as saying that you will create the Deep Learning model closest to “Artificial Intelligence”. Why is that? Because this model will have long-term memory, just like us, humans.

The branch of Deep Learning that facilitates this is Recurrent Neural Networks. Classic RNNs have short memory and were neither popular nor powerful for this exact reason. But a major improvement in Recurrent Neural Networks gave rise to the popularity of LSTMs (Long Short-Term Memory RNNs), which have completely changed the playing field. We are extremely excited to include these cutting-edge deep learning methods in our course!

In this part you will learn how to implement this ultra-powerful model, and we will take on the challenge of using it to predict the real Google stock price. A similar challenge has already been faced by researchers at Stanford University, and we will aim to do at least as well as they did.
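
A hedged sketch of an LSTM regressor on a toy series; the 60-step window, the layer size and the sine-wave stand-in data are illustrative choices, not the course's settings or the real stock data.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

prices = np.sin(np.linspace(0, 50, 1000))    # stand-in for a scaled price history
window = 60                                  # how many past steps the model sees

X = np.array([prices[i - window:i] for i in range(window, len(prices))])
y = prices[window:]
X = X.reshape((X.shape[0], window, 1))       # (samples, timesteps, features)

regressor = Sequential()
regressor.add(LSTM(units=50, input_shape=(window, 1)))  # the long-term memory part
regressor.add(Dense(units=1))                           # next-step prediction
regressor.compile(optimizer='adam', loss='mean_squared_error')
regressor.fit(X, y, epochs=2, batch_size=32, verbose=0)

print(regressor.predict(X[-1:]))             # predicted next value
```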

#4 Fraud Detection

According to a recent report published by Markets & Markets, the Fraud Detection and Prevention market is expected to be worth $33.19 billion by 2021. This is a huge industry, and the demand for advanced Deep Learning skills is only going to grow. That’s why we have included this case study in the course.

This is the first part of Volume 2 - Unsupervised Deep Learning Models. The business challenge here is detecting fraud in credit card applications. You will be creating a Deep Learning model for a bank, and you are given a dataset that contains information on customers applying for an advanced credit card.

This is the data that customers provided when filling in the application form. Your task is to detect potential fraud within these applications. That means that by the end of the challenge, you will come up with an explicit list of customers who potentially cheated on their applications.
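
The description above does not name the exact unsupervised model for this challenge, so the following is one common illustration only, not the course's method: train an autoencoder on the (scaled) application data and flag the applications it reconstructs poorly as potential fraud. The random stand-in data, the 15 features and the layer sizes are all assumptions.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(1000, 15)                   # stand-in for scaled application features

autoencoder = Sequential()
autoencoder.add(Dense(units=8, activation='relu', input_dim=15))   # compress
autoencoder.add(Dense(units=15, activation='sigmoid'))             # reconstruct
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

errors = np.mean((autoencoder.predict(X) - X) ** 2, axis=1)
suspects = np.argsort(errors)[-20:]            # the 20 most anomalous applications
print(suspects)
```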

#5 & 6 Recommender Systems

From Amazon product suggestions to Netflix movie recommendations - good recommender systems are very valuable in today's world, and the specialists who can create them are some of the top-paid Data Scientists on the planet.

We will work on a dataset that has exactly the same features as the Netflix dataset: plenty of movies and thousands of users who have rated the movies they watched. The ratings go from 1 to 5, exactly as in the Netflix dataset, which makes the Recommender System more complex to build than if the ratings were simply “Liked” or “Not Liked”.

Your final Recommender System will be able to predict the ratings of the movies the customers didn’t watch. Accordingly, by ranking the predictions from 5 down to 1, your Deep Learning model will be able to recommend which movies each user should watch. Creating such a powerful Recommender System is quite a challenge, so we will give ourselves two shots, meaning we will build it with two different Deep Learning models.

Our first model will be Deep Belief Networks, complex Boltzmann Machines that will be covered in Part 5. Our second model will use the powerful AutoEncoders, my personal favorites. You will appreciate the contrast between their simplicity and what they are capable of.

And you will even be able to apply it to yourself or your friends. The list of movies will be explicit, so you will simply need to rate the movies you have already watched, input your ratings into the dataset, execute your model and voilà! The Recommender System will tell you exactly which movies you would love if, one night, you are out of ideas about what to watch on Netflix!
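
To make the autoencoder idea concrete, here is a hedged sketch: each user becomes a vector of movie ratings (0 where unseen), the network learns to reconstruct that vector, and the reconstructed values for unseen movies act as predicted ratings. The random ratings matrix and the layer sizes are placeholders, not the course's code or dataset.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

n_users, n_movies = 500, 200
ratings = np.random.randint(0, 6, size=(n_users, n_movies)).astype('float32') / 5.0

autoencoder = Sequential()
autoencoder.add(Dense(units=20, activation='relu', input_dim=n_movies))  # compressed taste profile
autoencoder.add(Dense(units=n_movies, activation='sigmoid'))             # reconstructed ratings
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
autoencoder.fit(ratings, ratings, epochs=10, batch_size=32, verbose=0)

predicted = autoencoder.predict(ratings) * 5.0   # back to the 1-5 scale
# For each user, recommend the unseen movies with the highest predicted ratings,
# which mirrors the "rank predictions from 5 down to 1" step described above.
```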

--- Summary ---

In conclusion, this is an exciting training program filled with intuition tutorials, practical exercises and real-world case studies.

We are super enthusiastic about Deep Learning and hope to see you inside the class!

Kirill & Hadelin

Who is the target audience?

  • Anyone interested in Deep Learning
  • Students who have at least high school knowledge in math and who want to start learning Deep Learning
  • Any intermediate level people who know the basics of Machine Learning or Deep Learning, including the classical algorithms like linear regression or logistic regression and more advanced topics like Artificial Neural Networks, but who want to learn more about it and explore all the different fields of Deep Learning
  • Anyone who is not that comfortable with coding but who is interested in Deep Learning and wants to apply it easily on datasets
  • Any students in college who want to start a career in Data Science
  • Any data analysts who want to level up in Deep Learning
  • Any people who are not satisfied with their job and who want to become a Data Scientist
  • Any people who want to create added value to their business by using powerful Deep Learning tools
  • Any business owners who want to understand how to leverage the Exponential technology of Deep Learning in their business
  • Any Entrepreneur who wants to create disruption in an industry using the most cutting edge Deep Learning algorithms

Requirements

  • Just some high school-level mathematics

Course Link-Deep Learning A-Z™: Hands-On Artificial Neural Networks | Learn to create Deep Learning Algorithms in Python

 
Layak Singh
 

Hi, I could not quite get the context of the question. Do you mean getting started with deep learning for business purposes, or learning how to apply it while building technologies and products? To get started with deep learning, you can do the following:

  • Go to Google's TensorFlow site and learn it through the various examples and illustrations
  • Try to build some small test programs with TensorFlow (see the small sketch after this list)
  • You can also try the Amazon and IBM developer tools
  • Learn the standard languages used for building deep learning focused technologies, tools and products
  • Once you are comfortable with public libraries or open source platforms, you can try to build your own models from scratch
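
A minimal first test program of the kind the second bullet suggests - a sketch assuming TensorFlow 1.x, which was current when this answer was written:

```python
import tensorflow as tf

greeting = tf.constant('Hello, TensorFlow!')
total = tf.add(tf.constant(3), tf.constant(4))

with tf.Session() as sess:
    print(sess.run(greeting))
    print(sess.run(total))   # 7
```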

Keep practicing, learn new things, and try new ideas. It will help you further.

 
Piotr Migdał
 
 

I tried to cover that in Learning Deep Learning with Keras.

In short: I recommend starting from image classification on simple datasets (especially notMNIST) with a high-level library such as Keras.

An alternative would be to start from a lower level, e.g. TensorFlow, building neural networks as multi-dimensional array expressions - but only if one has the mathematical background (and the ‘fun’ part will happen later).

 
Amol Umbarkar
 

Assuming you are a software engineer looking to up your game with DL.

Here is what you can do -

  1. Take a Python class from DataCamp or Codecademy
  2. Learn the basic theoretical concepts in machine learning and backpropagation networks. The most useful resource is Programming Collective Intelligence, or try Andrew Ng's course on Coursera
  3. Once you know the basics, move on to the following deep learning courses: fast.ai · Making neural nets uncool again, and search for Udacity's free deep learning course
  4. Start looking at Kaggle. Look at published solutions from past competitions.

You might need a GPU machine as you progress in learning.

Consider signing up for AmpLabs - Up your game, if you can wait. This is something I have been working on.

 
Mahesh Kashyap
 
 

Our company has created Deep Learning Studio - a UI-based drag-and-drop tool for creating Deep Learning models. Check it out at Visual Deep Learning in Cloud without Programming.

The best part is that you do not need to know how to code or learn tools like TensorFlow.

You will need to know about Deep Learning concepts, though. We are in the process of building a Deep Learning course that can help you get started with our tool (if you are new to Deep Learning).

Apart from the cloud version, we have just started shipping the Desktop and Enterprise (can run locally) versions.

There are a few other companies trying to create a UI-based tool for Deep Learning, but they are quite behind in terms of the maturity of their tools.

Mahesh Kashyap

Chief Digital Officer

Deep Cognition, Inc.

mahesh@deepcognition.ai

 
Jan Lofton
 
 

If you are asking as an individual, find and attend a deep learning conference, in whatever field interests you. Conference organizers always need volunteers and will waive or discount the cost of registration.

RE•WORK

Have a resume handy and network…ask what the hiring qualifications are for each company.

 
Gonzalo Mier
 

Being open-minded, and understanding that all the “magic” is mathematics and that a deep learning algorithm is a complex neural network.

Knowing that, I recommend searching for free online courses on Coursera, Udacity, Big Data University, etc. Starting to use a library (TensorFlow, Caffe, PyTorch…) will give you the skills to understand how they work at a high level, while reading papers helps at a low level. So you have to choose the road you take, but to start, online courses are the best.

 
Eric Martin
  Eric Martin's answer to What are some good books/papers for learning deep learning? (disclaimer: my own answer) provides a good set of resources to get started with.
 
Chris Nicholson
 

Please don’t build a chatbot. The tech isn’t mature and you won’t get good results.

Here’s a list of resources for people starting out with deep learning:

Deep Learning for Beginners

 

The most comprehensive source of information on Deep Learning I’ve found is the Stanford CS231n class. Just watch all the lectures and do all the assignments.

 
Sadid Hasan
 

The best way to start to learn deep learning is to read this book from beginning to end: Deep Learning .

 
TECLOV
 
 

Hi, I am a director at TECLOV (GeeksHub Pvt Ltd). We provide online courses for Machine Learning and Deep Learning. We have teachers from IIT and MIT who cover basic as well as advanced topics. You can visit our site, Home | Teclov, for the whole syllabus. As we are a startup, we are offering great discounts. Do visit.

 
Anonymous
Anonymous
 

Recently I started reading the following and found it very useful:

A Complete Guide on Getting Started with Deep Learning in Python

 
 

Reposted from: https://www.cnblogs.com/WCFGROUP/p/7677238.html

As an AI language model, I can suggest the following steps to learn deep learning:

  1. Learn the basics of machine learning: Before diving into deep learning, it is essential to understand the basics of machine learning, including supervised and unsupervised learning, regression, clustering, and classification.
  2. Learn programming languages: To work with deep learning, one needs a strong foundation in programming languages such as Python and R, as most deep learning libraries are written in these languages.
  3. Understand the mathematics behind deep learning: Deep learning involves a lot of math, including linear algebra, calculus, and probability. Understanding these concepts will help you better understand the algorithms used in deep learning.
  4. Choose a deep learning framework: Popular deep learning frameworks include TensorFlow, Keras, PyTorch, and Caffe. Choose one and learn it.
  5. Practice with datasets: Work with datasets to understand how deep learning works in practice. Kaggle is a great platform to get started with real-world datasets.
  6. Read research papers: Read research papers to stay up to date with the latest advancements in deep learning.
  7. Join communities: Join online communities such as Reddit, Discord, or GitHub to connect with other deep learning enthusiasts and learn from them.
  8. Build projects: Building projects is the best way to learn deep learning. Start with simple projects and gradually move on to more complex ones.

Remember, deep learning is a vast field, and it takes time and effort to master it. Keep practicing, and you will get there.
