GNNExplainer: Generating Explanations for Graph Neural Networks

Please forgive the limited quality of this translation.

https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/

Deep Learning: Recurrent Neural Networks in Python
GRU, LSTM, + more modern deep learning, machine learning, and data science for sequences
Created by Lazy Programmer Inc. · Last updated 5/2017 · English

What Will I Learn?

  • Understand the simple recurrent unit (Elman unit)
  • Understand the GRU (gated recurrent unit)
  • Understand the LSTM (long short-term memory unit)
  • Write various recurrent networks in Theano
  • Understand backpropagation through time
  • Understand how to mitigate the vanishing gradient problem
  • Solve the XOR and parity problems using a recurrent neural network
  • Use recurrent neural networks for language modeling
  • Use RNNs for generating text, like poetry
  • Visualize word embeddings and look for patterns in word vector representations

Requirements

  • Calculus
  • Linear algebra
  • Python, Numpy, Matplotlib
  • Write a neural network in Theano
  • Understand backpropagation
  • Probability (conditional and joint distributions)
  • Write a neural network in Tensorflow

Description

Like the course I just released on Hidden Markov Models, Recurrent Neural Networks are all about learning sequences – but whereas Markov Models are limited by the Markov assumption, Recurrent Neural Networks are not. As a result, they are more expressive and more powerful, and they have driven progress on tasks that had stalled for decades.

So what's going to be in this course, and how will it build on the previous neural network courses and Hidden Markov Models?

In the first section of the course we are going to add the concept of time to our neural networks. I'll introduce you to the Simple Recurrent Unit, also known as the Elman unit. We are going to revisit the XOR problem, but we're going to extend it so that it becomes the parity problem – you'll see that regular feedforward neural networks will have trouble solving this problem, but recurrent networks will work, because the key is to treat the input as a sequence (a minimal sketch of this idea appears after the course listing below).

In the next section of the course, we are going to revisit one of the most popular applications of recurrent neural networks: language modeling. You saw when we studied Markov Models that we could do things like generate poetry, and it didn't look too bad. We could even discriminate between two different poets just from the sequence of parts-of-speech tags they used. In this course, we are going to extend our language model so that it no longer makes the Markov assumption.

Another popular application of neural networks for language is word vectors, or word embeddings. The most common technique for this is called Word2Vec, but I'll show you how recurrent neural networks can also be used for creating word vectors.

In the section after that, we'll look at the very popular LSTM, or long short-term memory unit, and the more modern and efficient GRU, or gated recurrent unit, which has been proven to yield comparable performance. We'll apply these to some more practical problems, such as learning a language model from Wikipedia data and visualizing the word embeddings we get as a result.

All of the materials required for this course can be downloaded and installed for FREE. We will do most of our work in Numpy, Matplotlib, and Theano. I am always available to answer your questions and help you along your data science journey.

This course focuses on "how to build and understand", not just "how to use". Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about "remembering facts", it's about "seeing for yourself" via experimentation. It will teach you how to visualize what's happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.

See you in class!

NOTES: All the code for this course can be downloaded from my github: /lazyprogrammer/machine_learning_examples, in the directory: rnn_class. Make sure you always "git pull" so you have the latest version!

HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:

  • Calculus
  • Linear algebra
  • Probability (conditional and joint distributions)
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations, loading a CSV file
  • Deep learning: backpropagation, XOR problem
  • Can write a neural network in Theano and Tensorflow

TIPS (for getting through the course):

  • Watch it at 2x.
  • Take handwritten notes. This will drastically increase your ability to retain the information.
  • Write down the equations. If you don't, I guarantee it will just look like gibberish.
  • Ask lots of questions on the discussion board. The more the better!
  • Realize that most exercises will take you days or weeks to complete.
  • Write code yourself, don't just sit there and look at my code.

USEFUL COURSE ORDERING:

  • (The Numpy Stack in Python)
  • Linear Regression in Python
  • Logistic Regression in Python
  • (Supervised Machine Learning in Python)
  • (Bayesian Machine Learning in Python: A/B Testing)
  • Deep Learning in Python
  • Practical Deep Learning in Theano and TensorFlow
  • (Supervised Machine Learning in Python 2: Ensemble Methods)
  • Convolutional Neural Networks in Python
  • (Easy NLP)
  • (Cluster Analysis and Unsupervised Machine Learning)
  • Unsupervised Deep Learning
  • (Hidden Markov Models)
  • Recurrent Neural Networks in Python
  • Artificial Intelligence: Reinforcement Learning in Python
  • Natural Language Processing with Deep Learning in Python

Who is the target audience?

  • If you want to level up with deep learning, take this course.
  • If you are a student or professional who wants to apply deep learning to time series or sequence data, take this course.
  • If you want to learn about word embeddings and language modeling, take this course.
  • If you want to improve the performance you got with Hidden Markov Models, take this course.
  • If you're interested in the techniques that led to new developments in machine translation, take this course.
  • If you have no idea about deep learning, don't take this course; take the prerequisites first.
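To make the parity claim above concrete, here is a minimal NumPy sketch (my addition, not part of the course materials) of an Elman-style recurrent unit. The weights are hand-constructed rather than learned: one hidden unit computes OR(x_t, parity), the other computes AND(x_t, parity), and their difference is exactly XOR, which carries the running parity forward through the hidden state.

```python
import numpy as np

# Hand-built Elman RNN that computes the parity of a binary sequence:
#   h_t = step(Wx @ x_t + Wh @ h_{t-1} + b)
# Hidden unit 0 acts as OR(x_t, parity_{t-1}), hidden unit 1 as
# AND(x_t, parity_{t-1}); their difference (OR - AND) is XOR, the new parity.
# These weights are a worked construction, not weights learned by training.

def step(z):
    return (z > 0).astype(float)

Wx = np.array([[1.0],
               [1.0]])           # input -> hidden
Wh = np.array([[1.0, -1.0],
               [1.0, -1.0]])     # hidden -> hidden: feeds back h[0] - h[1]
b = np.array([-0.5, -1.5])       # thresholds for the OR and AND units
w_out = np.array([1.0, -1.0])    # readout: OR - AND = XOR = running parity

def parity(xs):
    h = np.zeros(2)
    for x in xs:
        h = step(Wx @ np.array([float(x)]) + Wh @ h + b)
    return int(w_out @ h)

for seq in [(0, 1, 1, 0, 1), (1, 1), (0, 0, 0, 1)]:
    print(seq, "->", parity(seq), "expected", sum(seq) % 2)
```

A feedforward network of fixed depth has no comparable way to fold an arbitrarily long input into a single bit; treating the input as a sequence is what makes the problem tractable.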
The Music Transformer is a technique for generating music with long-term structure. Traditionally, music generation models have been autoregressive: they predict the next note from the notes that came before. Such models struggle to capture long-term musical structure, because they focus mostly on the relationship between the current note and its immediate predecessors.

The Music Transformer takes a different approach, recasting music generation as a sequence-modeling problem built on self-attention. Self-attention lets the model draw on information from the entire sequence generated so far when producing each note, rather than only the most recent notes.

In addition, the Music Transformer uses positional encoding and layer normalization to strengthen its representation of musical sequences and its ability to generalize. Positional encoding assigns a vector to each position in the sequence to convey ordering information, while layer normalization keeps each layer's output distribution similar, improving training stability and the quality of the generated output.

Together, these techniques let the Music Transformer capture long-term musical structure much more effectively. It can generate passages with melody, harmony, and rhythm, and those passages can form complete structures such as introduction, theme, development, and recapitulation.

In short, the Music Transformer generates music with long-term structure by combining self-attention, positional encoding, and layer normalization. Its key innovation is considering the musical sequence globally and producing passages with a complete overall form, which makes it a promising tool with broad applications in music composition and generation.
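As an illustration of those ingredients, here is a minimal NumPy sketch (my own, under simplifying assumptions; it is not the Music Transformer implementation itself, which builds on relative positional self-attention) of absolute sinusoidal positional encoding, layer normalization, and single-head self-attention:

```python
import numpy as np

# Illustrative sketches of the three ingredients named above. Plain
# absolute-position variant; the actual Music Transformer uses an
# efficient *relative* positional self-attention instead.

def positional_encoding(seq_len, d_model):
    """Assign each sequence position a fixed vector of sines and cosines."""
    pos = np.arange(seq_len)[:, None]               # (seq_len, 1)
    i = np.arange(d_model)[None, :]                 # (1, d_model)
    angles = pos / np.power(10000.0, 2 * (i // 2) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def layer_norm(x, eps=1e-5):
    """Normalize each position's features to zero mean and unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def self_attention(x):
    """Single-head scaled dot-product self-attention (Q/K/V projections
    omitted): every output position mixes information from the whole
    sequence, not just its immediate predecessor."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)              # softmax over positions
    return w @ x

notes = np.random.randn(16, 64)                     # 16 note events, 64-dim
h = layer_norm(notes + positional_encoding(16, 64))
out = self_attention(h)
print(out.shape)                                    # (16, 64)
```

For autoregressive note-by-note generation, a causal mask would additionally be applied to the attention scores so that each position attends only to earlier positions.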
