Hung-yi Lee Course Notes
cx_0401
BERT (Hung-yi Lee course)
BERT, self-supervised learning, and BERT applications. Self-supervised learning is not exactly the same as unsupervised learning, but it is one form of it. Masked input: BERT is built from transformer encoder layers; a token in the input is replaced via masking or random substitution, and the model is trained to output the replaced token, which makes the process self-supervised. Next-sentence prediction: predict whether two sentences follow each other. BERT applications: first pre-train a BERT, then fine-tune it for downstream tasks. Input is a sequence, output is a class: add a linear layer on top of BERT… (2021-11-04)
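The masking step described in this entry can be sketched in a few lines of Python. Everything here (the tiny vocabulary, the sentence, the function name `mask_tokens`, and the 80/20 split between `[MASK]` and random substitution) is an illustrative assumption, not code from the post:

```python
import random

# Toy vocabulary and sentence -- both invented for this sketch.
vocab = ["the", "cat", "sat", "on", "mat", "dog"]
tokens = ["the", "cat", "sat", "on", "the", "mat"]

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """Corrupt the input by masking or randomly replacing some tokens;
    the pre-training target is to recover the originals."""
    rng = random.Random(seed)
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok  # position the model must predict
            # Mostly use [MASK]; occasionally substitute a random token,
            # loosely mirroring the recipe the notes describe.
            corrupted[i] = "[MASK]" if rng.random() < 0.8 else rng.choice(vocab)
    return corrupted, targets

corrupted, targets = mask_tokens(tokens, mask_prob=0.5)
```

A real BERT would feed `corrupted` through its transformer encoder and train a classifier over the vocabulary at each position recorded in `targets`.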
GAN (Hung-yi Lee course, unfinished)
Generative Adversarial Network. Generator: sampling z from a simple distribution and mapping it through a network yields outputs y with different distributions; this network is called the generator. Example: link. The same input may thus produce different outputs (e.g. turning right or turning left). Unconditional generation: a normal distribution is chosen for z, because the complexity can be left to the generator. Besides the generator we also need a discriminator, which is used to… (2021-11-02)
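The generator/discriminator pairing mentioned above can be sketched via the two adversarial objectives. Everything here (the 1-D data centred at 3, the logistic discriminator, the fixed weights `w` and `b`) is a toy assumption used for illustration, not the course's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, shift):
    """Toy 1-D generator: shifts normal noise z toward the data."""
    return z + shift

def discriminator(x, w, b):
    """Toy logistic discriminator: probability that x looks real."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

z = rng.standard_normal(100)
real = rng.standard_normal(100) + 3.0   # assume real data is centred at 3
fake = generator(z, shift=0.0)          # an untrained generator

w, b = 1.0, -1.5
# Discriminator objective: push D(real) -> 1 and D(fake) -> 0.
d_loss = -np.mean(np.log(discriminator(real, w, b))
                  + np.log(1.0 - discriminator(fake, w, b)))
# Generator objective: fool the discriminator, i.e. push D(fake) -> 1.
g_loss = -np.mean(np.log(discriminator(fake, w, b)))
```

Training alternates gradient steps on these two losses, which is the adversarial loop the notes refer to.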
Classification (Hung-yi Lee course)
Classification: classification as regression, class as one-hot vector, loss of classification. Classification as regression: usable if classes 1/2/3 have some relation, e.g. age and grade. Class as one-hot vector: a two-dimensional softmax is the same as a sigmoid… (2021-08-17)
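The one-hot/softmax combination in this entry can be sketched as follows (the logits and the class label are made-up numbers); it also checks the remark that a two-class softmax reduces to a sigmoid:

```python
import math

def softmax(logits):
    """Normalise a list of logits into a probability distribution."""
    m = max(logits)                          # subtract the max for stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# One-hot target for "class 2" out of three classes.
target = [0, 1, 0]
probs = softmax([1.0, 3.0, 0.5])
cross_entropy = -sum(t * math.log(p) for t, p in zip(target, probs))

# A two-dimensional softmax equals a sigmoid of the logit difference.
two_class = softmax([2.0, 0.0])
```

One-hot targets avoid imposing a spurious ordering on the classes, which is exactly why they replace plain regression labels here.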
How to train a network (Hung-yi Lee course)
ML experiment: model bias, optimization issue, overfitting, mismatch, batch, momentum, error surface. Model bias: add features and make the model more complex, e.g. by deepening the network. Optimization issue: if deeper networks do not obtain smaller loss on the training data… (2021-08-17)
Transformer (Hung-yi Lee course)
Transformer and seq2seq. In seq2seq the output length is determined by the model. Applications: speech recognition, machine translation, speech translation (for languages without a written form), question answering; many NLP problems can be solved by seq2seq. Syntactic parsing: the output is a tree which… (2021-08-03)
Self-Attention (Hung-yi Lee course)
Self-attention: introduction. The input is a set of vectors, e.g. a graph or audio; the output… (2021-08-03)
Paper notes: Evolving Fully Automated Machine Learning via Life-Long Knowledge Anchors
Abstract, introduction, existing Self-AutoML, the proposed Fully-AutoML, and related work on automated machine learning, life-long learning, and meta learning. Abstract: AutoML's achievements are automated feature extraction and model design, e.g. NAS and optimizer selection; its limitations are that data cleaning and model ensembling still need human intervention, and local optima. This paper proposes a complete AutoML pipeline… (2021-08-02)
Life-Long Learning (Hung-yi Lee course)
Life-long learning: catastrophic forgetting, multi-task, multi-model, transfer vs LLL, evaluation, selective synaptic plasticity, additional neural resource allocation, memory replay. Catastrophic forgetting and the multi-task baseline: training on all the data raises a computation problem and a storage issue… (2021-07-30)
PyTorch introduction
PyTorch, Google Colab, and the DNN training procedure. Google Colab link: https://colab.research.google.com/drive/1plDsa66koeaskb3YFag4CAX6FSyoJzBc. DNN training procedure overview. Tensor constructors: import numpy and torch, then build a tensor from a list with x = torch.tensor([[1, … (2021-04-29)
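The tensor-constructor snippet in this preview is cut off mid-expression; a runnable sketch of the same idea looks like this (the example values are assumptions, since the original code is truncated):

```python
import numpy as np
import torch

# Three common ways to construct a tensor, as the note begins to list:
x = torch.tensor([[1.0, -1.0], [-1.0, 1.0]])   # from a nested Python list
y = torch.from_numpy(np.zeros((2, 2)))          # from a numpy array
z = torch.zeros(2, 2)                           # directly, filled with zeros
```

`torch.tensor` copies its input, while `torch.from_numpy` shares memory with the numpy array (and inherits its dtype, `float64` here).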
Backpropagation
Backpropagation and the loss function. The loss is the sum of the per-example costs: $L(\theta)=\sum_i C_i(\theta)=\sum_i y_i^2-\hat{y}_i^2$. The partial derivative with respect to a parameter follows from the chain rule: $\frac{\partial C_i}{\partial\theta}=2y_i\,\frac{\partial y_i}{\partial\theta}$… (2021-04-26)
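The chain-rule step in this entry can be checked numerically. The scalar model y(theta) = theta * x below is an assumption used only to make the derivative concrete; the cost follows the entry's form C = y**2 - y_hat**2 with y_hat treated as a constant:

```python
# Toy setup: y depends on theta through y = theta * x.
def y(theta, x):
    return theta * x

def cost(theta, x, y_hat):
    return y(theta, x) ** 2 - y_hat ** 2

theta, x, y_hat = 0.7, 2.0, 1.0

# Chain rule from the note: dC/dtheta = 2*y * (dy/dtheta), and dy/dtheta = x here.
analytic = 2.0 * y(theta, x) * x

# Central finite difference as an independent check of the same derivative.
eps = 1e-6
numeric = (cost(theta + eps, x, y_hat) - cost(theta - eps, x, y_hat)) / (2 * eps)
```

Backpropagation applies exactly this factorisation layer by layer, caching each intermediate dy/dtheta instead of re-deriving it.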
Machine Learning (Hung-yi Lee open course notes) - Machine Learning and Deep Learning
Machine learning and basic concepts. 1. Functions: 1.1 regression, e.g. PM2.5 prediction; 1.2 classification, e.g. chess; 1.3 others, e.g. structured learning. 2. The procedure for finding the function: 2.1 functions with unknown… (2021-04-23)