Hung-yi Lee (李宏毅) Course Notes
Author: cx_0401
BERT (Hung-yi Lee Course)
BERT, self-supervised learning, and BERT applications. Self-supervised learning is not exactly the same as unsupervised learning; it is one kind of unsupervised learning. Masked input: BERT contains a Transformer encoder. Some input tokens are replaced, either with a mask token or with a random token, and the model is trained to output the original token at those positions, which makes the training self-supervised. Next-sentence prediction: predict whether two sentences follow each other. Applications: first pre-train a BERT model, then fine-tune it and apply it to other tasks. For tasks with a sequence as input and a class as output, a linear layer is added on top of BERT, and … Original · 2021-11-04 22:48:16
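The mask-or-random-replace corruption described above can be sketched in plain Python. This is a toy illustration of how self-supervised training pairs are built, not BERT's actual implementation; the function name, `[MASK]` string, and the masking ratio are illustrative assumptions.

```python
import random

MASK = "[MASK]"

def corrupt(tokens, i, vocab, p_mask=0.8, rng=random):
    """Corrupt position i: usually replace it with [MASK], otherwise with a
    random vocabulary token. Returns (corrupted sequence, original token),
    so the model can be trained to predict the original token back.
    (Ratios here are illustrative, not BERT's exact recipe.)"""
    corrupted = list(tokens)
    label = tokens[i]                      # the training target
    if rng.random() < p_mask:
        corrupted[i] = MASK                # mask replacement
    else:
        corrupted[i] = rng.choice(vocab)   # random-token replacement
    return corrupted, label

tokens = ["深", "度", "学", "习"]
corrupted, label = corrupt(tokens, 2, vocab=["学", "习", "天"],
                           rng=random.Random(0))
```

The loss is then computed only at the corrupted position, comparing the model's prediction against `label`.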
GAN (Hung-yi Lee Course, unfinished)
Generative Adversarial Network. Generator: for z drawn from different distributions we obtain y with different distributions; the network that maps z to y is called the generator (example: link). In other words, the same input may yield different outputs (e.g. turning right or turning left). Unconditional generation: a normal distribution is chosen for z, because the complex structure can be left to the generator. Besides the generator, we also need a discriminator, which is used to … Original · 2021-11-02 21:45:47
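The generator's job of reshaping a simple normal distribution into another distribution can be sketched with NumPy. This is a toy affine "generator" and a logistic "discriminator"; all weights and shapes are made-up stand-ins, not a trained GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W, b):
    """Toy generator: an affine map from latent z to output y."""
    return z @ W + b

def discriminator(y, w, c):
    """Toy discriminator: a logistic score in (0, 1) for real vs. fake."""
    return 1.0 / (1.0 + np.exp(-(y @ w + c)))

# z drawn from a normal distribution; the generator reshapes it
z = rng.standard_normal((1000, 2))
W = np.array([[2.0, 0.0], [0.0, 0.5]])   # illustrative weights
b = np.array([1.0, -1.0])
y = generator(z, W, b)                    # a new, shifted/scaled distribution

scores = discriminator(y, np.array([0.1, 0.1]), 0.0)
```

In actual GAN training the two networks would be updated adversarially: the discriminator to separate real from generated samples, the generator to fool it.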
Classification (Hung-yi Lee Course)
Topics: classification as regression, class as one-hot vector, loss of classification. Classification as regression: if classes 1/2/3 have some relation (e.g. age and grade), classification can be treated as regression. Class as one-hot vector: a two-dimensional softmax is the same as a sigmoid … Original · 2021-08-17 21:52:19
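The claim that a two-class softmax coincides with a sigmoid can be checked directly in plain Python (a minimal sketch; the logit value is arbitrary):

```python
import math

def softmax(logits):
    m = max(logits)                            # shift for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# softmax over [z, 0] gives sigmoid(z) as the probability of class 1
z = 1.7
p_softmax = softmax([z, 0.0])[0]
p_sigmoid = sigmoid(z)

# a one-hot target vector for class 0 out of 3 classes
one_hot = [1.0 if i == 0 else 0.0 for i in range(3)]
```

Algebraically, softmax([z, 0])[0] = e^z / (e^z + 1) = 1 / (1 + e^-z), which is exactly the sigmoid.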
How to Train a Network (Hung-yi Lee Course)
Topics: ML experiment, model bias, optimization issue, overfitting, mismatch, batch, momentum, error surface. Model bias: add features, or make the model more complex to raise the capacity of the network. Optimization issue: if deeper networks do not obtain smaller loss on the training data … Original · 2021-08-17 21:31:14
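The momentum update listed among the topics above can be sketched in plain Python on a toy quadratic loss (the loss function and hyperparameters are illustrative):

```python
def grad(theta):
    """Gradient of the toy loss L(theta) = (theta - 3)^2."""
    return 2.0 * (theta - 3.0)

theta, velocity = 0.0, 0.0
lr, momentum = 0.1, 0.9
for _ in range(200):
    # velocity accumulates past gradients, helping roll past flat regions
    velocity = momentum * velocity - lr * grad(theta)
    theta = theta + velocity          # move by the velocity, not the raw gradient
```

After enough steps, `theta` settles at the minimum 3.0; the momentum term is what lets the update keep moving even where the current gradient is small.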
Transformer (Hung-yi Lee Course)
Seq2seq: the output length is determined by the model. Applications: speech recognition, machine translation, speech translation (for languages without a written form), question answering; many NLP problems can be solved by seq2seq. Syntactic parsing: the output is a tree, which … Original · 2021-08-03 15:59:41
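"The output length is determined by the model" usually means the decoder keeps emitting tokens until it produces an end-of-sequence symbol. A plain-Python sketch of that stopping behavior (the "decoder step" here is a hard-coded stand-in, not a real model):

```python
EOS = "<EOS>"

def toy_decoder_step(prev_token, step):
    """Stand-in for one decoder step: emits fixed words, then EOS."""
    vocab = ["machine", "learning", "rocks"]
    return EOS if step >= len(vocab) else vocab[step]

def decode(max_len=10):
    out, prev = [], "<BOS>"
    for step in range(max_len):
        tok = toy_decoder_step(prev, step)
        out.append(tok)
        if tok == EOS:        # the model itself decides when to stop
            break
        prev = tok
    return out

result = decode()
```

`max_len` is only a safety cap; the length of `result` is chosen by the (toy) model via EOS, which is the seq2seq property the notes refer to.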
Self-Attention (Hung-yi Lee Course)
Introduction. Input: a vector set (e.g. graphs, audio). Output: … Original · 2021-08-03 11:22:34
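Self-attention over a set of input vectors can be sketched with NumPy as scaled dot-product attention (the weight matrices here are random stand-ins for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of row vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise attention scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over each row
    return weights @ V, weights                      # each output mixes all inputs

X = rng.standard_normal((4, 8))     # a set of 4 input vectors of dimension 8
Wq = rng.standard_normal((8, 8))
Wk = rng.standard_normal((8, 8))
Wv = rng.standard_normal((8, 8))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each row of `attn` sums to 1, so every output vector is a weighted average of the value vectors of the whole set.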
Paper Notes: Evolving Fully Automated Machine Learning via Life-Long Knowledge Anchors
Outline: abstract; introduction (1. existing Self-AutoML, 2. the Fully-AutoML proposed in this paper); related work (1. automated machine learning, 2. lifelong learning and meta-learning). Abstract: AutoML's achievements include automating manual feature extraction and model design (e.g. NAS, optimizer selection); its shortcomings are that data cleaning and model ensembling still require human intervention, and results can be locally optimal. This paper proposes a complete AutoML pipeline … Original · 2021-08-02 18:31:38
Life Long Learning (Hung-yi Lee Course)
Topics: catastrophic forgetting, multi-task, multi-model, transfer vs. LLL, evaluation, selective synaptic plasticity, additional neural resource allocation, memory replay. Catastrophic forgetting: the multi-task workaround (training on all data from all tasks) runs into a computation problem and a storage issue … Original · 2021-07-30 10:15:17
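Selective synaptic plasticity (e.g. EWC-style regularization) counters catastrophic forgetting by penalizing changes to parameters that mattered for earlier tasks: L'(θ) = L(θ) + λ Σᵢ bᵢ (θᵢ − θᵢ*)². A minimal plain-Python sketch, where the importance weights and parameter values are purely illustrative:

```python
def regularized_loss(task_loss, theta, theta_prev, importance, lam=1.0):
    """L'(theta) = L(theta) + lambda * sum_i b_i * (theta_i - theta_prev_i)^2.
    importance[i] (b_i) says how much parameter i mattered on the old task."""
    penalty = sum(b * (t - tp) ** 2
                  for b, t, tp in zip(importance, theta, theta_prev))
    return task_loss + lam * penalty

theta      = [1.0, 2.0]      # current parameters while learning the new task
theta_prev = [0.5, 2.0]      # parameters learned on the old task
importance = [10.0, 0.1]     # b_i: moving theta[0] is heavily penalized
loss = regularized_loss(0.3, theta, theta_prev, importance, lam=1.0)
```

With a large bᵢ the parameter is effectively frozen near its old value; with a small bᵢ it stays plastic and free to adapt to the new task.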
PyTorch Introduction
Google Colab link: https://colab.research.google.com/drive/1plDsa66koeaskb3YFag4CAX6FSyoJzBc. DNN training procedure: overview. Tensor — 2.1 constructor: `import numpy`, `import torch`; from a list: `x = torch.tensor([[1, …` Original · 2021-04-29 16:26:31
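The truncated constructor snippet above presumably builds tensors from a nested list and from a NumPy array; a minimal sketch of both, assuming PyTorch is installed (the values are made up):

```python
import numpy as np
import torch

# from a nested Python list (integers give an int64 tensor by default)
x = torch.tensor([[1, 2], [3, 4]])

# from a NumPy array (torch.tensor copies the data and keeps the dtype)
a = np.array([[1.0, 2.0], [3.0, 4.0]])
y = torch.tensor(a)

print(x.shape)   # torch.Size([2, 2])
```

`torch.from_numpy(a)` would instead share memory with the NumPy array rather than copy it.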
Backpropagation
Loss function: $L(\theta)=\sum_i C_i^n(\theta)=\sum_i \left(y_i^2-\hat{y}_i^2\right)$. How to calculate the partial derivative with respect to a parameter: $\frac{\partial C_i}{\partial \theta} = 2y_i\,\frac{\partial y_i}{\partial \theta}$ … Original · 2021-04-26 15:58:09
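With $C_i = y_i^2 - \hat{y}_i^2$ and $\hat{y}_i$ a constant target, the chain rule gives $\partial C_i/\partial\theta = 2y_i\,\partial y_i/\partial\theta$. A plain-Python numerical check on a toy model output $y(\theta)$ (the function chosen for $y$ is illustrative):

```python
def y(theta):
    """Toy model output as a function of a single parameter theta."""
    return theta ** 2 + 1.0

def dy_dtheta(theta):
    """Analytic derivative of the toy y(theta)."""
    return 2.0 * theta

def C(theta, y_hat):
    """C = y^2 - y_hat^2, with y_hat a constant target."""
    return y(theta) ** 2 - y_hat ** 2

theta, y_hat, eps = 1.5, 2.0, 1e-6
analytic = 2.0 * y(theta) * dy_dtheta(theta)                    # chain rule
numeric = (C(theta + eps, y_hat) - C(theta - eps, y_hat)) / (2 * eps)
```

The central finite difference `numeric` agrees with the chain-rule value `analytic`, which is the relation backpropagation exploits layer by layer.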
Machine Learning (Hung-yi Lee Open Course Notes) - Machine Learning and Deep Learning
Machine learning and basic concepts. 1. Functions: 1.1 regression (e.g. PM2.5 prediction); 1.2 classification (e.g. playing chess); 1.3 others: structured learning. 2. The procedure for finding the function: 2.1 a function with unknown … Original · 2021-04-23 20:20:30
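The "function with unknown parameters" step opens the standard three-step recipe: define a model with unknowns, define a loss, then optimize. A plain-Python sketch fitting y = w·x + b by gradient descent (the data and hyperparameters are made up):

```python
# Step 1: a function with unknown parameters w, b
def model(x, w, b):
    return w * x + b

# Step 2: a loss measuring how bad a choice of (w, b) is
def loss(w, b, xs, ys):
    return sum((model(x, w, b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Step 3: optimization (plain gradient descent on w and b)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]            # generated by y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    dw = sum(2 * (model(x, w, b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (model(x, w, b) - y) for x, y in zip(xs, ys)) / len(xs)
    w, b = w - lr * dw, b - lr * db
```

Gradient descent recovers w ≈ 2 and b ≈ 1, the parameters that generated the toy data; deep learning follows the same three steps with far more parameters.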