Machine Learning Tidbits #1: Generative vs. Discriminative Models

Generative models vs. discriminative models


2021-12-22: Reorganized and updated this article while working through 《统计学习方法》 (Statistical Learning Methods).

1 Basic Concepts

The task of supervised learning is to learn a model and then apply it to predict the corresponding output for a given input. This model generally takes one of two forms: a decision function $Y = f(X)$, or a conditional probability distribution $P(Y \mid X)$.

An example to make this concrete:

  • Input: a person who is 2 m tall and weighs 100 kg (200 jin)
  • $Y = f(X)$ outputs 0 (meaning "male"): a single classification result
  • $P(Y \mid X)$ might instead output $P(\text{male}) = 0.9$, $P(\text{female}) = 0.1$

Supervised learning methods can further be divided into the generative approach and the discriminative approach; the models they learn are called the generative model and the discriminative model, respectively. The small code sketch below illustrates the two output forms.
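A minimal sketch of the two output forms, using scikit-learn's LogisticRegression on made-up data (neither the data nor the library comes from the original post):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [height in m, weight in kg]; label 0 = male, 1 = female.
X_train = np.array([[1.90, 95.0], [1.85, 88.0], [1.62, 50.0], [1.58, 47.0]])
y_train = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X_train, y_train)

x_new = np.array([[2.0, 100.0]])   # a person 2 m tall weighing 100 kg

# Decision function Y = f(X): a single class label.
print(clf.predict(x_new))          # e.g. [0] -> "male"

# Conditional distribution P(Y|X): one probability per class.
print(clf.predict_proba(x_new))    # e.g. [[0.9, 0.1]] -> P(male)=0.9, P(female)=0.1
```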

2 How the Approaches Differ

2.1 Generative models

Principle of the generative approach: ① learn the joint probability distribution $P(X, Y)$ from the data; ② then obtain the conditional probability distribution $P(Y \mid X) = \frac{P(X, Y)}{P(X)}$ as the model used for prediction.
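To make this concrete, here is a minimal sketch (an assumed toy example, not from the original text) that estimates $P(X, Y)$ by counting over a small discrete dataset and then derives $P(Y \mid X)$ from it:

```python
from collections import Counter

# Toy data: x is a discretized feature ("tall"/"short"), y is the label.
samples = [("tall", "male"), ("tall", "male"), ("tall", "female"),
           ("short", "female"), ("short", "female"), ("short", "male")]

n = len(samples)
joint = {pair: c / n for pair, c in Counter(samples).items()}   # P(X, Y)
p_x = {x: c / n for x, c in Counter(x for x, _ in samples).items()}  # P(X)

def predict_proba(x):
    """Return P(Y | X = x) derived from the learned joint distribution."""
    labels = {y for _, y in samples}
    return {y: joint.get((x, y), 0.0) / p_x[x] for y in labels}

print(predict_proba("tall"))   # e.g. {'male': 0.667, 'female': 0.333}
```

The point of the sketch is only the two-step structure: the model is built from the joint distribution first, and the conditional distribution used for prediction is derived from it afterwards.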
