An Introduction to Hugging Face and a Brief Look at Its BERT Code

This is the sixth installment in our series on pretrained language models.

Quick links

[The Budding Era] [Winds Rising, Clouds Surging] [General Tips for Text Classification] [The GPT Family] [The Arrival of BERT]

We thank the Natural Language Processing Lab at Tsinghua University for its survey of pretrained language model architectures; we follow that roadmap as we explore the frontier of pretrained language models (the red boxes mark the articles already covered). In this installment we look at the PyTorch implementation of BERT through the code of Hugging Face's Transformers library. Comments and discussion are welcome.

A brief introduction to Hugging Face

Hugging Face is a chatbot startup headquartered in New York whose app became quite popular among teenagers. Compared with other companies, Hugging Face pays more attention to the emotional experience its products create and to the surrounding ecosystem. The official site is at huggingface.co/

What made it far better known, however, is its focus on NLP and its large open-source community. In particular, its open-source NLP pretrained-model library Transformers on GitHub has been downloaded more than a million times and has over 24,000 stars. Transformers provides implementations of a large number of state-of-the-art pretrained language model architectures, together with a framework for loading and using them. The repo lives at github.com/huggingface/

The library was originally called pytorch-pretrained-bert, and it was born alongside BERT. At the end of October 2018, Google open-sourced its TensorFlow implementation of BERT at github.com/google-resea. With its strong performance, BERT immediately drew wide attention from NLP practitioners. Almost at the same time, pytorch-pretrained-bert made its first commit. It reproduced BERT's performance in PyTorch, a framework that already had a large following, and offered downloadable pretrained weights, so that developers without large compute budgets could achieve state-of-the-art fine-tuning in a matter of minutes.

Thanks to PyTorch's friendliness, BERT's power, and pytorch-pretrained-bert's ease of use, the repo quickly won people over, passing 1,000 stars in less than ten days. By November 17, 2018, the repo had implemented BERT's core functionality and released version 0.1.2. The authors did not stop there: they began porting GPT and other models into the repo as well. The 0.5.0 release on February 11, 2019 added OpenAI GPT and Google's Transformer-XL.

By July 16, 2019, the repo hosted six pretrained language models: BERT, GPT, GPT-2, Transformer-XL, XLNet, and XLM. The name pytorch-pretrained-bert no longer fit, so it was renamed pytorch-transformers, reflecting its much larger scope. And that was not the end of it. The TensorFlow 2 beta was released in June 2019, and Hugging Face reacted quickly: to stay ahead, the library added deep interoperability between TensorFlow 2.0 and PyTorch models, letting you move models freely between the two frameworks. Version 2.0.0 was released in September 2019, and the library was officially renamed transformers. Today, transformers offers 32 pretrained model architectures covering more than 100 languages. Simple, powerful, and fast, it is an excellent choice for newcomers.

Simple BERT usage with Transformers

In the previous installments we have been sharing notes from reading the papers; even if it was not your first time through them, hopefully you came away with something new, as we did. In this installment, let's look at how to use the Transformers package to make a simple call to a BERT model.

We will not dwell on installation; for example, to install version 2.2.0, pip install transformers==2.2.0 is all you need. Let's see how to call BERT.

import torch
from transformers import BertModel, BertTokenizer
# Here we use the bert-base model; its vocabulary is lowercased
model_name = 'bert-base-uncased'
# Load the tokenizer that matches the model
tokenizer = BertTokenizer.from_pretrained(model_name)
# Load the model
model = BertModel.from_pretrained(model_name)
# Input text
input_text = "Here is some text to encode"
# Use the tokenizer to turn the text into token ids
input_ids = tokenizer.encode(input_text, add_special_tokens=True)
# input_ids: [101, 2182, 2003, 2070, 3793, 2000, 4372, 16044, 102]
input_ids = torch.tensor([input_ids])
# Get the hidden states of BERT's last layer
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # Models outputs are now tuples

“”" tensor([[[-0.0549, 0.1053, -0.1065, …, -0.3550, 0.0686, 0.6506],
[-0.5759, -0.3650, -0.1383, …, -0.6782, 0.2092, -0.1639],
[-0.1641, -0.5597, 0.0150, …, -0.1603, -0.1346, 0.6216],
…,
[ 0.2448, 0.1254, 0.1587, …, -0.2749, -0.1163, 0.8809],
[ 0.0481, 0.4950, -0.2827, …, -0.6097, -0.1212, 0.2527],
[ 0.9046, 0.2137, -0.5897, …, 0.3040, -0.6172, -0.1950]]])
shape: (1, 9, 768)
“”"

As you can see, in fewer than ten lines of code, imports included, we have loaded a pretrained BERT model and encoded a text of our choosing, producing a 768-dimensional vector for every token. For a binary classification task, we could then take the 768-dimensional vector of the first token, i.e. [CLS], feed it into a linear layer to predict the classification logits, or train it against labels; a minimal sketch of such a head follows.
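The sketch below illustrates that idea with the same transformers 2.x tuple-style outputs as above. The class name SimpleBertClassifier and the num_labels argument are ours, not part of the library (the library's own equivalent is BertForSequenceClassification).

import torch
import torch.nn as nn
from transformers import BertModel

class SimpleBertClassifier(nn.Module):
    """A hand-rolled classifier: BERT encoder plus one linear layer on the [CLS] vector."""
    def __init__(self, model_name='bert-base-uncased', num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids):
        # outputs[0] is the sequence of hidden states, shape (batch, seq_len, hidden_size)
        sequence_output = self.bert(input_ids)[0]
        cls_vector = sequence_output[:, 0]    # the [CLS] token's 768-dim vector
        logits = self.classifier(cls_vector)  # (batch, num_labels)
        return logits

# Training would then be the usual cross-entropy loop:
# loss = nn.CrossEntropyLoss()(logits, labels)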

If you want to reproduce BERT's results on common NLP datasets, Transformers also ships ready-made scripts and recipes: prepare the data, run a command, and you are done. The fine-tuning task can be switched to whatever you need, which is very convenient; an example command is sketched below.
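For reference, a GLUE fine-tuning run with the 2.x-era examples/run_glue.py script looked roughly like the command below. Flag names and script paths have shifted between versions, and GLUE_DIR is a placeholder for wherever you downloaded the GLUE data, so treat this as a sketch and check the README of the version you installed.

export GLUE_DIR=/path/to/glue_data   # placeholder: your local copy of the GLUE data
python examples/run_glue.py \
  --model_type bert \
  --model_name_or_path bert-base-uncased \
  --task_name MRPC \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir $GLUE_DIR/MRPC \
  --max_seq_length 128 \
  --per_gpu_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/mrpc_output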

BERT configuration

Next, let's take a closer look at the Transformers source code. Go into the src/transformers directory of the repo, which contains a large number of Python files.

The files whose names start with configuration hold the configuration code of each model, for example configuration_bert.py. In this file you will mainly find the definition of BertConfig, a class inheriting from PretrainedConfig, along with the download URLs of the config files for the different BERT variants; the first three are shown below.

BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = {
    "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json",
    "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json",
    "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json",
}

Opening the first link downloads the configuration of the bert-base-uncased model, which includes dropout, hidden_size, num_hidden_layers, vocab_size and so on. For bert-base-uncased, for instance, the model has 12 layers and a vocabulary of 30,522 tokens. You can even use output_hidden_states in the config to control whether all hidden states are returned (illustrated in the sketch after the config listing).

{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 30522
}
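As an illustration of the output_hidden_states switch just mentioned, here is a small sketch using the 2.x-style tuple outputs; the position of the hidden-states element in the returned tuple follows the docstrings of that era, so verify it against the version you have installed.

import torch
from transformers import BertConfig, BertModel, BertTokenizer

model_name = 'bert-base-uncased'
# Ask the model to return the hidden states of every layer, not just the last one
config = BertConfig.from_pretrained(model_name, output_hidden_states=True)
model = BertModel.from_pretrained(model_name, config=config)
tokenizer = BertTokenizer.from_pretrained(model_name)

input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])
with torch.no_grad():
    outputs = model(input_ids)

# The tuple now also carries all hidden states:
# (last_hidden_state, pooler_output, all_hidden_states)
all_hidden_states = outputs[2]
print(len(all_hidden_states))        # 13: the embedding output plus 12 layers
print(all_hidden_states[-1].shape)   # torch.Size([1, 9, 768])
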
BERT tokenization

The files whose names start with tokenization contain the vocabulary-related code. For example, tokenization_bert.py defines helper functions such as whitespace_tokenize as well as the different tokenizer classes, and it also lists the vocab.txt files for each model. The first link leads to the vocabulary of bert-base-uncased, which contains 30,522 tokens, matching vocab_size in the config.

Token 0 is [PAD], token 101 is [CLS], and token 102 is [SEP]. So the ids we obtained earlier, [101, 2182, 2003, 2070, 3793, 2000, 4372, 16044, 102], correspond, after tokenization and before conversion to ids, to the tokens ['[CLS]', 'here', 'is', 'some', 'text', 'to', 'en', '##code', '[SEP]'], which should look familiar after our earlier walkthrough of the BERT paper. It is also worth mentioning that BERT's vocabulary reserves quite a few unused tokens: if your text uses special symbols that are not in the vocab, you can repurpose these unused slots so that embeddings for the new tokens can be trained. The snippet after the vocab file map below makes the tokenize and convert steps explicit.

PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {
        "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt",
        "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt",
        "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt",
    }
}
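To make those two steps explicit, here is a small sketch that separates WordPiece tokenization from id conversion; the printed values are the ones quoted above, assuming the same bert-base-uncased tokenizer.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

text = "Here is some text to encode"
# Step 1: split into WordPiece tokens (no special tokens yet)
tokens = tokenizer.tokenize(text)
# ['here', 'is', 'some', 'text', 'to', 'en', '##code']

# Step 2: add [CLS]/[SEP] and map the tokens to vocabulary ids
tokens = ['[CLS]'] + tokens + ['[SEP]']
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# [101, 2182, 2003, 2070, 3793, 2000, 4372, 16044, 102]

# tokenizer.encode(text, add_special_tokens=True) performs both steps in one call
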
BERT modeling

The files whose names start with modeling contain what we care about most, the model code itself, for example modeling_bert.py. As before, the file lists many pretrained checkpoints available for download, which you can fetch as needed.

Within the code, the class to focus on is BertModel, the core BERT model. Its class definition shows that it is composed of an embeddings module, an encoder, and a pooler; in forward, the input passes through these three modules in order and the output is returned. The short sketch below shows what the returned tuple looks like in practice, followed by the source of the class itself.
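A quick look at the outputs before reading the source, assuming the same 2.x tuple-style interface as in the earlier example:

import torch
from transformers import BertModel, BertTokenizer

model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])
with torch.no_grad():
    outputs = model(input_ids)

sequence_output, pooled_output = outputs[:2]
print(sequence_output.shape)  # torch.Size([1, 9, 768]): one vector per token from the encoder
print(pooled_output.shape)    # torch.Size([1, 768]): the pooler's transform of the [CLS] vector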

class BertModel(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.config = config

        self.embeddings = BertEmbeddings(config)
        self.encoder = BertEncoder(config)
        self.pooler = BertPooler(config)

        self.init_weights()

    def forward(
        self, input_ids=None, attention_mask=None, token_type_ids=None,
        position_ids=None, head_mask=None, inputs_embeds=None,
        encoder_hidden_states=None, encoder_attention_mask=None,
    ):
        """ (part of the code omitted) """

        embedding_output = self.embeddings(
            input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
        )
        encoder_outputs = self.encoder(
            embedding_output,
            attention_mask=extended_attention_mask,
            head_mask=head_mask,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_extended_attention_mask,
        )
        sequence_output = encoder_outputs[0]
        pooled_output = self.pooler(sequence_output)

        outputs = (sequence_output, pooled_output,) + encoder_outputs[
            1:
        ]  # add hidden_states and attentions if they are here
    <span class="k">return</span> <span class="n">outputs</span>  <span class="c1"># sequence_output, pooled_output, (hidden_states), (attentions)</span></code></pre></div><p>BertEmbeddings这个类中可以清楚的看到,embedding由三种embedding相加得到,经过layernorm 和 dropout后输出。</p><div class="highlight"><pre><code class="language-python"><span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">config</span><span class="p">):</span>
    <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">word_embeddings</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Embedding</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">vocab_size</span><span class="p">,</span> <span class="n">config</span><span class="o">.</span><span class="n">hidden_size</span><span class="p">,</span> <span class="n">padding_idx</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">position_embeddings</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Embedding</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">max_position_embeddings</span><span class="p">,</span> <span class="n">config</span><span class="o">.</span><span class="n">hidden_size</span><span class="p">)</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">token_type_embeddings</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Embedding</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">type_vocab_size</span><span class="p">,</span> <span class="n">config</span><span class="o">.</span><span class="n">hidden_size</span><span class="p">)</span>
    <span class="c1"># self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load</span>
    <span class="c1"># any TensorFlow checkpoint file</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">LayerNorm</span> <span class="o">=</span> <span class="n">BertLayerNorm</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">hidden_size</span><span class="p">,</span> <span class="n">eps</span><span class="o">=</span><span class="n">config</span><span class="o">.</span><span class="n">layer_norm_eps</span><span class="p">)</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">dropout</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dropout</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">hidden_dropout_prob</span><span class="p">)</span>

def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):
“”" 省略 embedding生成过程 “”"

    <span class="n">embeddings</span> <span class="o">=</span> <span class="n">inputs_embeds</span> <span class="o">+</span> <span class="n">position_embeddings</span> <span class="o">+</span> <span class="n">token_type_embeddings</span>
    <span class="n">embeddings</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">LayerNorm</span><span class="p">(</span><span class="n">embeddings</span><span class="p">)</span>
    <span class="n">embeddings</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">dropout</span><span class="p">(</span><span class="n">embeddings</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">embeddings</span></code></pre></div><p>BertEncoder主要将embedding的输出,逐个经过每一层Bertlayer的处理,得到各层hidden_state,再根据config的参数,来决定最后是否所有的hidden_state都要输出,BertLayer的内容展开的话,篇幅过长,读者感兴趣可以自己一探究竟。</p><div class="highlight"><pre><code class="language-python"><span class="k">class</span> <span class="nc">BertEncoder</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">config</span><span class="p">):</span>
    <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">output_attentions</span> <span class="o">=</span> <span class="n">config</span><span class="o">.</span><span class="n">output_attentions</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">output_hidden_states</span> <span class="o">=</span> <span class="n">config</span><span class="o">.</span><span class="n">output_hidden_states</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">layer</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">ModuleList</span><span class="p">([</span><span class="n">BertLayer</span><span class="p">(</span><span class="n">config</span><span class="p">)</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">num_hidden_layers</span><span class="p">)])</span>

<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span>
    <span class="bp">self</span><span class="p">,</span>
    <span class="n">hidden_states</span><span class="p">,</span>
    <span class="n">attention_mask</span><span class="o">=</span><span class="bp">None</span><span class="p">,</span>
    <span class="n">head_mask</span><span class="o">=</span><span class="bp">None</span><span class="p">,</span>
    <span class="n">encoder_hidden_states</span><span class="o">=</span><span class="bp">None</span><span class="p">,</span>
    <span class="n">encoder_attention_mask</span><span class="o">=</span><span class="bp">None</span><span class="p">,</span>
<span class="p">):</span>
    <span class="n">all_hidden_states</span> <span class="o">=</span> <span class="p">()</span>
    <span class="n">all_attentions</span> <span class="o">=</span> <span class="p">()</span>
    <span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">layer_module</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">layer</span><span class="p">):</span>
        <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">output_hidden_states</span><span class="p">:</span>
            <span class="n">all_hidden_states</span> <span class="o">=</span> <span class="n">all_hidden_states</span> <span class="o">+</span> <span class="p">(</span><span class="n">hidden_states</span><span class="p">,)</span>

        <span class="n">layer_outputs</span> <span class="o">=</span> <span class="n">layer_module</span><span class="p">(</span>
            <span class="n">hidden_states</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">,</span> <span class="n">head_mask</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">encoder_hidden_states</span><span class="p">,</span> <span class="n">encoder_attention_mask</span>
        <span class="p">)</span>
        <span class="n">hidden_states</span> <span class="o">=</span> <span class="n">layer_outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>

        <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">output_attentions</span><span class="p">:</span>
            <span class="n">all_attentions</span> <span class="o">=</span> <span class="n">all_attentions</span> <span class="o">+</span> <span class="p">(</span><span class="n">layer_outputs</span><span class="p">[</span><span class="mi">1</span><span class="p">],)</span>

    <span class="c1"># Add last layer</span>
    <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">output_hidden_states</span><span class="p">:</span>
        <span class="n">all_hidden_states</span> <span class="o">=</span> <span class="n">all_hidden_states</span> <span class="o">+</span> <span class="p">(</span><span class="n">hidden_states</span><span class="p">,)</span>

    <span class="n">outputs</span> <span class="o">=</span> <span class="p">(</span><span class="n">hidden_states</span><span class="p">,)</span>
    <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">output_hidden_states</span><span class="p">:</span>
        <span class="n">outputs</span> <span class="o">=</span> <span class="n">outputs</span> <span class="o">+</span> <span class="p">(</span><span class="n">all_hidden_states</span><span class="p">,)</span>
    <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">output_attentions</span><span class="p">:</span>
        <span class="n">outputs</span> <span class="o">=</span> <span class="n">outputs</span> <span class="o">+</span> <span class="p">(</span><span class="n">all_attentions</span><span class="p">,)</span>
    <span class="k">return</span> <span class="n">outputs</span>  <span class="c1"># last-layer hidden state, (all hidden states), (all attentions)</span></code></pre></div><p>Bertpooler 其实就是将BERT的[CLS]的hidden_state 取出,经过一层DNN和Tanh计算后输出。</p><div class="highlight"><pre><code class="language-python"><span class="k">class</span> <span class="nc">BertPooler</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">config</span><span class="p">):</span>
    <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">dense</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="n">config</span><span class="o">.</span><span class="n">hidden_size</span><span class="p">,</span> <span class="n">config</span><span class="o">.</span><span class="n">hidden_size</span><span class="p">)</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">activation</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Tanh</span><span class="p">()</span>

<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">hidden_states</span><span class="p">):</span>
    <span class="c1"># We "pool" the model by simply taking the hidden state corresponding</span>
    <span class="c1"># to the first token.</span>
    <span class="n">first_token_tensor</span> <span class="o">=</span> <span class="n">hidden_states</span><span class="p">[:,</span> <span class="mi">0</span><span class="p">]</span>
    <span class="n">pooled_output</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">dense</span><span class="p">(</span><span class="n">first_token_tensor</span><span class="p">)</span>
    <span class="n">pooled_output</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">activation</span><span class="p">(</span><span class="n">pooled_output</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">pooled_output</span></code></pre></div><p>在这个文件中还有上述基础的BertModel的进一步的变化,比如BertForMaskedLM,BertForNextSentencePrediction这些是Bert加了预训练头的模型,还有BertForSequenceClassification, BertForQuestionAnswering  这些加上了特定任务头的模型。</p><blockquote><b>未完待续 </b></blockquote><p>本期的代码浅析就给大家分享到这里,感谢大家的阅读和支持,下期我们会继续给大家带来预训练语言模型相关的论文阅读,敬请大家期待!</p><p>欢迎关注朴素人工智能,这里有很多最新最热的论文阅读分享,有问题或建议可以在公众号下留言。</p><figure data-size="normal"><noscript><img src="https://pic2.zhimg.com/v2-e417f538e56d398f50f1d3698e9bce41_b.jpg" data-caption="" data-size="normal" data-rawwidth="720" data-rawheight="90" class="origin_image zh-lightbox-thumb" width="720" data-original="https://pic2.zhimg.com/v2-e417f538e56d398f50f1d3698e9bce41_r.jpg"/></noscript><img src="https://pic2.zhimg.com/80/v2-e417f538e56d398f50f1d3698e9bce41_720w.jpg" data-caption="" data-size="normal" data-rawwidth="720" data-rawheight="90" class="origin_image zh-lightbox-thumb lazy" width="720" data-original="https://pic2.zhimg.com/v2-e417f538e56d398f50f1d3698e9bce41_r.jpg" data-actualsrc="https://pic2.zhimg.com/v2-e417f538e56d398f50f1d3698e9bce41_b.jpg" data-lazy-status="ok"></figure><p>● <a href="https://link.zhihu.com/?target=http%3A//mp.weixin.qq.com/s%3F__biz%3DMzAxMDk0OTI3Ng%3D%3D%26mid%3D2247484022%26idx%3D1%26sn%3Defa203443f39e055c8d575c4f1ad6c33%26chksm%3D9b49c585ac3e4c9388bbd9f40b57cf14344f76d5e30a8876ccc66c4da5a2bdad09e1c81fc2a6%26scene%3D21%23wechat_redirect" class=" wrap external" target="_blank" rel="nofollow noreferrer" data-za-detail-view-id="1043">性能媲美BERT却只有其1/10参数量?| 近期最火模型ELECTRA解析</a></p><p>● <a href="https://link.zhihu.com/?target=http%3A//mp.weixin.qq.com/s%3F__biz%3DMzAxMDk0OTI3Ng%3D%3D%26mid%3D2247484003%26idx%3D1%26sn%3D51b706f885f6d20586503725beba1656%26chksm%3D9b49c590ac3e4c86aaf933273b79ace5118c32694a4af5e839b2cd916e84ba981e283c1fd5e2%26scene%3D21%23wechat_redirect" class=" wrap external" target="_blank" rel="nofollow noreferrer" data-za-detail-view-id="1043">微软统一预训练语言模型UniLM 2.0解读</a></p><p>●  <a href="https://link.zhihu.com/?target=http%3A//mp.weixin.qq.com/s%3F__biz%3DMzAxMDk0OTI3Ng%3D%3D%26mid%3D2247483966%26idx%3D1%26sn%3D435f4ddf48960c0c4e49f0d20fa328ba%26chksm%3D9b49c5cdac3e4cdb557138ea683af1944077a3858138e42c64e49526b09b6f1cdbf1d7068eca%26scene%3D21%23wechat_redirect" class=" wrap external" target="_blank" rel="nofollow noreferrer" data-za-detail-view-id="1043">BERT,开启NLP新时代的王者</a></p><p>●  <a href="https://link.zhihu.com/?target=http%3A//mp.weixin.qq.com/s%3F__biz%3DMzAxMDk0OTI3Ng%3D%3D%26mid%3D2247483887%26idx%3D1%26sn%3Da0c3d1c404f75d56f6eaf4df4151304c%26chksm%3D9b49c61cac3e4f0a21e3aac02db3222592c3d35288c3bdec01ac5f849593bd4781e71c8acd83%26scene%3D21%23wechat_redirect" class=" wrap external" target="_blank" rel="nofollow noreferrer" data-za-detail-view-id="1043">十分钟了解文本分类通用技巧</a> </p><p></p><p></p><p></p></div>