A Comprehensive Summary of Pretrained Models with transformers (Part 1)

This is my study summary after working through the original documentation of the transformers library.

Part 1 covers how to load a local model, use it, modify it, and save it.

Later posts will cover training on custom datasets and fine-tuning; with that, the library should be more or less mastered.


# Notes on loading a local model

1. When loading a pretrained model with the transformers library, 99% of the time goes into downloading the model. To avoid this, I fetched the models directly from the Tsinghua University mirror ("https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models/") and placed them under my local directory "H:\\code\\Model\\" (change this path to suit your setup).

2. The downloaded files are usually named "model name" + "-config.json", e.g. bert-base-cased-finetuned-mrpc-config.json. To load a local model, however, transformers expects the model directory to contain files named plainly config.json, vocab.txt, pytorch_model.bin, tf_model.h5, tokenizer.json and so on. So before loading, the model-name prefix must be stripped from each file, otherwise loading fails.

The renaming script I wrote for this is as follows:

 
```python
# coding=utf-8
import os
import os.path

# Path where the model files are stored (the folder to walk through)
rootdir = r"H:\code\Model\bert-large-uncased-whole-word-masking-finetuned-squad"

# os.walk returns three values per directory: 1. the parent directory,
# 2. all folder names (without path), 3. all file names
for parent, dirnames, filenames in os.walk(rootdir):
    for filename in filenames:
        # Strip the model-name prefix so that e.g. "...-finetuned-squad-config.json" becomes "config.json"
        newName = filename.replace('bert-large-uncased-whole-word-masking-finetuned-squad-', '')
        os.rename(os.path.join(parent, filename), os.path.join(parent, newName))  # rename the file
```

Once the files are renamed, the model can be loaded with the transformers library.
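To check the renaming worked before moving on, a minimal sanity-check sketch (assuming the renamed example directory from the script above) is to load just the config and tokenizer:

```python
# Minimal sanity check: these calls only read the renamed local files, no download happens
from transformers import AutoConfig, AutoTokenizer

model_path = r"H:\code\Model\bert-large-uncased-whole-word-masking-finetuned-squad"
config = AutoConfig.from_pretrained(model_path)        # reads config.json
tokenizer = AutoTokenizer.from_pretrained(model_path)  # reads vocab.txt / tokenizer.json
print(config.model_type)  # prints "bert" if the files are in place
```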


# Using the models

## Sequence classification (sentiment classification as the example)

1. Using a pipeline

 
```python
model_path = "H:\\code\\Model\\bert-base-cased-finetuned-mrpc\\"

from transformers import pipeline

# Use the local model with the TensorFlow framework; the default is PyTorch
nlp = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path, framework="tf")

result = nlp("I hate you")[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")

result = nlp("I love you")[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
```

2. Using the model directly

 
```python
model_path = "H:\\code\\Model\\bert-base-cased-finetuned-mrpc\\"

# PyTorch version
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")

paraphrase_classification_logits = model(**paraphrase).logits
not_paraphrase_classification_logits = model(**not_paraphrase).logits

paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]

# Should be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")

# Should not be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")

# TensorFlow version
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModelForSequenceClassification.from_pretrained(model_path)

classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="tf")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="tf")

paraphrase_classification_logits = model(paraphrase)[0]
not_paraphrase_classification_logits = model(not_paraphrase)[0]

paraphrase_results = tf.nn.softmax(paraphrase_classification_logits, axis=1).numpy()[0]
not_paraphrase_results = tf.nn.softmax(not_paraphrase_classification_logits, axis=1).numpy()[0]

# Should be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")

# Should not be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
```

## Extractive question answering

1. Using a pipeline

 
```python
model_path = "H:\\code\\Model\\bert-large-uncased-whole-word-masking-finetuned-squad\\"

from transformers import pipeline

nlp = pipeline("question-answering", model=model_path, tokenizer=model_path)

context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/question-answering/run_squad.py script.
"""

result = nlp(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")

result = nlp(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
```

2. Using the model directly

 
```python
model_path = "H:\\code\\Model\\bert-large-uncased-whole-word-masking-finetuned-squad\\"

# PyTorch version
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForQuestionAnswering.from_pretrained(model_path)

text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""

questions = [
    "How many pretrained models are available in 🤗 Transformers?",
    "What does 🤗 Transformers provide?",
    "🤗 Transformers provides interoperability between which frameworks?",
]

for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    outputs = model(**inputs)
    answer_start_scores = outputs.start_logits
    answer_end_scores = outputs.end_logits
    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = torch.argmax(answer_start_scores)
    # Get the most likely end of the answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}")

# TensorFlow version
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_path)

text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""

questions = [
    "How many pretrained models are available in 🤗 Transformers?",
    "What does 🤗 Transformers provide?",
    "🤗 Transformers provides interoperability between which frameworks?",
]

for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="tf")
    input_ids = inputs["input_ids"].numpy()[0]
    outputs = model(inputs)
    answer_start_scores = outputs.start_logits
    answer_end_scores = outputs.end_logits
    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = tf.argmax(answer_start_scores, axis=1).numpy()[0]
    # Get the most likely end of the answer with the argmax of the score
    answer_end = (tf.argmax(answer_end_scores, axis=1) + 1).numpy()[0]
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}")
```

## Language modeling

1. Using a pipeline

 
```python
model_path = "H:\\code\\Model\\distilbert-base-cased\\"

from transformers import pipeline
from pprint import pprint

nlp = pipeline("fill-mask", model=model_path, tokenizer=model_path, framework="tf")
pprint(nlp(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks."))
```

2. Using the model directly

 
```python
model_path = "H:\\code\\Model\\distilbert-base-cased\\"

# PyTorch version
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelWithLMHead.from_pretrained(model_path)

sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."

input = tokenizer.encode(sequence, return_tensors="pt")
mask_token_index = torch.where(input == tokenizer.mask_token_id)[1]

token_logits = model(input).logits
mask_token_logits = token_logits[0, mask_token_index, :]

top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
for token in top_5_tokens:
    print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))

# TensorFlow version
from transformers import TFAutoModelWithLMHead, AutoTokenizer
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModelWithLMHead.from_pretrained(model_path)

sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."

input = tokenizer.encode(sequence, return_tensors="tf")
mask_token_index = tf.where(input == tokenizer.mask_token_id)[0, 1]

token_logits = model(input)[0]
mask_token_logits = token_logits[0, mask_token_index, :]

top_5_tokens = tf.math.top_k(mask_token_logits, 5).indices.numpy()
for token in top_5_tokens:
    print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
```

## Text generation

1. Using a pipeline

 
```python
model_path = "H:\\code\\Model\\xlnet-base-cased\\"

from transformers import pipeline

text_generator = pipeline("text-generation", model=model_path, tokenizer=model_path, framework="tf")
print(text_generator("As far as I am concerned, I will", max_length=50, do_sample=False))
```

2. Using the model directly

 
```python
model_path = "H:\\code\\Model\\xlnet-base-cased\\"

# PyTorch version
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Padding text helps XLNet with short prompts - proposed by Aman Rusia in https://github.com/rusiaaman/XLNet-gen#methodology
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""

prompt = "Today the weather is really nice and I am planning on "
inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="pt")
prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]
print(generated)

# TensorFlow version
from transformers import TFAutoModelWithLMHead, AutoTokenizer

model = TFAutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Padding text helps XLNet with short prompts - proposed by Aman Rusia in https://github.com/rusiaaman/XLNet-gen#methodology
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""

prompt = "Today the weather is really nice and I am planning on "
inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="tf")
prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]
print(generated)
```

## Named entity recognition

1. Using a pipeline

 
```python
model_path = "H:\\code\\Model\\dbmdz\\bert-large-cased-finetuned-conll03-english\\"
tokenizer_path = "H:\\code\\Model\\bert-base-cased\\"

from transformers import pipeline

nlp = pipeline("ner", model=model_path, tokenizer=tokenizer_path)

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very " \
           "close to the Manhattan Bridge which is visible from the window."
print(nlp(sequence))
```

2. Using the model directly

 
```python
model_path = "H:\\code\\Model\\dbmdz\\bert-large-cased-finetuned-conll03-english\\"
tokenizer_path = "H:\\code\\Model\\bert-base-cased\\"

# PyTorch version
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model = AutoModelForTokenClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)

label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name right after another person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation right after another organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location right after another location
    "I-LOC",   # Location
]

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very " \
           "close to the Manhattan Bridge."

# Bit of a hack to get the tokens including the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")

outputs = model(inputs).logits
predictions = torch.argmax(outputs, dim=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())])

# TensorFlow version
from transformers import TFAutoModelForTokenClassification, AutoTokenizer
import tensorflow as tf

model = TFAutoModelForTokenClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)

label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name right after another person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation right after another organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location right after another location
    "I-LOC",   # Location
]

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very " \
           "close to the Manhattan Bridge."

# Bit of a hack to get the tokens including the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="tf")

outputs = model(inputs)[0]
predictions = tf.argmax(outputs, axis=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())])
```

## Summarization

1. Using a pipeline

 
```python
model_path = "H:\\code\\Model\\t5-base\\"

from transformers import pipeline

summarizer = pipeline("summarization", model=model_path, tokenizer=model_path)

ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney's Office by Immigration and Customs Enforcement and the Department of Homeland Security's
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""

print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
```

2. Using the model directly

 
```python
model_path = "H:\\code\\Model\\t5-base\\"

# PyTorch version (reuses ARTICLE from the pipeline example above)
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# T5 uses a max_length of 512, so we cut the article to 512 tokens.
inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="pt", max_length=512)
outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0]))

# TensorFlow version
from transformers import TFAutoModelWithLMHead, AutoTokenizer

model = TFAutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# T5 uses a max_length of 512, so we cut the article to 512 tokens.
inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="tf", max_length=512)
outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0]))
```

## Translation

1. Using a pipeline

 
```python
model_path = "H:\\code\\Model\\t5-base\\"

from transformers import pipeline

translator = pipeline("translation_en_to_de", model=model_path, tokenizer=model_path)
print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))
```

2. Using the model directly
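As a sketch of the direct-model version, we can follow the same pattern as the summarization example, since T5 treats translation as text-to-text with a task prefix (assuming the same local t5-base directory; PyTorch shown):

```python
model_path = "H:\\code\\Model\\t5-base\\"

# PyTorch sketch: T5 triggers translation via the "translate English to German: " prefix
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer.encode(
    "translate English to German: Hugging Face is a technology company based in New York and Paris",
    return_tensors="pt",
)
outputs = model.generate(inputs, max_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0]))
```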

# Model customization

## Adjusting the configuration at initialization

 
```python
model_path = "H:\\code\\Model\\distilbert-base-uncased\\"

# PyTorch version
from transformers import DistilBertConfig, DistilBertTokenizer, DistilBertForSequenceClassification

# Building a model directly from a custom config gives randomly initialised weights
config = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4*512)
tokenizer = DistilBertTokenizer.from_pretrained(model_path)
model = DistilBertForSequenceClassification(config)

# TensorFlow version
from transformers import DistilBertConfig, DistilBertTokenizer, TFDistilBertForSequenceClassification

config = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4*512)
tokenizer = DistilBertTokenizer.from_pretrained(model_path)
model = TFDistilBertForSequenceClassification(config)
```

## Adjusting only part of the model

 
```python
model_name = "H:\\code\\Model\\distilbert-base-uncased\\"

# PyTorch version
from transformers import DistilBertConfig, DistilBertTokenizer, DistilBertForSequenceClassification

# model_name = "distilbert-base-uncased"
# Keep the pretrained body and change only the size of the classification head
model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=10)
tokenizer = DistilBertTokenizer.from_pretrained(model_name)

# TensorFlow version
from transformers import DistilBertConfig, DistilBertTokenizer, TFDistilBertForSequenceClassification

# model_name = "distilbert-base-uncased"
model = TFDistilBertForSequenceClassification.from_pretrained(model_name, num_labels=10)
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
```

# Saving and loading models

 
```python
# Saving a model
# After fine-tuning, the model can be saved together with its tokenizer:
save_directory = "./save/"
tokenizer.save_pretrained(save_directory)
model.save_pretrained(save_directory)

# Loading a model
from transformers import AutoModel, AutoTokenizer, TFAutoModel

# Load a saved PyTorch model into a TensorFlow model
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = TFAutoModel.from_pretrained(save_directory, from_pt=True)

# Load a saved TensorFlow model into a PyTorch model
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = AutoModel.from_pretrained(save_directory, from_tf=True)
```
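Reloading in the same framework needs no conversion flag; a minimal sketch for the PyTorch case (assuming a PyTorch model was saved above):

```python
# Same-framework reload: no from_pt/from_tf flag needed
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = AutoModel.from_pretrained(save_directory)
```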

 
