Large language models are currently growing at an explosive pace, and models from the LLaMA family account for a large share of them. The original LLaMA tokenizer, however, does not handle Chinese well, so this post walks through how to expand the vocab with new tokens so that Chinese text can be tokenized properly.
The mainstream approach at the moment is to train a Chinese vocabulary with sentencepiece. Installation is simple: pip install sentencepiece. Next we prepare a corpus; here we use the novel 《斗破苍穹》 (Battle Through the Heavens).
with open("data.txt", "r", encoding="utf-8") as fp:
data = fp.read().strip().split("\n")
sentences = []
for d in data:
d = d.strip()
if "===" in d or len(d) == 0 or d == "《斗破苍穹》来自:":
continue
sentences.append(d)
with open("corpus.txt", "w", encoding="utf-8") as fp:
fp.write("\n".join(sentences))
Now start training. A few parameters deserve attention: model_type (the segmentation algorithm) is set to bpe, split_digits is True, and byte_fallback is True, all of which keeps the new tokenizer consistent with LLaMA; max_sentence_length should be set to a fairly large value. More detailed parameter explanations can be found at https://zhuanlan.zhihu.com/p/655281268 and https://zhuanlan.zhihu.com/p/639144223.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input='corpus.txt',
    input_format='text',
    model_prefix='tokenizer',      # output files: tokenizer.model / tokenizer.vocab
    vocab_size=10000,
    character_coverage=0.9995,
    model_type="bpe",              # BPE segmentation algorithm
    num_threads=32,
    split_digits=True,             # split digits into single characters, like LLaMA
    byte_fallback=True,            # fall back to bytes for unseen characters, like LLaMA
    max_sentence_length=24000,     # raise the per-line length limit; 24000 is just an example value
)
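Training writes tokenizer.model and tokenizer.vocab (following model_prefix above). As a quick check, a minimal sketch like the following, assuming training completed with those file names, loads the new model and tokenizes a Chinese sentence:

import sentencepiece as spm

# Load the freshly trained model.
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

text = "大语言模型对中文的支持不太友好。"
print(sp.encode(text, out_type=str))  # subword pieces
print(sp.encode(text, out_type=int))  # token ids
print(sp.get_piece_size())            # should equal vocab_size, i.e. 10000

If the pieces look like whole Chinese words or common character pairs rather than byte fallbacks, the BPE vocabulary has picked up Chinese units from the corpus as intended.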