SentenceTransformers

SentenceTransformers is a Python library for sentence, text, and image embeddings. It can compute text embeddings for more than 100 languages, and these embeddings can easily be used for common tasks such as semantic textual similarity, semantic search, and paraphrase mining.

Paper: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

Official website: https://www.sbert.net/

Installation

pip install -U sentence-transformers

Obtaining Embeddings

from sentence_transformers import SentenceTransformer

# Download model
model = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# The sentences we'd like to encode
sentences = ['Python is an interpreted high-level general-purpose programming language.',
    'Python is dynamically-typed and garbage-collected.',
    'The quick brown fox jumps over the lazy dog.']

# Get embeddings of sentences
embeddings = model.encode(sentences)

# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding)
    print("")
# Sentence: Python is an interpreted high-level general-purpose programming language.
# Embedding: [-1.17965914e-01 -4.57159936e-01 -5.87313235e-01 -2.72477478e-01 ...
# ...

Semantic Textual Similarity

from sentence_transformers import SentenceTransformer, util

# Download model
model = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# The sentences we'd like to compute similarity about
sentences = ['Python is an interpreted high-level general-purpose programming language.',
    'Python is dynamically-typed and garbage-collected.',
    'The quick brown fox jumps over the lazy dog.']

# Get embeddings of sentences
embeddings = model.encode(sentences)

# Compute similarities
sim = util.cos_sim(embeddings[0], embeddings[1])
print("{0:.4f}".format(sim.tolist()[0][0])) # 0.6445
sim = util.cos_sim(embeddings[0], embeddings[2])
print("{0:.4f}".format(sim.tolist()[0][0])) # 0.0365

Semantic Search

Semantic search improves search accuracy by understanding the content of the search query rather than relying solely on lexical matching. This is done by comparing the similarity between embeddings.

In semantic search, every entry in the corpus is embedded into a vector space. At search time, the query is embedded into the same vector space, and the closest embeddings in the corpus are retrieved.

from sentence_transformers import SentenceTransformer, util

# Download model
model = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# Corpus of documents and their embeddings
corpus = ['Python is an interpreted high-level general-purpose programming language.',
    'Python is dynamically-typed and garbage-collected.',
    'The quick brown fox jumps over the lazy dog.']
corpus_embeddings = model.encode(corpus)

# Queries and their embeddings
queries = ["What is Python?", "What did the fox do?"]
queries_embeddings = model.encode(queries)

# Find the top-2 corpus documents matching each query
hits = util.semantic_search(queries_embeddings, corpus_embeddings, top_k=2)

# Print results of first query
print(f"Query: {queries[0]}")
for hit in hits[0]:
    print(corpus[hit['corpus_id']], "(Score: {:.4f})".format(hit['score']))
# Query: What is Python?
# Python is an interpreted high-level general-purpose programming language. (Score: 0.6759)
# Python is dynamically-typed and garbage-collected. (Score: 0.6219)

# Print results of second query
print(f"Query: {queries[1]}")
for hit in hits[1]:
    print(corpus[hit['corpus_id']], "(Score: {:.4f})".format(hit['score']))
# Query: What did the fox do?
# The quick brown fox jumps over the lazy dog. (Score: 0.3816)
# Python is dynamically-typed and garbage-collected. (Score: 0.0713)

To get the most out of semantic search, it is important to distinguish between symmetric and asymmetric semantic search, because this strongly influences which model should be used.
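As a brief sketch of the asymmetric case (a short query against longer passages), the snippet below assumes an MS MARCO-tuned checkpoint such as msmarco-distilbert-base-tas-b, whose training targets dot-product scoring rather than cosine similarity; the model name and passages are illustrative assumptions, not the library's prescribed choice.

from sentence_transformers import SentenceTransformer, util

# A model tuned for asymmetric search (assumed checkpoint name)
model = SentenceTransformer('msmarco-distilbert-base-tas-b')

query_embedding = model.encode("How many people live in London?")
passage_embeddings = model.encode([
    "Around 9 million people live in London.",
    "London is known for its financial district."])

# Models of this family are trained for dot-product scoring
hits = util.semantic_search(query_embedding, passage_embeddings,
                            top_k=2, score_function=util.dot_score)
print(hits[0])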

Paraphrase Mining

Paraphrase mining is the task of finding paraphrases, i.e. texts with very similar meaning, in a large collection of sentences.

from sentence_transformers import SentenceTransformer, util

# Download model
model = SentenceTransformer('all-MiniLM-L6-v2')

# List of sentences
sentences = ['The cat sits outside',
             'A man is playing guitar',
             'I love pasta',
             'The new movie is awesome',
             'The cat plays in the garden',
             'A woman watches TV',
             'The new movie is so great',
             'Do you like pizza?']

# Look for paraphrases
paraphrases = util.paraphrase_mining(model, sentences)

# Print paraphrases
print("Top 5 paraphrases")
for paraphrase in paraphrases[0:5]:
    score, i, j = paraphrase
    print("Score {:.4f} ---- {} ---- {}".format(score, sentences[i], sentences[j]))
# Top 5 paraphrases
# Score 0.8939 ---- The new movie is awesome ---- The new movie is so great
# Score 0.6788 ---- The cat sits outside ---- The cat plays in the garden
# Score 0.5096 ---- I love pasta ---- Do you like pizza?
# Score 0.2560 ---- I love pasta ---- The new movie is so great
# Score 0.2440 ---- I love pasta ---- The new movie is awesome

Image Search

SentenceTransformers provides models that embed images and text into the same vector space. With such a model you can find similar images and perform image search, i.e. search for images using a text query, and vice versa.

To perform image search, load a model such as CLIP and use its encode method to encode both images and text.

from sentence_transformers import SentenceTransformer, util
from PIL import Image

# Load CLIP model
model = SentenceTransformer('clip-ViT-B-32')

# Encode an image
img_emb = model.encode(Image.open('two_dogs_in_snow.jpg'))

# Encode text descriptions
text_emb = model.encode(['Two dogs in the snow', 'A cat on a table', 'A picture of London at night'])

# Compute cosine similarities 
cos_scores = util.cos_sim(img_emb, text_emb)
print(cos_scores)

The embeddings obtained from multimodal models also allow tasks such as image-to-image similarity.
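As a brief sketch, the same CLIP checkpoint can also score image-to-image similarity; the second image file name below is hypothetical and stands in for any local image.

from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer('clip-ViT-B-32')

# Encode two images into the shared vector space (second file name is a placeholder)
emb_1 = model.encode(Image.open('two_dogs_in_snow.jpg'))
emb_2 = model.encode(Image.open('three_dogs_on_grass.jpg'))

# Cosine similarity between the two image embeddings
print(util.cos_sim(emb_1, emb_2))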

Other Tasks

1. For complex search tasks such as question-answer retrieval, semantic search can be significantly improved with a Retrieve & Re-Rank pipeline (see the sketch after this list).
2. SentenceTransformers can be used in different ways to cluster small or large sets of sentences (a clustering sketch also follows below).
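Below is a minimal Retrieve & Re-Rank sketch: a bi-encoder retrieves candidate documents, and a cross-encoder re-scores each (query, candidate) pair. The cross-encoder checkpoint name is an assumption chosen for illustration, not the only option.

from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi_encoder = SentenceTransformer('paraphrase-MiniLM-L6-v2')
cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')  # assumed checkpoint

corpus = ['Python is an interpreted high-level general-purpose programming language.',
          'Python is dynamically-typed and garbage-collected.',
          'The quick brown fox jumps over the lazy dog.']
corpus_embeddings = bi_encoder.encode(corpus)

query = "What is Python?"

# Step 1: retrieve candidate documents with the bi-encoder
hits = util.semantic_search(bi_encoder.encode(query), corpus_embeddings, top_k=3)[0]

# Step 2: re-score each (query, candidate) pair with the cross-encoder
pairs = [(query, corpus[hit['corpus_id']]) for hit in hits]
rerank_scores = cross_encoder.predict(pairs)
for hit, score in sorted(zip(hits, rerank_scores), key=lambda x: x[1], reverse=True):
    print(corpus[hit['corpus_id']], "(Re-rank score: {:.4f})".format(score))

And a minimal clustering sketch, here using k-means from scikit-learn on the sentence embeddings (an assumption for illustration; the library's util module also provides community detection for larger collections):

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('all-MiniLM-L6-v2')

sentences = ['The cat sits outside', 'The cat plays in the garden',
             'The new movie is awesome', 'The new movie is so great']
embeddings = model.encode(sentences)

# Group the sentence embeddings into two clusters
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)
for sentence, label in zip(sentences, kmeans.labels_):
    print(label, sentence)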

Additional Notes

The deep-learning models that sentence-transformers uses to vectorize natural-language sentences are essentially all BERT-family models (built on the Transformer encoder architecture). Suppose the input is a sentence of L tokens: before the output head, the model produces an (L, D) matrix, one D-dimensional vector per token. What we want, however, is a vector representation of the whole sentence, not of each token. How do we get it? Simply mean-pool the (L, D) matrix over the L dimension and use the result as the sentence vector.

The sentence-transformers library is essentially a thin wrapper around the transformers library, so any language model on Hugging Face can be used by swapping in its name (though the results are not guaranteed to be good). The project itself does not offer many models that support Chinese; only models with "multilingual" in the name do. In practice, the shibing624/text2vec-base-chinese model on Hugging Face gives better results for Chinese.

The official documentation even provides code for obtaining sentence vectors without installing the sentence-transformers library. The approach is simply to mean-pool the sequence of token vectors:

from transformers import AutoTokenizer, AutoModel
import torch


def mean_pooling(model_output, attention_mask):
    # model_output[0] is the last hidden state of the transformer encoder, shape (B, L, D)
    token_embeddings = model_output[0]
    # input_mask_expanded marks which positions hold real tokens and which are padding,
    # so that padding vectors are not included in the average
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask

# Sentences to embed; multiple sentences can be encoded at once
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of string.',
             'The quick brown fox jumps over the lazy dog.']

# Load the model via the Hugging Face interface
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
# Tokenize the sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')

# Get the model output
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

Simply averaging all the token vectors to form the sentence vector loses a lot of information, so it is hard to retrieve a precise answer to a question purely through vector search. Increasing the embedding dimensionality can reduce the information loss introduced by mean pooling.
