- This article is adapted from:
https://python.langchain.com.cn/docs/modules/data_connection/retrievers/ - LangChain - Retrievers
https://python.langchain.com/docs/integrations/retrievers/
- Official docs: https://python.langchain.com/docs/modules/data_connection/
- API: https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.retrievers
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store: a retriever does not need to be able to store documents, only to return (or retrieve) them.
Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
1. Getting Started
BaseRetriever
The BaseRetriever class in LangChain looks like this:
from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document

class BaseRetriever(ABC):
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[Document]:
        """Get texts relevant for a query.

        Args:
            query: string to find relevant texts for

        Returns:
            List of relevant documents
        """
That's it! The get_relevant_documents method can be implemented however you see fit.
Of course, we also help construct retrievers that we think are useful.
The main type of retriever we focus on is the vector store retriever, and the remainder of this guide focuses on that type.
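To make the interface concrete, here is a minimal sketch of a custom retriever built directly on the BaseRetriever ABC shown above. The KeywordRetriever class and its naive substring matching are purely illustrative, not part of LangChain:

class KeywordRetriever(BaseRetriever):
    """Toy retriever: returns every document whose text contains the query."""

    def __init__(self, docs: List[Document]):
        self.docs = docs

    def get_relevant_documents(self, query: str) -> List[Document]:
        # No vector store involved: any lookup strategy satisfies the interface.
        return [d for d in self.docs if query.lower() in d.page_content.lower()]

docs = [Document(page_content="LangChain retrievers return documents"),
        Document(page_content="completely unrelated text")]
KeywordRetriever(docs).get_relevant_documents("retriever")
# -> [Document(page_content='LangChain retrievers return documents', metadata={})]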
Vector Store Retrieval
To understand what a vector store retriever is, it's important to understand what a vector store is, so let's look at that.
By default, LangChain uses Chroma as the vector store to index and search embeddings. To walk through this tutorial, we first need to install chromadb.
pip install chromadb
This example shows question answering over documents.
We have chosen this as the getting-started example because it nicely combines a lot of different elements (text splitters, embeddings, vector stores) and shows how to use them in a chain.
Question answering over documents consists of four steps:
- Create an index
- Create a retriever from the index
- Create a question-answering chain
- Ask questions!
Each of these steps has multiple sub-steps and potential configurations. In this tutorial we focus primarily on (1).
We will start by showing the one-line way of doing it, then break down what is actually going on.
Preparation: you can download the state_of_the_union.txt file here.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')
One-Line Index Creation
To get started as quickly as possible, we can use the VectorstoreIndexCreator.
This requires the OPENAI_API_KEY environment variable to be set.
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
{'question': 'What did the president say about Ketanji Brown Jackson',
'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
'sources': '../state_of_the_union.txt'}
What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides these handy query and query_with_sources methods.
If we just wanted to access the vector store directly, we could also do that.
index.vectorstore
# -> <langchain.vectorstores.chroma.Chroma at 0x119aa5940>
The VectorstoreRetriever can be accessed as follows:
index.vectorstore.as_retriever()
# -> VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
Walkthrough
Okay, so what is actually going on? How is this index created?
A lot of the magic is hidden in this VectorstoreIndexCreator. What is it doing?
After the documents are loaded, three main steps follow:
- Split the documents into chunks
- Create embeddings for each document
- Store the documents and embeddings in a vector store
Let's walk through this in code.
documents = loader.load()
# Split the documents into chunks
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
# Select which embeddings to use
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
Now we create the vector store to use as the index.
from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
That's it for creating the index. We then expose this index in a retriever interface.
retriever = db.as_retriever()
Then, as before, we create a chain and use it to answer questions!
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."
VectorstoreIndexCreator is just a wrapper around all this logic.
It is configurable in the text splitter, embeddings, and vector store it uses.
For example, you can configure it as follows:
index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator.
While we think it is important to have a simple way to create indexes, we also think it is important to understand what is happening behind the scenes.
2. Contextual Compression (contextual_compression)
One challenge with retrieval is that, when you ingest data into the system, you usually don't know the specific queries your document storage system will face.
This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text.
Passing the full document through your application can lead to more expensive LLM calls and poorer responses.
Contextual compression is meant to fix this.
The idea is simple: instead of immediately returning retrieved documents as-is, compress them using the context of the given query, so that only the relevant information is returned.
"Compression" here refers both to compressing the contents of an individual document and to filtering out documents wholesale.
To use the contextual compression retriever, you'll need:
- a base retriever
- a document compressor
The contextual compression retriever passes the query to the base retriever, takes the initial documents, and passes them through the document compressor.
The document compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.
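As a concrete illustration of the compressor contract, here is a minimal sketch of a custom compressor. The TruncateCompressor class is hypothetical; it assumes that subclassing BaseDocumentCompressor and overriding compress_documents (plus its async variant, which is abstract in some versions) is sufficient, and the exact signatures may vary across LangChain versions:

from typing import Any, Sequence
from langchain.retrievers.document_compressors.base import BaseDocumentCompressor
from langchain.schema import Document

class TruncateCompressor(BaseDocumentCompressor):
    """Toy compressor: keep only the first max_chars characters of each document."""
    max_chars: int = 200

    def compress_documents(self, documents: Sequence[Document], query: str, callbacks: Any = None) -> Sequence[Document]:
        return [
            Document(page_content=doc.page_content[: self.max_chars], metadata=doc.metadata)
            for doc in documents
        ]

    async def acompress_documents(self, documents: Sequence[Document], query: str, callbacks: Any = None) -> Sequence[Document]:
        # Delegate the async path to the sync implementation.
        return self.compress_documents(documents, query, callbacks)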
1) Getting started
A helper function for printing documents:
def pretty_print_docs(docs):
    print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
2) Using a plain vector store retriever
Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that, given an example question, our retriever returns one or two relevant documents and a few irrelevant ones, and even the relevant documents contain a lot of irrelevant information.
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
...
Document 4:
Tonight,
... best-kept secret: community colleges.
3) Adding contextual compression with an LLMChainExtractor
Now let's wrap our base retriever with a ContextualCompressionRetriever.
We'll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content relevant to the query.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
...
Document 2:
"A former top litigator in private practice.
...
4) More built-in compressors: filters
LLMChainFilter (more robust)
The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which to return, without manipulating the document contents.
from langchain.retrievers.document_compressors import LLMChainFilter
_filter = LLMChainFilter.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
...
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
EmbeddingsFilter (cheaper)
Making an extra LLM call for each retrieved document is expensive and slow.
The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and the query and returning only those documents whose embeddings are sufficiently similar to the query.
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
embeddings = OpenAIEmbeddings()
embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act.
...
Document 3:
...
First, beat the opioid epidemic.
5) Stringing compressors and document transformers together with a DocumentCompressorPipeline
Using a DocumentCompressorPipeline we can also easily combine multiple compressors in sequence.
Along with compressors, we can add BaseDocumentTransformers to our pipeline; these don't perform any contextual compression, they simply perform some transformation on a set of documents.
For example, a TextSplitter can be used as a document transformer to split documents into smaller pieces, and an EmbeddingsRedundantFilter can filter out redundant documents based on embedding similarity between documents.
Below we create a compressor pipeline by first splitting documents into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
pipeline_compressor = DocumentCompressorPipeline(
    transformers=[splitter, redundant_filter, relevant_filter]
)
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
...
Document 3:
A former top litigator in private practice. ... A consensus builder
3. Self-Query (self_query)
A self-querying retriever is one that, as the name suggests, has the ability to query itself.
Specifically, given any natural-language query, the retriever uses a query-constructing LLM chain to write a structured query, then applies that structured query to its underlying VectorStore.
This allows the retriever not only to use the user's query for semantic similarity comparison against the contents of stored documents, but also to extract filters over the metadata of stored documents from the user's query and execute those filters.
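To make "structured query" concrete, here is a hand-built sketch of the kind of object the query-constructor chain produces for "What's a highly rated (above 8.5) science fiction film?", using LangChain's query-constructor IR (langchain.chains.query_constructor.ir); the actual chain output may differ:

from langchain.chains.query_constructor.ir import (
    Comparator, Comparison, Operation, Operator, StructuredQuery,
)

structured_query = StructuredQuery(
    query=" ",  # no semantic component: the request is pure metadata filtering
    filter=Operation(
        operator=Operator.AND,
        arguments=[
            Comparison(comparator=Comparator.EQ, attribute="genre", value="science fiction"),
            Comparison(comparator=Comparator.GT, attribute="rating", value=8.5),
        ],
    ),
)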
Getting started (Pinecone)
In this example we'll use a Pinecone vector store.
First we need to create a Pinecone VectorStore and seed it with some data. We've created a small demo set of documents containing summaries of movies.
To use Pinecone, you need to install the pinecone package and you need an API key and an environment; see the installation instructions.
Note: the self-query retriever requires the lark package to be installed.
! pip install lark pinecone-client
import os
import pinecone
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"])
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
embeddings = OpenAIEmbeddings()
# create new index
pinecone.create_index("langchain-self-retriever-demo", dimension=1536)
docs = [
    Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]}),
    ...
    Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "thriller"]})
]
vectorstore = Pinecone.from_documents(
docs, embeddings, index_name="langchain-self-retriever-demo"
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we need to provide, up front, information about the metadata fields our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating",
        description="A 1-10 rating for the movie",
        type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    verbose=True
)
Testing it out
And now we can actually try using our retriever!
1) This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}),
...
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]
2) This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
3) This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]
4) This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
5) This example specifies a query and a composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})]
Filter k
We can also use the self-query retriever to specify k, the number of documents to fetch.
We do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs")
4. Time-Weighted Vector Store Retriever (time_weighted_vectorstore)
This retriever uses a combination of semantic similarity and a time decay.
The scoring algorithm is:
semantic_similarity + (1.0 - decay_rate) ^ hours_passed
Notably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain "fresh".
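A quick worked example of the scoring formula (the numbers are made up for illustration):

# With decay_rate = 0.01, a document last accessed 24 hours ago whose raw
# semantic similarity to the query is 0.8 scores:
decay_rate = 0.01
hours_passed = 24
semantic_similarity = 0.8
score = semantic_similarity + (1.0 - decay_rate) ** hours_passed
print(round(score, 3))  # 0.8 + 0.99**24 ≈ 0.8 + 0.786 = 1.586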
import faiss
from datetime import datetime, timedelta
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS
Low Decay Rate
A low decay rate (here, to be extreme, we set it close to 0) means memories will be "remembered" for longer.
A decay rate of 0 means memories are never forgotten, making this retriever equivalent to a vector lookup.
embeddings_model = OpenAIEmbeddings()
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(
    embeddings_model.embed_query,
    index,
    InMemoryDocstore({}),
    {}
)
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore,
    decay_rate=.0000000000000000000000001,
    k=1
)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([
    Document(
        page_content="hello world",
        metadata={"last_accessed_at": yesterday}
    )
])
retriever.add_documents([Document(page_content="hello foo")])
# -> ['d7f85756-2371-4bdf-9140-052780a0f9b3']
# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enough
retriever.get_relevant_documents("hello world")
[Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 678341), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]
High Decay Rate
With a high decay rate (e.g., several 9's), the recency score quickly goes to 0! If you set it all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup.
embeddings_model = OpenAIEmbeddings()
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])
retriever.add_documents([Document(page_content="hello foo")])
# -> ['40011466-5bbe-4101-bfd1-e22e7f505de2']
# "Hello Foo" is returned first because "hello world" is mostly forgotten
retriever.get_relevant_documents("hello world")
[Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]
Virtual Time
Using some utils in LangChain, you can mock out the time component.
from langchain.utils import mock_now
import datetime
# Notice the last access time is that date time
with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):
    print(retriever.get_relevant_documents("hello world"))
[Document(page_content='hello world', metadata={'last_accessed_at': MockDateTime(2011, 2, 3, 10, 11), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]
5. Vector-Store-Backed Retriever (vectorstore)
A vector store retriever is a retriever that uses a vector store to retrieve documents.
It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
It uses the search methods implemented by the vector store, such as similarity search and MMR, to query the texts in the vector store.
Once you construct a vector store, it's very easy to construct a retriever. Let's walk through an example.
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(texts, embeddings)
Exiting: Cleaning up .chroma directory
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Maximum Marginal Relevance Retrieval
By default, the vector store retriever uses similarity search. If the underlying vector store supports maximum marginal relevance search, you can specify that search type.
retriever = db.as_retriever(search_type="mmr")
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Similarity Score Threshold Retrieval
You can also specify a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold.
retriever = db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": .5}
)
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
Specifying top k
You can also specify search parameters, such as k, to use when doing retrieval.
retriever = db.as_retriever(search_kwargs={"k": 1})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
len(docs) # 1
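The search type and search parameters can also be combined. For example, MMR with explicit parameters; fetch_k (the number of candidates considered before the diversity re-ranking) is assumed here to be supported by the underlying store, as it is for FAISS:

retriever = db.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "fetch_k": 10}  # return 2 docs chosen from 10 candidates
)
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")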
6. Integrations
For more, see LangChain - Retrievers:
https://python.langchain.com/docs/integrations/retrievers/
- Arxiv
- AWS Kendra
- Azure Cognitive Search
- ChatGPT Plugin
- Cohere Reranker
- Databerry
- DocArray Retriever
- ElasticSearch BM25
- kNN
- LOTR (Merger Retriever)
- Metal
- Pinecone Hybrid Search
- PubMed
- SVM
- TF-IDF
- Vespa
- Weaviate Hybrid Search
- Wikipedia
- Zep
7. API
The Retriever class returns Documents given a text query.
It is more general than a vector store: a retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
Classes

| Class | Description |
|---|---|
| retrievers.contextual_compression.ContextualCompressionRetriever | Retriever that wraps a base retriever and compresses the results. |
| retrievers.document_compressors.base.DocumentCompressorPipeline | Document compressor that uses a pipeline of Transformers. |
| retrievers.document_compressors.chain_extract.LLMChainExtractor | Document compressor that uses an LLM chain to extract the relevant parts of documents. |
| retrievers.document_compressors.chain_extract.NoOutputParser | Parse outputs that could return a null string of some sort. |
| retrievers.document_compressors.chain_filter.LLMChainFilter | Filter that drops documents that aren't relevant to the query. |
| retrievers.document_compressors.cohere_rerank.CohereRerank | [Deprecated] Document compressor that uses Cohere Rerank API. |
| retrievers.document_compressors.cross_encoder_rerank.CrossEncoderReranker | Document compressor that uses CrossEncoder for reranking. |
| retrievers.document_compressors.embeddings_filter.EmbeddingsFilter | Document compressor that uses embeddings to drop documents unrelated to the query. |
| retrievers.document_compressors.flashrank_rerank.FlashrankRerank | Document compressor using Flashrank interface. |
| retrievers.ensemble.EnsembleRetriever | Retriever that ensembles the multiple retrievers. |
| retrievers.merger_retriever.MergerRetriever | Retriever that merges the results of multiple retrievers. |
| retrievers.multi_query.LineListOutputParser | Output parser for a list of lines. |
| retrievers.multi_query.MultiQueryRetriever | Given a query, use an LLM to write a set of queries. |
| retrievers.multi_vector.MultiVectorRetriever | Retrieve from a set of multiple embeddings for the same document. |
| retrievers.multi_vector.SearchType (value) | Enumerator of the types of search to perform. |
| retrievers.parent_document_retriever.ParentDocumentRetriever | Retrieve small chunks then retrieve their parent documents. |
| retrievers.re_phraser.RePhraseQueryRetriever | Given a query, use an LLM to re-phrase it. |
| retrievers.self_query.astradb.AstraDBTranslator () | Translate AstraDB internal query language elements to valid filters. |
| retrievers.self_query.base.SelfQueryRetriever | Retriever that uses a vector store and an LLM to generate the vector store queries. |
| retrievers.self_query.chroma.ChromaTranslator () | Translate Chroma internal query language elements to valid filters. |
| retrievers.self_query.dashvector.DashvectorTranslator () | Logic for converting internal query language elements to valid filters. |
| retrievers.self_query.deeplake.DeepLakeTranslator () | Translate DeepLake internal query language elements to valid filters. |
| retrievers.self_query.dingo.DingoDBTranslator () | Translate DingoDB internal query language elements to valid filters. |
| retrievers.self_query.elasticsearch.ElasticsearchTranslator () | Translate Elasticsearch internal query language elements to valid filters. |
| retrievers.self_query.milvus.MilvusTranslator () | Translate Milvus internal query language elements to valid filters. |
| retrievers.self_query.mongodb_atlas.MongoDBAtlasTranslator () | Translate Mongo internal query language elements to valid filters. |
| retrievers.self_query.myscale.MyScaleTranslator ([…]) | Translate MyScale internal query language elements to valid filters. |
| retrievers.self_query.opensearch.OpenSearchTranslator () | Translate OpenSearch internal query domain-specific language elements to valid filters. |
| retrievers.self_query.pgvector.PGVectorTranslator () | Translate PGVector internal query language elements to valid filters. |
| retrievers.self_query.pinecone.PineconeTranslator () | Translate Pinecone internal query language elements to valid filters. |
| retrievers.self_query.qdrant.QdrantTranslator (…) | Translate Qdrant internal query language elements to valid filters. |
| retrievers.self_query.redis.RedisTranslator (schema) | Visitor for translating structured queries to Redis filter expressions. |
| retrievers.self_query.supabase.SupabaseVectorTranslator () | Translate Langchain filters to Supabase PostgREST filters. |
| retrievers.self_query.timescalevector.TimescaleVectorTranslator () | Translate the internal query language elements to valid filters. |
| retrievers.self_query.vectara.VectaraTranslator () | Translate Vectara internal query language elements to valid filters. |
| retrievers.self_query.weaviate.WeaviateTranslator () | Translate Weaviate internal query language elements to valid filters. |
| retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever | Retriever that combines embedding similarity with recency in retrieving values. |
| retrievers.web_research.QuestionListOutputParser | Output parser for a list of numbered questions. |
| retrievers.web_research.SearchQueries | Search queries to research for the user's goal. |
| retrievers.web_research.WebResearchRetriever | Google Search API retriever. |
Functions

| Function | Description |
|---|---|
| retrievers.document_compressors.chain_extract.default_get_input (…) | Return the compression chain input. |
| retrievers.document_compressors.chain_filter.default_get_input (…) | Return the compression chain input. |
| retrievers.ensemble.unique_by_key (iterable, key) | |
| retrievers.self_query.deeplake.can_cast_to_float (string) | Check if a string can be cast to a float. |
| retrievers.self_query.milvus.process_value (…) | Convert a value to a string and add double quotes if it is a string. |
| retrievers.self_query.vectara.process_value (value) | Convert a value to a string and add single quotes if it is a string. |
2024-04-03 (Wed), sunny
2024-04-08 (Mon), revised; light rain