LangChain Qdrant knowledge base: three retrieval modes, with support for local persistence

LangChain

LangChain is a library that makes developing Large Language Model-based applications much easier. It unifies the interfaces to different libraries, including major embedding providers and Qdrant. Using LangChain, you can focus on the business value instead of writing boilerplate.

LangChain distributes the Qdrant integration as a partner package.

It can be installed with pip:

pip install langchain-qdrant

The integration supports searching for relevant documents using dense, sparse, and hybrid retrieval.

Qdrant acts as a vector index that can store embeddings together with the documents used to generate them. There are various ways to use it, but calling QdrantVectorStore.from_texts or QdrantVectorStore.from_documents is probably the most straightforward way to get started:

from langchain_qdrant import QdrantVectorStore
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
# Alternatively, embed locally with FastEmbed via langchain-community:
# from langchain_community.embeddings import FastEmbedEmbeddings
# embeddings = FastEmbedEmbeddings(
#     model_name="<model_name>", cache_dir="<cache_dir>", threads=<threads>
# )

# Hypothetical sample texts, just to make the call runnable.
texts = ["Qdrant is a vector database.", "LangChain simplifies LLM apps."]

doc_store = QdrantVectorStore.from_texts(
    texts,
    embeddings,
    url="<qdrant-url>",
    api_key="<qdrant-api-key>",
    collection_name="texts",
)

Using an existing collection

To get an instance of langchain_qdrant.QdrantVectorStore without loading any new documents or texts, you can use the QdrantVectorStore.from_existing_collection() method.

doc_store = QdrantVectorStore.from_existing_collection(
    embedding=embeddings,
    collection_name="my_documents",
    url="<qdrant-url>",
    api_key="<qdrant-api-key>",
)

Local mode

The Python client allows you to run the same code in local mode, without running a Qdrant server. That's great for testing things out and debugging, or if you plan to store only a small number of vectors. The embeddings can be kept fully in memory or persisted on disk.
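
The examples below pass a docs variable to QdrantVectorStore.from_documents; it is assumed to be a list of LangChain Document objects. A minimal, hypothetical stand-in could look like this:

from langchain_core.documents import Document

# Placeholder documents, just to make the snippets below runnable.
docs = [
    Document(page_content="Qdrant is a vector database.", metadata={"source": "intro"}),
    Document(page_content="LangChain integrates with Qdrant.", metadata={"source": "intro"}),
]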

In-memory

For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it is lost when the client is destroyed, usually at the end of your script or notebook.

qdrant = QdrantVectorStore.from_documents(
    docs,
    embeddings,
    location=":memory:",  # Local mode with in-memory storage only
    collection_name="my_documents",
)

On-disk storage

Local mode, without using the Qdrant server, can also store your vectors on disk so they persist between runs.

qdrant = QdrantVectorStore.from_documents(
    docs,
    embeddings,
    path="/tmp/local_qdrant",
    collection_name="my_documents",
)

On-premise server deployment

No matter whether you choose to launch Qdrant locally with a Docker container, or select a Kubernetes deployment with the official Helm chart, the way you connect to such an instance will be identical. You'll need to provide a URL pointing to the service.
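
For a quick local server, the official Docker image can be started as follows; 6333 is the HTTP port and 6334 the gRPC port used when prefer_grpc=True:

docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant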

url = "<---qdrant url here --->"
qdrant = QdrantVectorStore.from_documents(
    docs,
    embeddings,
    url=url,
    prefer_grpc=True,
    collection_name="my_documents",
)

Similarity search

QdrantVectorStore supports three modes of similarity search. They can be configured using the retrieval_mode parameter when setting up the class.

  • Dense Vector Search (default)
  • Sparse Vector Search
  • Hybrid Search

Dense Vector Search

To search with only dense vectors:

  • The retrieval_mode parameter should be set to RetrievalMode.DENSE (the default).
  • A dense embeddings object must be provided via the embedding parameter.

from langchain_qdrant import RetrievalMode

qdrant = QdrantVectorStore.from_documents(
    docs,
    embedding=embeddings,
    location=":memory:",
    collection_name="my_documents",
    retrieval_mode=RetrievalMode.DENSE,
)

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
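
If you also need the relevance scores, the store's similarity_search_with_score method returns (document, score) pairs; a minimal sketch:

# k limits the number of returned results.
results = qdrant.similarity_search_with_score(query, k=3)
for doc, score in results:
    print(f"{score:.3f}  {doc.page_content[:60]}")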

Sparse Vector Search

To search with only sparse vectors:

  • The retrieval_mode parameter should be set to RetrievalMode.SPARSE.
  • An implementation of the SparseEmbeddings interface, backed by any sparse embeddings provider, must be provided via the sparse_embedding parameter.

The langchain-qdrant package provides a FastEmbed based implementation out of the box.

To use it, install the FastEmbed package:
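
pip install fastembed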

from langchain_qdrant import FastEmbedSparse, RetrievalMode

sparse_embeddings = FastEmbedSparse(model_name="Qdrant/BM25")

qdrant = QdrantVectorStore.from_documents(
    docs,
    sparse_embedding=sparse_embeddings,
    location=":memory:",
    collection_name="my_documents",
    retrieval_mode=RetrievalMode.SPARSE,
)

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)

Hybrid Vector Search

To perform a hybrid search using dense and sparse vectors with score fusion:

  • The retrieval_mode parameter should be set to RetrievalMode.HYBRID.
  • A dense embeddings object must be provided via the embedding parameter.
  • An implementation of the SparseEmbeddings interface, backed by any sparse embeddings provider, must be provided via the sparse_embedding parameter.

from langchain_qdrant import FastEmbedSparse, RetrievalMode

sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")

qdrant = QdrantVectorStore.from_documents(
    docs,
    embedding=embeddings,
    sparse_embedding=sparse_embeddings,
    location=":memory:",
    collection_name="my_documents",
    retrieval_mode=RetrievalMode.HYBRID,
)

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)

Note that if you've added documents in HYBRID mode, you can switch to any retrieval mode when searching, since both the dense and sparse vectors are available in the collection.
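
As a minimal sketch of such a switch, assuming the "my_documents" collection was populated in HYBRID mode on a running Qdrant server (the URL is a placeholder), you can reconnect to it and query only the sparse vectors:

from langchain_qdrant import FastEmbedSparse, QdrantVectorStore, RetrievalMode

sparse_store = QdrantVectorStore.from_existing_collection(
    embedding=embeddings,
    sparse_embedding=FastEmbedSparse(model_name="Qdrant/bm25"),
    collection_name="my_documents",
    url="<qdrant-url>",
    retrieval_mode=RetrievalMode.SPARSE,  # query only the sparse vectors
)

found_docs = sparse_store.similarity_search(query)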
