In this article, we'll look at how to use LlamaIndex to parse documents and query them. LlamaIndex offers powerful functionality that you can customize to fit your needs. We'll walk through the most common patterns: parsing documents into smaller chunks, using different vector stores, retrieving more context, swapping in a different LLM, choosing a response mode, streaming the response, and building a chatbot.
Installation and Setup
First, make sure you've installed LlamaIndex and completed the starter tutorial. If you run into unfamiliar terms, check the high-level concepts section. The basic example looks like this:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load every file under ./data into Document objects
documents = SimpleDirectoryReader("data").load_data()
# Build an in-memory vector index over the documents
index = VectorStoreIndex.from_documents(documents)
# The query engine retrieves relevant chunks and synthesizes an answer
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
Parsing Documents into Smaller Chunks
Smaller chunks make retrieval more precise, at the cost of less context per chunk. Global setting:
from llama_index.core import Settings

# Applies to every index built after this point
Settings.chunk_size = 512
Local setting:
from llama_index.core.node_parser import SentenceSplitter

# Override the chunking for this index only
index = VectorStoreIndex.from_documents(
    documents, transformations=[SentenceSplitter(chunk_size=512)]
)
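To see what the splitter actually produces, you can run it over the documents directly and inspect the resulting nodes. A minimal sketch using the same SentenceSplitter as above (chunk_overlap is an optional parameter controlling how much text adjacent chunks share):

from llama_index.core.node_parser import SentenceSplitter

splitter = SentenceSplitter(chunk_size=512, chunk_overlap=50)
nodes = splitter.get_nodes_from_documents(documents)
print(f"{len(documents)} documents -> {len(nodes)} nodes")
# Peek at the start of the first chunk
print(nodes[0].get_text()[:200])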
Using a Different Vector Store
First, install the vector store you want to use. For example, to use Chroma as the vector store, install it with pip:
pip install llama-index-vector-stores-chroma
Then use it in your code:
import chromadb
from llama_index.core import StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

chroma_client = chromadb.PersistentClient()
# Note: create_collection raises if "quickstart" already exists;
# get_or_create_collection is safer when re-running the script
chroma_collection = chroma_client.create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
# Vectors are written to Chroma instead of held in memory
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
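Because Chroma persists vectors to disk, you don't have to re-index on every run. Here's a minimal sketch of reconnecting to the collection created above and rebuilding the index from the stored vectors:

import chromadb
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Reopen the persisted collection instead of re-indexing the documents
chroma_client = chromadb.PersistentClient()
chroma_collection = chroma_client.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
index = VectorStoreIndex.from_vector_store(vector_store)
query_engine = index.as_query_engine()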
Retrieving More Context
By default, the query engine only retrieves the top few most similar chunks. Raising similarity_top_k gives the LLM more context to work with:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
# Retrieve the five most similar chunks instead of the default
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What did the author do growing up?")
print(response)
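To verify what was retrieved, inspect response.source_nodes, which holds the retrieved chunks along with their similarity scores:

# Each entry is a retrieved chunk plus its similarity score
for source in response.source_nodes:
    print(f"score={source.score:.3f}  text={source.node.get_text()[:80]!r}")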
Using a Different LLM
Global setting:
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# All subsequent queries use a local Mistral model served by Ollama
Settings.llm = Ollama(model="mistral", request_timeout=60.0)
Local setting:
query_engine = index.as_query_engine(llm=Ollama(model="mistral", request_timeout=60.0))
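Note that Settings.llm only swaps the LLM; the embedding model is configured separately (the default is an OpenAI embedding model). For a fully local setup, you can swap it too. A sketch assuming the llama-index-embeddings-huggingface package is installed:

from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Use a local HuggingFace model for embeddings as well
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")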
Using a Different Response Mode
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
# tree_summarize builds a summary tree over the retrieved chunks,
# which works well for summarization-style questions
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("What did the author do growing up?")
print(response)
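Other built-in modes include refine and compact (typically the default); switching is just a matter of the response_mode argument. For example:

# "refine" walks the retrieved chunks one at a time, refining the answer as it goes
query_engine = index.as_query_engine(response_mode="refine")
response = query_engine.query("What did the author do growing up?")
print(response)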
Streaming the Response
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("What did the author do growing up?")
# Print tokens as they arrive instead of waiting for the full answer
response.print_response_stream()
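If you need the tokens yourself, e.g. to forward them to a web client, you can consume the underlying generator instead of calling print_response_stream(). A minimal sketch:

# response_gen yields text deltas as the LLM produces them
for token in response.response_gen:
    print(token, end="", flush=True)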
Building a Chatbot
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
# A chat engine keeps conversation history, so follow-ups have context
chat_engine = index.as_chat_engine()
response = chat_engine.chat("What did the author do growing up?")
print(response)
# The second question builds on the answer to the first
response = chat_engine.chat("Oh interesting, tell me more.")
print(response)
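as_chat_engine also accepts a chat_mode argument controlling how history is combined with retrieval. One common choice is condense_question, which rewrites each follow-up into a standalone query before retrieving:

# Rewrite follow-ups into standalone questions before retrieval
chat_engine = index.as_chat_engine(chat_mode="condense_question")
response = chat_engine.chat("Oh interesting, tell me more.")
print(response)
# Clear the conversation history when starting a new session
chat_engine.reset()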
Errors You May Encounter
- Network timeouts: API requests may time out; raising the client timeout usually fixes this (see the sketch after this list).
- Vector store installation failures: make sure the required vector store package is installed, e.g. pip install llama-index-vector-stores-chroma.
- Document loading failures: check that the document path and file formats are correct.
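For the timeout case, the fix is usually just a larger request_timeout on the LLM client, e.g. with Ollama as used above:

from llama_index.llms.ollama import Ollama

# Give slow local models more time before the request is aborted
llm = Ollama(model="mistral", request_timeout=120.0)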
If you found this article helpful, please give it a like and follow my blog. Thanks!