It has been a while since I tried out anything new, so today let's play with GraphRAG.
As the name suggests, it is a retrieval-augmented generation method that builds a knowledge graph over your documents and retrieves against the graph.
1. Set up the environment
conda create -n GraphRAG python=3.11
conda activate GraphRAG
pip install graphrag
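Optionally, a quick sanity check that the install succeeded (plain Python, nothing assumed beyond the package name):

# Confirm graphrag is importable and see which version pip installed.
from importlib.metadata import version

import graphrag  # raises ImportError if the install is broken
print(version("graphrag"))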
2. Build the GraphRAG index
mkdir -p ./ragtest/input
# This book is a detailed guide to steering a language model such as ChatGPT toward high-quality output with prompt-engineering techniques.
curl https://raw.githubusercontent.com/win4r/mytest/main/book.txt > ./ragtest/input/book.txt
# Initialize the workspace
python3 -m graphrag.index --init --root ./ragtest
Initialization creates .env, settings.yaml, and a prompts/ directory under ./ragtest. Fill in .env first; you can put an OpenAI key there directly, like this:
GRAPHRAG_API_KEY=sk-ZZvxAMzrl.....................
or you can just set GRAPHRAG_API_KEY=ollama.
1) If you are using ollama:
Open settings.yaml and find the line
# api_base: https://<instance>.openai.azure.com
Uncomment it and change it to api_base: http://127.0.0.1:11434/v1
Also change model to llama3 (or whatever model your ollama instance serves). A scripted version of these edits appears after this list.
2) If you are using an OpenAI key, change the model instead:
model: gpt-3.5-turbo-1106
Around line 28 of settings.yaml there is also an embedding model; pick whichever one you want,
but the embeddings model has to be one of OpenAI's.
So if you chose an ollama model above, change api_base in the embeddings section to api_base: https://api.openai.com/v1,
otherwise this step inherits the ollama base_url configured above and the indexing run errors out.
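If you would rather script those edits than make them by hand, here is a minimal sketch using PyYAML (the llama3 model name and the local ollama port are just the values from the steps above; note that a yaml round-trip drops the comments in the file, so hand-editing is the gentler option):

# Point the chat llm at a local ollama server and keep embeddings on OpenAI.
import yaml

path = "./ragtest/settings.yaml"
with open(path) as f:
    cfg = yaml.safe_load(f)

cfg["llm"]["model"] = "llama3"                        # your ollama model
cfg["llm"]["api_base"] = "http://127.0.0.1:11434/v1"  # ollama's OpenAI-compatible endpoint
cfg["embeddings"]["llm"]["api_base"] = "https://api.openai.com/v1"

with open(path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)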
# Run the indexing pipeline
python3 -m graphrag.index --root ./ragtest
Build complete. For reference, here is the full settings.yaml this run used (ollama for the chat model, OpenAI for embeddings):
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://192.168.1.138:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-3-small
    api_base: https://api.openai.com/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 0

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 0

community_report:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # max_tokens: 12000

global_search:
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
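Once indexing finishes, the artifacts land under output/<timestamp>/artifacts as parquet files, per the storage section above. A quick way to peek at what was extracted, assuming pandas is installed and the default workflow's create_final_entities output name:

# Inspect the entity table produced by the indexing run.
import glob
import pandas as pd

latest = sorted(glob.glob("./ragtest/output/*/artifacts"))[-1]
entities = pd.read_parquet(f"{latest}/create_final_entities.parquet")
print(entities[["name", "type", "description"]].head())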
3. Global search and local search
Global search answers corpus-level questions by summarizing over the community reports, while local search fans out from the entities matched in the question to their neighboring relationships and source text units.
python3 -m graphrag.query \
--root ./ragtest \
--method global \
"show me some Prompts about Interpretable Soft Prompts."
python3 -m graphrag.query \
--root ./ragtest \
--method local \
"show me some Prompts about Knowledge Generation."
4. Visualization
#pip3 install chainlit
import chainlit as cl
import subprocess


@cl.on_chat_start
def start():
    cl.user_session.set("history", [])


@cl.on_message
async def main(message: cl.Message):
    history = cl.user_session.get("history")
    # Pull the text of the question out of the Message object
    query = message.content
    # Build the command; subprocess.run with an argument list bypasses the
    # shell, so the query can be passed as its own argument without quoting
    cmd = [
        "python3", "-m", "graphrag.query",
        "--root", "./ragtest",
        "--method", "local",
        query,
    ]
    # Run the command and capture its output
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        output = result.stdout
        # Keep only the part after "SUCCESS: Local Search Response:"
        response = output.split("SUCCESS: Local Search Response:", 1)[-1].strip()
        history.append((query, response))
        cl.user_session.set("history", history)
        await cl.Message(content=response).send()
    except subprocess.CalledProcessError as e:
        error_message = f"An error occurred: {e.stderr}"
        await cl.Message(content=error_message).send()
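Save the script as app.py and start it with chainlit run app.py -w; chainlit serves the chat UI on http://localhost:8000 by default, and each message you type is answered with a local GraphRAG search over the index built above.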