GraphRAG Reproduction and Troubleshooting

1. First, download the GraphRAG source from GitHub: https://github.com/microsoft/graphrag
2. In the README, follow the [Read the docs](https://microsoft.github.io/graphrag) link, then click "Get Started".
3. Follow the official steps:
   ① `pip install graphrag`
   ② `mkdir -p ./ragtest/input`
   ③ `curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt > ./ragtest/input/book.txt`
   ④ `python -m graphrag.index --init --root ./ragtest`
   ⑤ `python -m graphrag.index --root ./ragtest`
4. Edit the .env file under ./ragtest (replace the placeholder with your own API key).
5. Edit settings.yaml under ./ragtest; my configuration is below.
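Once the configuration below is in place and the indexing run in step ⑤ completes, the same Get Started page shows how to query the index. For reference, the two query modes look like this (these commands need the API key configured and a finished index, so they won't run standalone; the sample questions follow the official docs):

```shell
# Global search: corpus-wide questions answered over community reports
python -m graphrag.query --root ./ragtest --method global "What are the top themes in this story?"

# Local search: questions about a specific entity and its neighborhood
python -m graphrag.query --root ./ragtest --method local "Who is Scrooge, and what are his main relationships?"
```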
---------------------------------------------------------------------------------------------------------------------------------
One more caveat: if you have already put your key in .env, then keep `api_key: ${GRAPHRAG_API_KEY}` exactly as written. The `$` syntax means the value is resolved from the `GRAPHRAG_API_KEY` environment variable, so it must stay in that form.
If you don't use the `$` syntax, you can write the key literally instead: `api_key: <your API key>`.

I used a GPT-3.5 API key, because GPT-4 is a bit expensive.
--------------------------------------------------------------------------------------------------------------------------------
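The `${GRAPHRAG_API_KEY}` reference is ordinary environment-variable substitution. GraphRAG's own config loader resolves it for you; this minimal Python sketch only illustrates the semantics (the key value is a made-up placeholder):

```python
import os
from string import Template

# Pretend .env has already been loaded into the environment (placeholder key):
os.environ["GRAPHRAG_API_KEY"] = "sk-demo-not-a-real-key"

line = "api_key: ${GRAPHRAG_API_KEY}"
resolved = Template(line).substitute(os.environ)
print(resolved)  # api_key: sk-demo-not-a-real-key
```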

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY} # note: this is the caveat mentioned above
  type: openai_chat # or azure_openai_chat
  model: gpt-3.5-turbo-16k
  model_supports_json: false # true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://10.211.10.238:31000/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  tokens_per_minute: 10_000 # set a leaky bucket throttle
  requests_per_minute: 20 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY} # note: this is the caveat mentioned above
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-ada-002
    api_base: http://10.211.10.238:31000/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    tokens_per_minute: 10_000 # set a leaky bucket throttle
    requests_per_minute: 20 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 1 #25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional
  


chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents
    
input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 0

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 0

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 5000 # 2000
  max_input_length: 15000 # 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # max_tokens: 12000

global_search:
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

Microsoft's open-source GraphRAG is an LLM-driven retrieval-augmented generation pipeline: instead of retrieving raw text chunks alone, it has an LLM build a knowledge graph from the input documents and then answers questions against that graph.

### How the indexing pipeline works
1. **Chunking**: input documents are split into overlapping text units (the `chunks` section above: size 300, overlap 100).
2. **Graph construction**: an LLM extracts entities and relationships (and optionally claims) from each chunk, driven by the prompts under `prompts/` (`entity_extraction.txt`, `claim_extraction.txt`).
3. **Summarization**: the per-chunk descriptions of each entity and relationship are merged and summarized (`summarize_descriptions.txt`).
4. **Community detection**: the graph is clustered into hierarchical communities (`cluster_graph`), and the LLM writes a report for each community (`community_report.txt`).
5. **Embeddings**: text units and descriptions are embedded (the `embeddings` section) for vector retrieval at query time.

### Querying
- **Local search** answers entity-centric questions by combining a matched entity's graph neighborhood with its related text units (tuned by the `local_search` section).
- **Global search** answers corpus-wide questions ("what are the main themes?") with a map-reduce pass over the community reports (tuned by the `global_search` section).

### Problem localization: why the config above deviates from the defaults
- **Rate limits**: a GPT-3.5 quota is easy to exhaust at the default concurrency, hence the conservative `tokens_per_minute: 10_000` and `requests_per_minute: 20` throttles.
- **JSON mode**: `gpt-3.5-turbo-16k` does not reliably support JSON output, hence `model_supports_json: false`.
- **Custom endpoint**: `api_base` points at an internal proxy (`http://10.211.10.238:31000/v1`) instead of the default OpenAI endpoint.
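In the real pipeline the graph comes from LLM prompt output, but the core idea of turning chunked text into weighted graph edges can be sketched with a toy co-occurrence counter (illustration only, not GraphRAG's actual code; the entity lists below are hand-made):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(chunk_entities):
    """Toy graph builder: weight an edge by how many chunks mention both entities."""
    edges = Counter()
    for entities in chunk_entities:
        for a, b in combinations(sorted(set(entities)), 2):
            edges[(a, b)] += 1
    return edges

edges = cooccurrence_edges([
    ["Scrooge", "Marley"],
    ["Scrooge", "Marley", "Bob Cratchit"],
])
print(edges[("Marley", "Scrooge")])  # 2: they co-occur in both chunks
```

GraphRAG instead asks the LLM for typed entities and described relationships, but the resulting artifact is the same shape: nodes plus weighted edges, ready for community detection.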