Building an efficient data-retrieval system is a key task in today's AI landscape. This article shows how to build a custom reranker with LlamaIndex and Cohere to improve retrieval quality, walking through every step with example code.
Installing the Required Packages
First, install the necessary Python packages:
%pip install llama-index-postprocessor-cohere-rerank
%pip install llama-index-llms-openai
%pip install llama-index-finetuning
%pip install llama-index-embeddings-cohere
!pip install llama-index cohere pypdf
Initializing the API Keys
Next, set the OpenAI and Cohere API keys:
import os
openai_api_key = "YOUR_OPENAI_API_KEY"
cohere_api_key = "YOUR_COHERE_API_KEY"
os.environ["OPENAI_API_KEY"] = openai_api_key
os.environ["COHERE_API_KEY"] = cohere_api_key
Downloading the Data
We will use the Lyft 2021 and Uber 2021 10-K SEC filings for training and evaluation:
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf' -O 'data/10k/uber_2021.pdf'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/lyft_2021.pdf' -O 'data/10k/lyft_2021.pdf'
Loading the Data
Load the documents with SimpleDirectoryReader:
from llama_index.core import SimpleDirectoryReader
lyft_docs = SimpleDirectoryReader(input_files=["./data/10k/lyft_2021.pdf"]).load_data()
uber_docs = SimpleDirectoryReader(input_files=["./data/10k/uber_2021.pdf"]).load_data()
Creating Nodes
Split the documents into smaller chunks to create the training dataset:
from llama_index.core.node_parser import SimpleNodeParser
node_parser = SimpleNodeParser.from_defaults(chunk_size=400)
lyft_nodes = node_parser.get_nodes_from_documents(lyft_docs)
uber_nodes = node_parser.get_nodes_from_documents(uber_docs)
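To build intuition for what `chunk_size=400` does, here is a deliberately simplified sketch of fixed-size chunking. It counts whitespace-separated words rather than model tokens, and ignores the sentence-boundary handling the real LlamaIndex parser performs, so treat it as an illustration only:

```python
# Simplified illustration of fixed-size chunking (NOT the real LlamaIndex
# splitter, which counts model tokens and respects sentence boundaries).
def chunk_by_words(text: str, chunk_size: int = 400) -> list[str]:
    words = text.split()
    return [
        " ".join(words[i : i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]

doc = "word " * 1000  # a toy document of 1000 words
chunks = chunk_by_words(doc, chunk_size=400)
print(len(chunks))  # 3 chunks: 400 + 400 + 200 words
```

Each resulting chunk becomes one node, and each node later yields one training question.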
Generating Questions with GPT-4
Use OpenAI's GPT-4 model to generate a question from each node:
from llama_index.llms.openai import OpenAI
llm = OpenAI(temperature=0, model="gpt-4")
qa_generate_prompt_tmpl = """\
Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, generate only questions based on the below query.
You are a Professor. Your task is to setup {num_questions_per_chunk} questions for an upcoming quiz/examination. The questions should be diverse in nature across the document. The questions should not contain options, not start with Q1/ Q2. Restrict the questions to the context information provided.\
"""
from llama_index.core.evaluation import generate_question_context_pairs
qa_dataset_lyft_train = generate_question_context_pairs(
lyft_nodes[:256],
llm=llm,
num_questions_per_chunk=1,
qa_generate_prompt_tmpl=qa_generate_prompt_tmpl,
)
Creating the Training and Validation Datasets
Create training and validation datasets from the generated question–context pairs:
qa_dataset_lyft_train.save_json("lyft_train_dataset.json")
qa_dataset_lyft_val = generate_question_context_pairs(
lyft_nodes[257:321],
llm=llm,
num_questions_per_chunk=1,
qa_generate_prompt_tmpl=qa_generate_prompt_tmpl,
)
qa_dataset_lyft_val.save_json("lyft_val_dataset.json")
qa_dataset_uber_val = generate_question_context_pairs(
uber_nodes[:150],
llm=llm,
num_questions_per_chunk=1,
qa_generate_prompt_tmpl=qa_generate_prompt_tmpl,
)
qa_dataset_uber_val.save_json("uber_val_dataset.json")
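The saved JSON files follow LlamaIndex's EmbeddingQAFinetuneDataset layout: three maps tying generated questions to the chunks they came from. The sketch below builds a miniature example by hand (the ids and texts are invented for illustration):

```python
import json

# Hand-built miniature of the EmbeddingQAFinetuneDataset JSON layout.
# The ids and texts here are invented; real files hold GPT-4 questions.
dataset = {
    # query id -> generated question
    "queries": {"q1": "What was Lyft's revenue in 2021?"},
    # node id -> chunk text the question was generated from
    "corpus": {"n1": "Revenue was $3.2 billion in 2021 ..."},
    # query id -> list of node ids that answer it
    "relevant_docs": {"q1": ["n1"]},
}

serialized = json.dumps(dataset)
restored = json.loads(serialized)
print(restored["relevant_docs"]["q1"])  # ['n1']
```

These query-to-relevant-node mappings are exactly what the reranker fine-tuning step and the retrieval evaluation below consume.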
Creating Datasets With and Without Hard Negatives
We will create three variants of the dataset: without hard negatives, with randomly selected hard negatives, and with hard negatives selected by cosine similarity:
from llama_index.embeddings.cohere import CohereEmbedding
from llama_index.finetuning import generate_cohere_reranker_finetuning_dataset
embed_model = CohereEmbedding(
cohere_api_key=cohere_api_key,
model_name="embed-english-v3.0",
input_type="search_document",
)
generate_cohere_reranker_finetuning_dataset(
qa_dataset_lyft_train, finetune_dataset_file_name="train.jsonl"
)
generate_cohere_reranker_finetuning_dataset(
qa_dataset_lyft_val, finetune_dataset_file_name="val.jsonl"
)
generate_cohere_reranker_finetuning_dataset(
qa_dataset_lyft_train,
num_negatives=5,
hard_negatives_gen_method="random",
finetune_dataset_file_name="train_5_random.jsonl",
embed_model=embed_model,
)
generate_cohere_reranker_finetuning_dataset(
qa_dataset_lyft_val,
num_negatives=5,
hard_negatives_gen_method="random",
finetune_dataset_file_name="val_5_random.jsonl",
embed_model=embed_model,
)
generate_cohere_reranker_finetuning_dataset(
qa_dataset_lyft_train,
num_negatives=5,
hard_negatives_gen_method="cosine_similarity",
finetune_dataset_file_name="train_5_cosine_similarity.jsonl",
embed_model=embed_model,
)
generate_cohere_reranker_finetuning_dataset(
qa_dataset_lyft_val,
num_negatives=5,
hard_negatives_gen_method="cosine_similarity",
finetune_dataset_file_name="val_5_cosine_similarity.jsonl",
embed_model=embed_model,
)
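To see what the cosine_similarity strategy is doing, here is a minimal, self-contained sketch: for each query, the non-relevant passages are ranked by cosine similarity of their embeddings to the query, and the closest ones are kept as hard negatives. The two-dimensional vectors below are hand-made toys; the real pipeline uses CohereEmbedding vectors:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings: one query and three non-relevant candidate passages.
query_emb = [1.0, 0.0]
candidates = {
    "near_miss": [0.9, 0.4],   # similar but not relevant -> good hard negative
    "unrelated": [0.0, 1.0],   # easy negative
    "opposite": [-1.0, 0.1],
}

# Keep the top-k most query-similar non-relevant passages as hard negatives.
k = 2
hard_negatives = sorted(
    candidates, key=lambda cid: cosine(query_emb, candidates[cid]), reverse=True
)[:k]
print(hard_negatives)  # ['near_miss', 'unrelated']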
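```

Negatives that are semantically close to the query force the reranker to learn fine distinctions, which is why the cosine-similarity variant is usually the strongest of the three.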
Training the Custom Rerankers
Train a custom reranker on each of the generated training datasets:
from llama_index.finetuning import CohereRerankerFinetuneEngine
finetune_model_no_hard_negatives = CohereRerankerFinetuneEngine(
train_file_name="train.jsonl",
val_file_name="val.jsonl",
model_name="lyft_reranker_0_hard_negatives",
model_type="RERANK",
base_model="english",
)
finetune_model_no_hard_negatives.finetune()
finetune_model_random_hard_negatives = CohereRerankerFinetuneEngine(
train_file_name="train_5_random.jsonl",
val_file_name="val_5_random.jsonl",
model_name="lyft_reranker_5_random_hard_negatives",
model_type="RERANK",
base_model="english",
)
finetune_model_random_hard_negatives.finetune()
finetune_model_cosine_hard_negatives = CohereRerankerFinetuneEngine(
train_file_name="train_5_cosine_similarity.jsonl",
val_file_name="val_5_cosine_similarity.jsonl",
model_name="lyft_reranker_5_cosine_hard_negatives",
model_type="RERANK",
base_model="english",
)
finetune_model_cosine_hard_negatives.finetune()
Testing the Models
We will test on the Uber nodes and compare the different models:
from llama_index.core import VectorStoreIndex, QueryBundle
from llama_index.core.retrievers import BaseRetriever, VectorIndexRetriever
from llama_index.postprocessor.cohere_rerank import CohereRerank
from llama_index.core.schema import NodeWithScore
from llama_index.core.evaluation import RetrieverEvaluator
from typing import List, Optional, Union
import pandas as pd
index_embed_model = CohereEmbedding(
cohere_api_key=cohere_api_key,
model_name="embed-english-v3.0",
input_type="search_document",
)
query_embed_model = CohereEmbedding(
cohere_api_key=cohere_api_key,
model_name="embed-english-v3.0",
input_type="search_query",
)
vector_index = VectorStoreIndex(
uber_nodes[:150],
embed_model=index_embed_model,
)
vector_retriever = VectorIndexRetriever(
index=vector_index,
similarity_top_k=10,
embed_model=query_embed_model,
)
class CustomRetriever(BaseRetriever):
    """Retrieves from a vector index, then optionally reranks the results."""

    def __init__(
        self,
        vector_retriever: VectorIndexRetriever,
        reranker: Optional[CohereRerank] = None,
    ) -> None:
        self._vector_retriever = vector_retriever
        self._reranker = reranker
        super().__init__()

    def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
        retrieved_nodes = self._vector_retriever.retrieve(query_bundle)
        if self._reranker is not None:
            retrieved_nodes = self._reranker.postprocess_nodes(
                retrieved_nodes, query_bundle
            )
        else:
            retrieved_nodes = retrieved_nodes[:5]
        return retrieved_nodes

    async def _aretrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
        return self._retrieve(query_bundle)

    async def aretrieve(
        self, str_or_query_bundle: Union[str, QueryBundle]
    ) -> List[NodeWithScore]:
        if isinstance(str_or_query_bundle, str):
            str_or_query_bundle = QueryBundle(str_or_query_bundle)
        return self._retrieve(str_or_query_bundle)

rerank_no_hard_negatives = finetune_model_no_hard_negatives.get_finetuned_model(top_n=5)
rerank_random_hard_negatives = finetune_model_random_hard_negatives.get_finetuned_model(top_n=5)
rerank_cosine_hard_negatives = finetune_model_cosine_hard_negatives.get_finetuned_model(top_n=5)
qa_dataset = qa_dataset_uber_val

async def evaluate_retriever(reranker):
    retriever = CustomRetriever(vector_retriever=vector_retriever, reranker=reranker)
    evaluator = RetrieverEvaluator.from_metric_names(
        ["mrr", "hit_rate"], retriever=retriever
    )
    return await evaluator.aevaluate_dataset(qa_dataset)

no_hard_negatives_metrics = await evaluate_retriever(rerank_no_hard_negatives)
random_hard_negatives_metrics = await evaluate_retriever(rerank_random_hard_negatives)
cosine_hard_negatives_metrics = await evaluate_retriever(rerank_cosine_hard_negatives)
print(no_hard_negatives_metrics)
print(random_hard_negatives_metrics)
print(cosine_hard_negatives_metrics)
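The two metrics the evaluator reports, hit_rate and MRR, are simple to compute by hand. This sketch evaluates one invented query whose expected node appears at rank 2 of the retrieved list:

```python
def hit_rate(expected_ids: list[str], retrieved_ids: list[str]) -> float:
    # 1.0 if any expected node appears anywhere in the retrieved list.
    return 1.0 if any(eid in retrieved_ids for eid in expected_ids) else 0.0

def mrr(expected_ids: list[str], retrieved_ids: list[str]) -> float:
    # Reciprocal rank of the first retrieved node that is expected.
    for rank, rid in enumerate(retrieved_ids, start=1):
        if rid in expected_ids:
            return 1.0 / rank
    return 0.0

# Invented example: the right chunk is retrieved at position 2 of 5.
expected = ["n42"]
retrieved = ["n7", "n42", "n13", "n99", "n5"]
print(hit_rate(expected, retrieved))  # 1.0
print(mrr(expected, retrieved))       # 0.5
```

A good reranker moves relevant chunks toward rank 1, so its gains show up mainly in MRR.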
With the steps above, we can build and evaluate a custom reranker that improves retrieval accuracy and overall quality. We hope this tutorial is helpful for your projects.