LangChain Study Notes 03: RAG Basics

This article shows how to use the Azure OpenAI API with RAG (Retrieval-Augmented Generation): DocArray handles document retrieval, and the retrieved context is used in a chat setting to answer user questions with real-time, context-grounded responses.

RAG Basics

# Requires:
# pip install langchain langchain-openai langchain-community docarray tiktoken

import os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())  # read the local .env file, which defines the API credentials

#############
api_key = os.environ['AZURE_OPENAI_API_KEY']
azure_endpoint = os.environ['AZURE_OPENAI_ENDPOINT']
api_version = os.environ['AZURE_OPENAI_API_VERSION']
model = "gpt-35-turbo"
deployment_name = "gpt-35-turbo"
############

from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings

## Retrieve documents relevant to the query and include them as part of the context.
vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=AzureOpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
## To sanity-check the retriever on its own (before building the chain):
# retriever.invoke("where did harrison work?")
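
The vector store embeds each text and ranks stored texts by similarity to the embedded query. A minimal, dependency-free sketch of that idea, using bag-of-words counts as a deliberately simplified stand-in for real embeddings (the `embed`, `cosine`, and `retrieve` helpers here are illustrative, not LangChain APIs):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a word-count vector (real stores use dense model embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, texts: list[str], k: int = 1) -> list[str]:
    """Rank stored texts by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(texts, key=lambda t: cosine(q, embed(t)), reverse=True)
    return ranked[:k]

docs = ["harrison worked at kensho", "bears like to eat honey"]
print(retrieve("where did harrison work?", docs))  # → ['harrison worked at kensho']
```

The real `DocArrayInMemorySearch` does the same ranking, but over dense vectors produced by `AzureOpenAIEmbeddings`.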

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = AzureChatOpenAI(
    model=model,
    deployment_name=deployment_name,
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)
output_parser = StrOutputParser()
## Create a RunnableParallel with two entries: "context" holds the documents
## fetched by the retriever, and "question" carries the user's original question,
## copied through unchanged via RunnablePassthrough.
setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser

print(chain.invoke("where did harrison work?"))
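
The `|` operator builds a left-to-right pipeline: each component's output becomes the next component's input. A toy sketch of that composition with a hypothetical `Step` wrapper (LangChain's `Runnable` implements the same idea via `__or__`), using fake stand-ins for the model and parser:

```python
class Step:
    """Minimal pipeline stage: supports | for chaining and .invoke() to run."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chain: feed this step's output into the next step.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

fill_prompt = Step(lambda q: f"Question: {q}")  # stands in for the prompt template
fake_model = Step(lambda p: p.upper())          # stands in for the chat model
parse = Step(lambda out: out.strip())           # stands in for StrOutputParser

toy_chain = fill_prompt | fake_model | parse
print(toy_chain.invoke("where did harrison work?"))
# → QUESTION: WHERE DID HARRISON WORK?
```

In the real chain, `setup_and_retrieval` produces the context/question dict, `prompt` fills the template, `model` generates the answer, and `output_parser` extracts the plain string.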