Using LangChain and a vector store with Qwen2-1.5B to learn industry-specific knowledge

We have a large amount of Word document data that a large language model needs to absorb so it can provide Q&A services in a specialized business domain. Below is the implementation of LangChain RAG vector search plus LLM question answering built on the Qwen2-1.5B model.

Step 1: Processing the raw data

The raw Word documents contain covers, tables of contents, overviews, chapters of domain knowledge, tables, notes, appendices and so on (fortunately no images). If this text is tokenized and vectorized directly, information gets lost because of missing context. For example, suppose a long passage describes in great detail how to make noodles but never states the key point of whether it is about Chinese noodles or about making Italian pasta. After vectorization, if a user asks how to make Italian pasta, the missing domain classification means retrieval fails or returns inaccurate results. So my first step is to convert the Word content into Excel, one paragraph per row, and then manually add a short note to each row describing which sub-domain of the business the text belongs to.

Here is the program that converts the Word document into Excel:

import pandas as pd
from docx import Document

# Parse the Word document and save its paragraphs to an Excel file, one paragraph per row.
# The prefix is prepended to every paragraph to restore the missing domain context.
def parse_word_and_save_to_excel(word_file_path, excel_file_path, prefix):
    doc = Document(word_file_path)
    paragraphs = [prefix + p.text.strip() for p in doc.paragraphs if p.text.strip()]

    df = pd.DataFrame(paragraphs, columns=["paragraph"])
    df.to_excel(excel_file_path, index=False, engine='openpyxl')

    print(f"Paragraph data saved to {excel_file_path}")

if __name__ == "__main__":
    word_file_path = "/data/xxx/doc/新加坡.docx"  # path to your Word document
    excel_file_path = "/data/xxx/新加坡段落数据.xlsx"  # output Excel file path
    prefix = "在新加坡,"  # prefix string ("In Singapore, ")
    parse_word_and_save_to_excel(word_file_path, excel_file_path, prefix)
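One caveat about the conversion above: doc.paragraphs does not include the contents of tables; I fixed table-derived content by hand in the Excel file. If you want to pull tables out programmatically as well, python-docx exposes them via doc.tables. A minimal sketch (my own addition, not part of the original workflow):

from docx import Document

# Collect every non-empty table row as one line of text, cells joined with " | ",
# with the same domain prefix prepended as for ordinary paragraphs
def table_rows_as_text(word_file_path, prefix=""):
    doc = Document(word_file_path)
    rows = []
    for table in doc.tables:
        for row in table.rows:
            cells = [c.text.strip() for c in row.cells if c.text.strip()]
            if cells:
                rows.append(prefix + " | ".join(cells))
    return rows

These rows can simply be appended to the paragraphs list before writing the Excel file.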

Once the Excel file exists, I go through it by hand to supplement, correct and polish the content: rows that came from tables or lists may need some manual fixing, each paragraph gets the domain-association note mentioned above, and useless junk rows such as "目录" (table of contents) and "附录" (appendix) are deleted; a small cleanup sketch follows this paragraph. The next step is to generate question-answer pairs. The approach here is to call a stronger model to do it; ZhipuAI happened to be giving away ten million tokens with API registration, so I used the chatglm4 model to generate the QA pairs. The generation code comes right after the cleanup sketch.
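A minimal, hedged sketch of the junk-row cleanup (the keyword list and the length heuristic are my assumptions; the real cleanup was mostly manual):

import pandas as pd

# rows that are nothing but boilerplate headings get dropped
JUNK_KEYWORDS = ["目录", "附录"]  # hypothetical list, extend as needed

def drop_junk_rows(excel_in, excel_out):
    df = pd.read_excel(excel_in, engine='openpyxl')
    # treat short rows containing a junk keyword (e.g. a bare "目录" heading) as junk
    is_junk = df['paragraph'].astype(str).apply(
        lambda s: len(s.strip()) < 20 and any(k in s for k in JUNK_KEYWORDS))
    df[~is_junk].to_excel(excel_out, index=False, engine='openpyxl')
    print(f"kept {int((~is_junk).sum())} of {len(df)} rows")

With the cleaned Excel in place, here is the QA-pair generation code: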

import json
import requests
import pandas as pd
from tqdm import tqdm

# HTTP proxy settings (only needed if you reach the API through a proxy)
proxies = {
    "http": "http://x.1.2.3:1234",
    "https": "http://x.1.2.3:1234"
}

# API key and URL
api_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # fill in your API key
api_url = "https://open.bigmodel.cn/api/paas/v4/chat/completions"

# Call the ZhipuAI API (glm-4) to generate QA pairs from one paragraph
def generate_qa_pairs(paragraph, model="glm-4"):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }
    # the prompt (in Chinese) asks the model to generate as many QA pairs as possible
    # from the paragraph and return them as a JSON array of {"q": ..., "a": ...} objects
    prompt = f"请基于以下段落生成尽可能多的问答对,并以 [{json.dumps({'q': '', 'a': ''})}] 结构化JSON格式返回:{paragraph}"
    data = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt}
        ],
        "stream": False
    }
    response = requests.post(api_url, headers=headers, json=data, proxies=proxies)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Save the results to a JSON file
def save_to_json(data, output_file):
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(data, f, ensure_ascii=False, indent=4)

# Generate QA pairs for a slice (start_percent..end_percent) of the paragraphs,
# so a large file can be processed in several runs
def process_paragraphs_from_excel(excel_file_path, output_file, start_percent, end_percent):
    df = pd.read_excel(excel_file_path, engine='openpyxl')
    paragraphs = df['paragraph'].tolist()

    total_paragraphs = len(paragraphs)
    start_index = int(total_paragraphs * start_percent / 100)
    end_index = int(total_paragraphs * end_percent / 100)
    selected_paragraphs = paragraphs[start_index:end_index]

    qa_pairs = []

    for paragraph in tqdm(selected_paragraphs, desc=f"Processing paragraphs {start_percent}% to {end_percent}%"):
        qa_content = None  # keep a reference so the error messages below never hit an unbound variable
        try:
            # strip Chinese and English double quotes; they tend to break the JSON the model returns
            cleaned_paragraph = paragraph.replace('“', '').replace('”', '').replace('"', '')
            qa_content = generate_qa_pairs(cleaned_paragraph)
            qa_pairs.append({
                "paragraph": paragraph,
                "qa_pairs": json.loads(qa_content)
            })
        except requests.exceptions.RequestException as e:
            print(f"Error processing paragraph: {paragraph}\nError: {e}")
        except json.JSONDecodeError as e:
            print(f"Error decoding JSON for paragraph: {paragraph}\nError: {e}\nserver response: {qa_content}")

    save_to_json(qa_pairs, output_file)
    print(f"QA pairs saved to {output_file}")

if __name__ == "__main__":
    excel_file_path = "/data/xxxxx/新加坡段落数据.xlsx"  # input Excel file path
    output_json_file = "/data/xxxx/新加坡_qa_pairs.json"  # output JSON file path
    start_percent = 0  # start percentage
    end_percent = 100  # end percentage
    process_paragraphs_from_excel(excel_file_path, output_json_file, start_percent, end_percent)
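One practical note the code above does not handle (this is my assumption, not something from the original post): chat models frequently wrap JSON in markdown code fences, which makes json.loads fail and lands you in the JSONDecodeError branch. A small sketch that strips fences before parsing:

import json
import re

def parse_qa_json(raw_text):
    # best-effort parse: remove a leading ```json / ``` fence and a trailing ``` fence
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw_text.strip())
    return json.loads(cleaned)

# hypothetical usage inside the loop above:
#   qa_pairs.append({"paragraph": paragraph, "qa_pairs": parse_qa_json(qa_content)})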

After the steps above, the knowledge in the Word document has been turned into QA pairs plus their context paragraphs, as JSON data in roughly this format:

[
    {
        "paragraph": "The context text for this group of QA pairs",
        "qa_pairs": [
            {
                "q": "Sample question 1?",
                "a": "Sample answer 1"
            },
            {
                "q": "Sample question 2?",
                "a": "Sample answer 2"
            }
        ]
    }
]
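Before moving on, a quick sanity check of the generated file (the path is the one used above); this just counts paragraphs and QA pairs and prints one sample:

import json

with open("/data/xxxx/新加坡_qa_pairs.json", encoding="utf-8") as f:
    data = json.load(f)

print(f"{len(data)} paragraphs, {sum(len(item['qa_pairs']) for item in data)} QA pairs")
print(data[0]["paragraph"][:50], data[0]["qa_pairs"][0])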

Step 2: LangChain + vector retrieval feeding the LLM

I use the Qwen2-1.5B model here because my lousy graphics card can only run a model this size. Besides, the vector search runs first and the large model only has to summarize the retrieved text, so it does not need much compute anyway.

Install the dependencies first:

pip install flask flask-cors torch numpy transformers langchain langchain-community langchain-huggingface

Here is the program code:

from flask import Flask, request, jsonify
from flask_cors import CORS
import json
import os
import re
import torch
import numpy as np
from transformers import AutoModelForCausalLM, AutoTokenizer
from abc import ABC
from langchain.llms.base import LLM
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain_community.vectorstores import FAISS
from langchain_community.document_loaders import TextLoader
from langchain_huggingface import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import RetrievalQA
from typing import Any, List, Mapping, Optional, Tuple


# Initialize the Flask app
app = Flask(__name__)
CORS(app)  # allow cross-origin requests

device = "cuda"  # device the model is loaded onto

# Load the model and tokenizer from the given path
model_path = "/data/model/Qwen2-1.5B-Instruct/"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Qwen wrapper class, inheriting from LangChain's LLM base class
class Qwen(LLM, ABC):
    max_token: int = 10000  # maximum number of tokens
    temperature: float = 0.01  # temperature, controls diversity of the generated text
    top_p = 0.9  # top-p sampling parameter
    history_len: int = 3  # number of dialogue turns kept as history

    def __init__(self):
        super().__init__()

    @property
    def _llm_type(self) -> str:
        return "Qwen"

    @property
    def _history_len(self) -> int:
        return self.history_len

    def set_history_len(self, history_len: int = 10) -> None:
        self.history_len = history_len

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        # Build the chat messages
        messages = [
            {"role": "system", "content": "你是一个非常专业的人工智能助手."},
            {"role": "user", "content": prompt}
        ]
        # Apply the chat template to produce the prompt text
        text = tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        # Convert the text into model inputs
        model_inputs = tokenizer([text], return_tensors="pt").to(device)
        # Generate the response
        generated_ids = model.generate(
            model_inputs.input_ids,
            max_new_tokens=512
        )
        # Keep only the newly generated tokens
        generated_ids = [
            output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
        ]

        response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
        return response

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """获取识别参数"""
        return {"max_token": self.max_token,
                "temperature": self.temperature,
                "top_p": self.top_p,
                "history_len": self.history_len}

# Chinese sentence splitter, subclassing CharacterTextSplitter
class ChineseTextSplitter(CharacterTextSplitter):
    def __init__(self, pdf: bool = False, **kwargs):
        super().__init__(**kwargs)
        self.pdf = pdf

    def split_text(self, text: str) -> List[str]:
        if self.pdf:
            # normalize whitespace in text extracted from PDFs
            text = re.sub(r"\n{3,}", "\n", text)
            text = re.sub(r"\s", " ", text)
            text = text.replace("\n\n", "")
        # sentence delimiter pattern (fullwidth/Chinese punctuation)
        sent_sep_pattern = re.compile(
            '([﹒﹔﹖﹗.。!?]["’”」』]{0,2}|(?=["‘“「『]{1,2}|$))')
        sent_list = []
        for ele in sent_sep_pattern.split(text):
            if sent_sep_pattern.match(ele) and sent_list:
                sent_list[-1] += ele
            elif ele:
                sent_list.append(ele)
        return sent_list

# Load a text file and split it into sentences
def load_file(filepath):
    loader = TextLoader(filepath, autodetect_encoding=True)
    textsplitter = ChineseTextSplitter(pdf=False)
    docs = loader.load_and_split(textsplitter)
    write_check_file(filepath, docs)
    return docs

# Write the split documents to a check file for manual inspection
def write_check_file(filepath, docs):
    folder_path = os.path.join(os.path.dirname(filepath), "tmp_files")
    if not os.path.exists(folder_path):
        os.makedirs(folder_path)
    fp = os.path.join(folder_path, 'load_file.txt')
    with open(fp, 'a+', encoding='utf-8') as fout:
        fout.write("filepath=%s,len=%s" % (filepath, len(docs)))
        fout.write('\n')
        for i in docs:
            fout.write(str(i))
            fout.write('\n')

# Split a list of indices into runs of consecutive indices
def separate_list(ls: List[int]) -> List[List[int]]:
    lists = []
    ls1 = [ls[0]]
    for i in range(1, len(ls)):
        if ls[i - 1] + 1 == ls[i]:
            ls1.append(ls[i])
        else:
            lists.append(ls1)
            ls1 = [ls[i]]
    lists.append(ls1)
    return lists

# FAISS vector search wrapper that merges neighbouring chunks from the same source
class FAISSWrapper(FAISS):
    chunk_size = 250
    chunk_conent = True
    score_threshold = 0

    def similarity_search_with_score_by_vector(
            self, embedding: List[float], k: int = 4, filter=None,fetch_k: Optional[int] = None
    ) -> List[Tuple[Document, float]]:
        if filter:
            embedding = [e for e, f in zip(embedding, filter) if f]
        if fetch_k is not None:
            k = fetch_k
        scores, indices = self.index.search(np.array([embedding], dtype=np.float32), k)
        docs = []
        id_set = set()
        store_len = len(self.index_to_docstore_id)
        for j, i in enumerate(indices[0]):
            if i == -1 or 0 < self.score_threshold < scores[0][j]:
                # i == -1 happens when fewer than k documents are returned
                continue
            _id = self.index_to_docstore_id[i]
            doc = self.docstore.search(_id)
            if not self.chunk_conent:
                if not isinstance(doc, Document):
                    raise ValueError(f"Could not find document for id {_id}, got {doc}")
                doc.metadata["score"] = int(scores[0][j])
                docs.append(doc)
                continue
            id_set.add(i)
            docs_len = len(doc.page_content)
            for k in range(1, max(i, store_len - i)):
                break_flag = False
                for l in [i + k, i - k]:
                    if 0 <= l < len(self.index_to_docstore_id):
                        _id0 = self.index_to_docstore_id[l]
                        doc0 = self.docstore.search(_id0)
                        if docs_len + len(doc0.page_content) > self.chunk_size:
                            break_flag = True
                            break
                        elif doc0.metadata["source"] == doc.metadata["source"]:
                            docs_len += len(doc0.page_content)
                            id_set.add(l)
                if break_flag:
                    break
        if not self.chunk_conent:
            return docs
        if len(id_set) == 0 and self.score_threshold > 0:
            return []
        id_list = sorted(list(id_set))
        id_lists = separate_list(id_list)
        for id_seq in id_lists:
            for id in id_seq:
                if id == id_seq[0]:
                    _id = self.index_to_docstore_id[id]
                    doc = self.docstore.search(_id)
                else:
                    _id0 = self.index_to_docstore_id[id]
                    doc0 = self.docstore.search(_id0)
                    doc.page_content += " " + doc0.page_content
            if not isinstance(doc, Document):
                raise ValueError(f"Could not find document for id {_id}, got {doc}")
            doc_score = min([scores[0][id] for id in [indices[0].tolist().index(i) for i in id_seq if i in indices[0]]])
            doc.metadata["score"] = int(doc_score)
            docs.append((doc, doc_score))

        #print(docs)  # uncomment to inspect the vector search results the LLM will then summarize
        return docs

# Load the QA data
def load_qa_data(filepath):
    with open(filepath, 'r', encoding='utf-8') as f:
        data = json.load(f)
    return data

# Get the context string stored in a document's metadata
def get_context(document):
    return document.metadata.get("context_str", "No relevant information available.")

@app.route('/query', methods=['POST'])
def handle_query():
    data = request.json
    query = data.get("query")
    if not query:
        return jsonify({"error": "No query provided"}), 400

    # Build the prompt from the retrieved context and invoke the model
    response = qa.invoke({"query": query})

    return jsonify({"response": response})

if __name__ == '__main__':
    # Load the QA data
    qa_data_path = '/data/xxxxx/xxxx_qa_pairs.json'
    qa_data = load_qa_data(qa_data_path)

    # Build one Document per QA pair; the source paragraph doubles as the context string
    docs = []
    for item in qa_data:
        paragraph = item["paragraph"]
        for qa_pair in item["qa_pairs"]:
            question = qa_pair["q"]
            expected_answer = qa_pair["a"]
            docs.append(Document(
                page_content=paragraph,
                metadata={
                    "source": paragraph,
                    "context_str": paragraph,
                    "question": question,
                    "expected_answer": expected_answer,
                },
            ))

    # embedding model name
    EMBEDDING_MODEL = 'text2vec'
    PROMPT_TEMPLATE = """Known information:
    {context_str}
    Based on the known information above, respond to the user's question concisely and professionally. If an answer cannot be derived from it, say 'The question cannot be answered with the given information' or 'Not enough relevant information has been provided', and do not include fabricated details in the answer. The question is: {question}"""
    # device the embedding model runs on
    EMBEDDING_DEVICE = "cuda"
    # number of top-k chunks returned from the vector store
    VECTOR_SEARCH_TOP_K = 5  # lower this if vector search returns too much text
    CHAIN_TYPE = 'stuff'
    embedding_model_dict = {
        "text2vec": "/data/model/text2vec-base-chinese/",
    }
    llm = Qwen()
    embeddings = HuggingFaceEmbeddings(model_name=embedding_model_dict[EMBEDDING_MODEL], model_kwargs={'device': EMBEDDING_DEVICE})

    # Initialize the FAISS vector store from the documents
    docsearch = FAISSWrapper.from_documents(docs, embeddings)

    prompt = PromptTemplate(
        template=PROMPT_TEMPLATE, input_variables=["context_str", "question"]
    )

    chain_type_kwargs = {"prompt": prompt, "document_variable_name": "context_str"}
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type=CHAIN_TYPE, 
        retriever=docsearch.as_retriever(search_kwargs={"k": VECTOR_SEARCH_TOP_K}), 
        chain_type_kwargs=chain_type_kwargs
    )

    app.run(host='0.0.0.0', port=5000)  # start the Flask service

The code above uses text2vec; download it from:

https://huggingface.co/shibing624/text2vec-base-chinese

Download link for the Qwen model:

https://huggingface.co/Qwen/Qwen2-1.5B-Instruct
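If you prefer to download both models from a script rather than from the web pages, here is a sketch using huggingface_hub's snapshot_download (the local paths are the ones assumed in the code above):

from huggingface_hub import snapshot_download

# download the embedding model and the Qwen2 instruct model into the paths the program expects
snapshot_download(repo_id="shibing624/text2vec-base-chinese", local_dir="/data/model/text2vec-base-chinese")
snapshot_download(repo_id="Qwen/Qwen2-1.5B-Instruct", local_dir="/data/model/Qwen2-1.5B-Instruct")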

Qwen model documentation:

https://qwen.readthedocs.io/en/latest/framework/Langchain.html

The original version of the code above (it cannot be run as-is) comes from:

https://qwen.readthedocs.io/en/latest/_sources/framework/Langchain.rst.txt

Once the program is running you can call it from Postman; a test request body looks like this:

{"query": "你的问题内容"}

In testing I found the quality uneven: sometimes it answers very well, well enough to handle questions that are rephrasings of the source context or even ones that require a small calculation; other times it misbehaves so badly that even a question taken verbatim from the QA pairs fails. Also, answers right after startup are poor and get somewhat better after a few more questions.

For example, the source data says that you have to pay tax once you earn 10,000 yuan. Ask "my monthly salary is 9,999, do I have to pay tax?" and the model answers no; ask "my monthly salary is 10,001, do I have to pay tax?" and it also answers correctly. Note that these are not questions from the QA pairs; they are answers the model produces after absorbing the material. If it did this every time it would be great, but unfortunately it does not.

This article was first published at http://blog.csdn.net/peihexian; reposting is not welcome.
