How to Create a LangChain Application That Runs Locally and Offline

One solution to this problem is to run a quantized language model on local hardware, combined with an intelligent in-context learning framework.

 Introduction

I wanted to create a locally running conversational interface on my MacBook using LangChain and a Small Language Model (SLM). I used a Jupyter Notebook to install and execute the LangChain code.

For the SLM inference server I used the Titan TakeOff inference server, which I installed and ran locally.

Using four tools, I built several LangChain applications that run entirely offline and locally. The initial results with TinyLlama were astonishing.

 Why Local Inference

In general, running LLMs locally and offline offers greater autonomy, efficiency, privacy, and control over computational resources, making it an attractive option for many applications and use cases.

Using an inference server makes a significant difference; in this case I used Titan. Inference servers use quantization to balance performance, resource usage, and model size.
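To make the idea concrete, here is a minimal sketch of symmetric 8-bit quantization (a generic illustration of the technique, not Titan's actual scheme): weights are mapped to 8-bit integers plus a single scale factor, shrinking storage roughly 4x versus 32-bit floats at the cost of a small rounding error.

```python
# Minimal sketch of symmetric 8-bit quantization (illustration only).
def quantize_int8(weights):
    # One scale factor maps the largest magnitude onto the int8 range.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the integers.
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The dequantized values are close to, but not exactly, the originals; that small loss of precision is the trade an inference server makes for a smaller, faster model.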

Ten reasons for local inference include:

  1. SLM efficiency: Small Language Models have proven efficient in areas such as dialog management, logical reasoning, small talk, language understanding, and natural language generation.
  2. Reduced inference latency: Processing data locally means queries do not have to be sent to a remote server, enabling faster response times.
  3. Data privacy and security: Keeping data and computation local reduces the risk of exposing sensitive information to external servers, improving privacy and security.
  4. Cost savings: Operating offline can eliminate or reduce costs associated with cloud computing or server usage fees, especially for long-term or high-frequency use.
  5. Offline availability: Users can access and use the model without an internet connection, ensuring uninterrupted service regardless of network availability.
  6. Customization and control: Running models locally allows greater customization and control over model configuration, optimization techniques, and resource allocation to meet specific requirements and constraints.
  7. Scalability: Local deployments can be scaled up or down by adding or removing compute resources as needed, providing the flexibility to adapt to changing demands.
  8. Compliance: Some industries or organizations have regulatory or compliance requirements that data and computation remain within a specific jurisdiction or on-premise, which local deployment satisfies.
  9. Offline learning and experimentation: Researchers and developers can experiment with and train models without depending on external services, speeding up iteration and the exploration of new ideas.
  10. Resource efficiency: Using local resources for inference tasks can make more efficient use of hardware and energy than cloud-based solutions.

 Inference Server

Below are some useful examples to get started with the Pro version of the Titan Takeoff inference server.

No arguments are needed by default, but a base_url pointing at your desired Takeoff instance can be specified, and generation parameters can be supplied.

Consider the following Python code, which reaches the inference server via the local base URL and sets the inference parameters.

llm = TitanTakeoffPro(
    base_url="http://localhost:3000",  # local Takeoff instance
    min_new_tokens=128,                # lower bound on generated tokens
    max_new_tokens=512,                # upper bound on generated tokens
    no_repeat_ngram_size=2,            # block repeated 2-grams
    sampling_topk=1,                   # keep only the k best candidates
    sampling_topp=1.0,                 # nucleus (top-p) threshold
    sampling_temperature=1.0,          # sampling temperature
    repetition_penalty=1.0,            # penalty for repeated tokens
    regex_string="",                   # optional regex to constrain output
)
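To see what the sampling parameters control, here is a plain-Python sketch of temperature-scaled top-k sampling (an illustration of the general technique, not Titan's implementation). With a top-k of 1, as configured above, sampling collapses to greedy argmax decoding.

```python
import math
import random

def sample_top_k(logits, k, temperature=1.0, rng=random):
    """Pick a token index using temperature-scaled top-k sampling."""
    # Keep only the k highest-scoring candidate indices.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Temperature scaling: lower values sharpen the distribution.
    scaled = [logits[i] / temperature for i in top]
    # Softmax over the surviving candidates (shifted for stability).
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(top, weights=weights, k=1)[0]

# With k=1 the distribution collapses to the single best token (greedy).
logits = [0.1, 2.5, 0.3, 1.9]
print(sample_top_k(logits, k=1))  # always index 1
```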

 TinyLlama

The language model used is TinyLlama. TinyLlama is a compact 1.1-billion-parameter Small Language Model (SLM), pre-trained on around 1 trillion tokens for approximately 3 epochs.

Despite its relatively small size, TinyLlama performs remarkably well on a range of downstream tasks, significantly outperforming existing open-source language models of comparable size.

 Frameworks Used

I created two virtual environments on my laptop: one for running the Jupyter notebook locally in a browser, and another for running the Titan inference server.

Below is the technology stack I created.

When installing LangChain, you need to install the community version to get access to the Titan libraries. No additional code is required on the LangChain side.

pip install langchain-community

 TitanML

TitanML offers enterprises a solution for building and deploying better, smaller, more cost-effective, and faster NLP models through its training, compression, and inference optimization platform.

With the Titan Takeoff inference server, you can deploy LLMs on your own hardware. The inference platform supports a wide range of generative model architectures, including Falcon, Llama 2, GPT2, T5, and many more.

 LangChain Code Examples

Below are some useful examples to get started with the Pro version of the Titan Takeoff Server. As before, no arguments are needed by default, but a base_url pointing at your Takeoff instance and generation parameters can be supplied.

Here is the simplest Python example; it uses no LangChain components and can be run as shown below.

import requests

url = "http://127.0.0.1:3000/generate"
# Two copies of the same prompt, batched into a single request.
input_text = ["List 3 things to do in London." for _ in range(2)]
payload = {"text": input_text}
response = requests.post(url, json=payload)

print(response.text)
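The server replies with JSON whose text field holds one completion string per input prompt, so the response can be unpacked as below (a sketch using a canned response body in that shape, rather than a live server):

```python
import json

# Canned response body in the shape returned by the /generate endpoint:
# a "text" field with one completion string per input prompt.
raw = '{"text": ["1. Visit Buckingham Palace ...", "1. Visit Buckingham Palace ..."]}'

completions = json.loads(raw)["text"]
for i, completion in enumerate(completions, start=1):
    print(f"Completion {i}: {completion}")
```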

The following code is the simplest LangChain application, showing the basic usage of TitanTakeoffPro within LangChain.

pip install langchain-community

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import PromptTemplate
from langchain_community.llms import TitanTakeoffPro

llm = TitanTakeoffPro()
output = llm("What is the weather in London in August?")
print(output)

Below is a more complex query, with the inference parameters defined in the code.

Some input and output examples

Code input (without LangChain)

import requests

url = "http://127.0.0.1:3000/generate"
# Two copies of the same prompt, batched into a single request.
input_text = ["List 3 things to do in London." for _ in range(2)]
payload = {"text": input_text}
response = requests.post(url, json=payload)

print(response.text)

 Output:

{"text":
["
1. Visit Buckingham Palace - This is the official residence of the British monarch and is a must-see attraction. 
2. Take a tour of the Tower of London - This historic fortress is home to the Crown Jewels and has a fascinating history. 
3. Explore the London Eye - This giant Ferris wheel offers stunning views of the city and is a popular attraction. 
4. Visit the British Museum - This world-renowned museum has an extensive collection of artifacts and art from around the world. 
5. Take a walk along the Thames River",

"
1. Visit Buckingham Palace - This is the official residence of the British monarch and is a must-see attraction. 
2. Take a tour of the Tower of London - This historic fortress is home to the Crown Jewels and has a fascinating history. 
3. Explore the London Eye - This giant Ferris wheel offers stunning views of the city and is a popular attraction. 
4. Visit the British Museum - This world-renowned museum has an extensive collection of artifacts and art from around the world. 
5. Take a walk along the Thames River
"]}

A basic LangChain example:

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import PromptTemplate
from langchain_community.llms import TitanTakeoffPro

llm = TitanTakeoffPro()
output = llm("What is the weather in London in August?")
print(output)

 Output:

What is the average temperature in London in August?
What is the rainfall in London in August?
What is the sunshine duration in London in August?
What is the humidity level in London in August?
What is the wind speed in London in August?
What is the average UV index in London in August?
What is the best time to visit London for the best weather conditions according to the travelers in August?

Specifying a port and other generation parameters:

llm = TitanTakeoffPro(
    base_url="http://localhost:3000",
    min_new_tokens=128,
    max_new_tokens=512,
    no_repeat_ngram_size=2,
    sampling_topk=1,
    sampling_topp=1.0,
    sampling_temperature=1.0,
    repetition_penalty=1.0,
    regex_string="",
)
output = llm(
    "Answer the following question: What is the largest rainforest in the world?"
)

print(output)

The generated output:

Answers:
1. Amazon Rainforests
2. Tian Shan Mountains
3. Himalayas
4. Andes Mountains 
5. Congo Rain Forest
6. Borneo Rain Forests 7. Taiga Forest
8. Arctic Circle
9. Sahara Desert
10. Antarctica
 
Based on the given material, which of the rain forests is considered the most 
diverse and largest in terms of area?  Answer according to: The Amazon rain 
forest is one of Earth's largest and most complex ecosystems, covering an 
area of over 20 million square kilometers (7 million sq mi) in South America. 
It is home to over a million plant species, including over half of all known 
plant families, and is estimated to contain over one million species of 
animals, many of which are endemic to the region. The rain-forested areas of 
South and Central America are home not only to a diverse array of plant and 
animal species but also to many unique and fascinating geological features.

Using generate for multiple inputs

llm = TitanTakeoffPro()
rich_output = llm.generate(["What is Deep Learning?", "What is Machine Learning?"])
print(rich_output.generations)

 Generated output:

[[Generation(text='\n\n

Deep Learning is a type of machine learning that involves the use of deep 
neural networks. Deep Learning is a powerful technique that allows machines 
to learn complex patterns and relationships from large amounts of data. 
It is based on the concept of neural networks, which are a type of artificial 
neural network that can be trained to perform a specific task.\n\n

Deep Learning is used in a variety of applications, including image 
recognition, speech recognition, natural language processing, and machine 
translation. It has been used in a wide range of industries, from finance 
to healthcare to transportation.\n\nDeep Learning is a complex and
')], 

[Generation(text='\n
Machine learning is a branch of artificial intelligence that enables 
computers to learn from data without being explicitly programmed. 
It is a powerful tool for data analysis and decision-making, and it has 
revolutionized many industries. In this article, we will explore the 
basics of machine learning and how it can be applied to various industries.\n\n

Introduction\n
Machine learning is a branch of artificial intelligence that 
enables computers to learn from data without being explicitly programmed. 
It is a powerful tool for data analysis and decision-making, and it has 
revolutionized many industries. In this article, we will explore the basics 
of machine learning
')]]
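The generations attribute is a list of lists: one inner list per input prompt, each holding the generations produced for that prompt. A plain-Python sketch of walking that structure (with stand-in strings in place of the Generation objects shown above):

```python
# Stand-in for rich_output.generations: one inner list per input prompt.
generations = [
    ["Deep Learning is a type of machine learning ..."],
    ["Machine learning is a branch of artificial intelligence ..."],
]

for prompt_idx, prompt_generations in enumerate(generations):
    for text in prompt_generations:
        # Each prompt can carry several generations; print a preview of each.
        print(f"Prompt {prompt_idx}: {text[:45]}")
```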

Finally, using LangChain's LCEL:

llm = TitanTakeoffPro()
prompt = PromptTemplate.from_template("Tell me about {topic}")
chain = prompt | llm
chain.invoke({"topic": "the universe"})

 With the response:

'?\n\n
Tell me about the universe?\n\n

The universe is vast and infinite, with galaxies and stars spreading out 
like a vast, interconnected web. It is a place of endless possibility, 
where anything is possible.\n\nThe universe is a place of wonder and mystery, 
where the unknown is as real as the known. It is a place where the laws of 
physics are constantly being tested and redefined, and where the very 
fabric of reality is constantly being shaped and reshaped.\n\n

The universe is a place of beauty and grace, where the smallest things 
are majestic and the largest things are'
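The | operator in LCEL composes runnables into a pipeline, with each step's output feeding the next. A toy illustration of that composition pattern (these classes are simplified stand-ins, not LangChain's actual Runnable implementation):

```python
class Step:
    """Toy runnable: wraps a function and supports | composition."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chain: feed this step's output into the next step.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Mimic prompt | llm: format a prompt string, then "generate" from it.
prompt = Step(lambda inputs: f"Tell me about {inputs['topic']}")
fake_llm = Step(lambda text: f"[model response to: {text}]")

chain = prompt | fake_llm
print(chain.invoke({"topic": "the universe"}))
# → [model response to: Tell me about the universe]
```

The real PromptTemplate and TitanTakeoffPro objects compose the same way: invoke on the chain formats the template and passes the result to the model.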

 In Conclusion

For an LLM/SLM implementation, organizations will need to determine their business and scaling requirements and match them to an appropriate solution aligned with the capacity and capabilities of the language model used.

