Environment: python3.10
Install LangChain
Management
docs
Recorded on 2024/05/30
Supports pip or conda
#
# this package is essential
#
pip install langchain
#
# Third-party integration packages
#
pip3 install langchain-community -i http://aaa.aaa.aaa.aaa/repository/pypi/simple --trusted-host aaa.aaa.aaa.aaa
#
Collecting langchain-community
Downloading http://aaa.aaa.aaa.aaa:xxxx/repository/pypi/packages/langchain-community/0.2.1/langchain_community-0.2.1-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 76.0 MB/s eta 0:00:0
Collecting dataclasses-json<0.7,>=0.5.7 (from langchain-community)
Downloading http://aaa.aaa.aaa.aaa:xxxx/repository/pypi/packages/dataclasses-json/0.6.6/dataclasses_json-0.6.6-py3-none-any.whl (28 kB)
Collecting marshmallow<4.0.0,>=3.18.0 (from dataclasses-json<0.7,>=0.5.7->langchain-community)
Downloading http://aaa.aaa.aaa.aaa:xxxx/repository/pypi/packages/marshmallow/3.21.2/marshmallow-3.21.2-py3-none-any.whl (49 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.3/49.3 kB 11.3 MB/s eta 0:00:00
Collecting typing-inspect<1,>=0.4.0 (from dataclasses-json<0.7,>=0.5.7->langchain-community)
Collecting mypy-extensions>=0.3.0 (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain-community)
Downloading http://aaa.aaa.aaa.aaa:xxxx/repository/pypi/packages/mypy-extensions/1.0.0/mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Installing collected packages: mypy-extensions, marshmallow, typing-inspect, dataclasses-json, langchain-community
pip3 install langchain-core
# [note]
# the langchain package already contains langchain-core
#
#
# the OpenAI integration is a separate package
#
pip install langchain-openai
#
# update everything to the latest versions
#
pip install --upgrade --quiet langchain langchain-community langchainhub langchain-openai langchain-chroma bs4
pip install langchain-community
pip install langchain-experimental
pip install langgraph
conda
conda install langchain -c conda-forge
Source Install
Dir: PATH/TO/REPO/langchain/libs/langchain
pip install -e .
A proxy is still needed to reach the API:
import os
os.environ['http_proxy'] = 'http://127.0.0.1:xxxx'
os.environ['https_proxy'] = 'http://127.0.0.1:xxxx'  # a local proxy usually speaks plain HTTP for both schemes
from langchain_openai import OpenAI  # langchain.llms.OpenAI is deprecated since 0.2
# llm = OpenAI(openai_api_key="...")
llm = OpenAI()  # reads the key from the OPENAI_API_KEY environment variable
Demo1
pip install -qU langchain-openai
import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4")
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
structured_llm = model.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=None)
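Under the hood, with_structured_output asks the model for JSON that matches the schema and validates it into a Joke object. A minimal stdlib sketch of just that validation step (the raw JSON string below is a hypothetical model reply, not real output):

```python
import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class Joke:
    """Stdlib mirror of the pydantic Joke schema above."""
    setup: str
    punchline: str
    rating: Optional[int] = None


# a hypothetical raw JSON reply from the model
raw = '{"setup": "Why was the cat sitting on the computer?", "punchline": "To keep an eye on the mouse!"}'

# parse and coerce it into the typed object, as with_structured_output would
joke = Joke(**json.loads(raw))
print(joke.punchline)  # To keep an eye on the mouse!
```

The real implementation uses pydantic for validation, so malformed fields raise errors instead of silently passing through.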
Demo2
pip install openai chromadb tiktoken
Win+R → control system → set the environment variables
GNN Frameworks (Graph Neural Network frameworks)
GNNAdvisor
GNNAdvisor uses PyTorch as its frontend; the underlying kernels are written in CUDA and integrated into PyTorch through a PyTorch wrapper.
It can be viewed as an operator with kernel optimizations and runtime support.
Embedding parallelism
Every node in a GNN carries a high-dimensional embedding, and the embedding dimension is adjustable.
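Because sum-style neighbor aggregation is independent per embedding dimension, the embedding can be split into chunks, each chunk aggregated in parallel, and the results concatenated. A NumPy sketch of this idea (the graph, sizes, and names are illustrative, not from GNNAdvisor):

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 4, 8
emb = rng.standard_normal((num_nodes, dim))  # one dim-8 embedding per node

# toy adjacency matrix: row i holds the neighbors of node i
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)

# full aggregation: sum neighbor embeddings in one matrix multiply
full = adj @ emb

# dimension-parallel aggregation: split the embedding into two chunks,
# aggregate each independently (e.g. on separate workers), then concatenate
chunks = [adj @ part for part in np.split(emb, 2, axis=1)]
parallel = np.concatenate(chunks, axis=1)

assert np.allclose(full, parallel)
```

The chunked result matches the full aggregation exactly, which is what makes dimension-wise workload partitioning safe for this class of operators.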
PyTorch and TensorFlow ship with many built-in NN operators that are heavily optimized for Euclidean data, but non-Euclidean data such as graphs poses challenges:
- torch-scatter performs poorly on large sparse graphs with high-dimensional node embeddings: its kernels rely on expensive atomic operations to propagate node embeddings, a design borrowed from graph processing systems that does not scale to this setting;
- DGL uses an off-the-shelf SpMM (e.g. csrmm2 from the cuSPARSE library) for sum-reduced aggregation, and falls back to its own CUDA kernels for aggregations involving edge attributes.
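The two aggregation strategies in the bullets can be contrasted in NumPy: np.add.at plays the role of torch-scatter's atomic scatter-add, while a matrix multiply against the adjacency matrix (a dense stand-in for cuSPARSE's csrmm2 SpMM) computes the same sum-reduced aggregation. The tiny graph here is a made-up example:

```python
import numpy as np

# edge list of a tiny directed graph: src -> dst
src = np.array([0, 1, 2, 2])
dst = np.array([1, 2, 0, 1])
emb = np.arange(12, dtype=float).reshape(3, 4)  # 3 nodes, dim-4 embeddings

# scatter-style aggregation: accumulate each source embedding into its
# destination row (np.add.at is the unbuffered "atomic add" analogue)
out_scatter = np.zeros_like(emb)
np.add.at(out_scatter, dst, emb[src])

# SpMM-style aggregation: build the (dst x src) adjacency matrix and multiply
adj = np.zeros((3, 3))
adj[dst, src] = 1.0
out_spmm = adj @ emb

assert np.allclose(out_scatter, out_spmm)
```

Both produce identical sums; the performance gap on GPUs comes from how the work is scheduled, since atomic adds serialize colliding updates while SpMM kernels batch them.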
Euclidean data
import numpy as np

# generate a random 3x3x3 integer array; note that np.random.randint((3, 3, 3))
# is a bug -- the shape must be passed via the size argument
random_3d_array = np.random.randint(0, 10, size=(3, 3, 3))
print(random_3d_array)
# example output:
# [[[8 4 3]
#   [7 6 1]
#   [8 3 2]]
#
#  [[2 5 8]
#   [2 3 6]
#   [0 7 4]]
#
#  [[8 6 6]
#   [3 6 2]
#   [9 2 0]]]
Accessing the data with NumPy
# access the element at index (1, 1, 1)
element = random_3d_array[1, 1, 1]
print(element)
# access all elements of the first layer
first_layer = random_3d_array[0, :, :]
print("layer:", first_layer)
# access all elements of the first column
first_column = random_3d_array[:, :, 0]
print("column:", first_column)