1. Software Introduction
Download links for the program and source code are provided at the end of this article.
autoMate is an open-source, AI-driven local automation assistant: you describe work in natural language and the computer carries it out on its own. Like Manus, Computer Use Agent (CUA), and OmniParser, it is a computer-use agent. Its motto: "automate tedious work and bring your time back to life."
2. Redefine Your Relationship with Your Computer
Unlike traditional RPA tools, which are cumbersome to use, autoMate leverages the power of large language models: just describe a task in natural language and it completes the entire automation workflow. Say goodbye to repetitive work and focus on what truly creates value!
3. Project Introduction
autoMate is a revolutionary AI + RPA automation tool built on OmniParser. It can:
- 📊 Understand your requirements and automatically plan tasks
- 🔍 Intelligently comprehend screen content, simulating human vision and operations
- 🧠 Make autonomous decisions, judging and taking actions based on task requirements
- 💻 Support local deployment, protecting your data security and privacy
4. Features
- 🔮 No-Code Automation - Describe tasks in natural language, no programming knowledge required
- 🖥️ Full Interface Control - Support operations on any visual interface, not limited to specific software
- 🚅 Simplified Installation - Support for Chinese environment, one-click deployment
5. Quick Start
Download links are provided at the end of this article.
📦 Installation
We strongly recommend installing Miniconda first and using it to install the dependencies. There are many tutorials available online, or you can ask an AI for help. Then run the following commands to set up the environment:
# Clone the project
git clone https://github.com/yuruotong1/autoMate.git
cd autoMate
# Create python3.12 environment
conda create -n "automate" python==3.12
# Activate environment
conda activate automate
# Install dependencies
python install.py
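If you want a quick sanity check before launching, you can try importing the heavier dependencies from inside the activated environment. This is only a sketch and assumes the install script pulled in torch (which autoMate uses for OCR and visual annotation, as noted in the FAQ below); adjust the import if your installation reports something different:
# Optional sanity check, run inside the "automate" environment
python -c "import torch; print('torch', torch.__version__)"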
After installation, you can start the application using the command line:
python main.py
Then open http://localhost:7888/ in your browser to configure your API key and basic settings.
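Before pasting your key into the settings page, you may want to verify that it works against an OpenAI-compatible endpoint. The snippet below is a minimal sketch using the official openai Python package, independent of autoMate itself; the key and base_url are placeholders (base_url is only needed if you route through a proxy such as yeka, and the exact URL must come from your provider).
# check_key.py - standalone sanity check for an OpenAI-compatible API key (not part of autoMate)
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                     # your API key (placeholder)
    # base_url="https://<your-proxy>/v1"  # uncomment and fill in only if you use a proxy such as yeka
)

# A one-token round trip is enough to confirm the key and endpoint are accepted
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)
print(resp.choices[0].message.role)  # prints "assistant" if the call succeeded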
🔔 Note
Currently tested and supported models are as follows:
PS: Below are the large-model vendors that have been tested and are known to work. These vendors have no relationship with us, so we make no promises about after-sales service, functional guarantees, or stability maintenance. Please weigh any payment decisions carefully.
Vendor | Model
---|---
yeka | gpt-4o, o1
openai | gpt-4o, gpt-4o-2024-08-06, gpt-4o-2024-11-20, o1, gpt-4.5-preview-2025-02-27
📝 FAQ
What models are supported?
Currently only OpenAI series models are supported. If you can't access OpenAI in China, we recommend using yeka as a proxy.
Why don't we support other models? We rely on multimodal plus structured-output capabilities, and few other model vendors support both at the same time. Adapting to other models would require significant changes to the underlying architecture, and we can't guarantee the results. However, we are actively looking for solutions and will update as soon as one is available.
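To make "multimodal plus structured output" concrete, the sketch below shows the shape of such a call against the public OpenAI chat-completions API: an image goes in with the prompt, and the reply is forced to match a JSON schema. It is only an illustration of the capability autoMate relies on, not autoMate's actual code; the screenshot URL and schema are invented for the example.
# capability_demo.py - illustrates multimodal input + structured (JSON-schema) output
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder key

resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Which button should I click to save the file?"},
            # placeholder screenshot URL - replace with a real image
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
        ],
    }],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "click_action",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "element": {"type": "string"},
                    "reason": {"type": "string"},
                },
                "required": ["element", "reason"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # JSON that conforms to the schema above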
Why is my execution speed slow?
If your computer doesn't have a dedicated NVIDIA graphics card, it will run slower, because we frequently call OCR for visual annotation, which consumes a lot of GPU resources. We are actively optimizing and adapting. We recommend an NVIDIA graphics card with at least 4 GB of VRAM, and your CUDA version should match your torch version:
- Run pip list to check the installed torch version;
- Check the supported CUDA version on the official PyTorch website;
- Uninstall the installed torch and torchvision;
- Copy the official torch installation command and reinstall a torch build that matches your CUDA version.
For example, if your CUDA version is 12.4, you need to install torch using the following commands:
pip3 uninstall -y torch torchvision
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
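After reinstalling, you can confirm that torch actually sees the GPU with a standard PyTorch check (a generic check, not an autoMate command):
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
If it prints True along with a CUDA version such as 12.4, the GPU-accelerated build is in place.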