A Hands-On Deep Dive into Dify's Built-in DeepResearch: Peeling Back the Layers to See What It Can Really Do
Original · AI工具推荐官 · 2025-03-20 16:08 · Guangdong
In my previous article, "Dify 实现DeepResearch工作流拆解并再看升级版Dify能否搭建出Manus?", I took apart Dify's built-in DeepResearch workflow in depth, but space did not allow me to show its output in full. To make up for that, today's post is dedicated to presenting those outputs one by one. The principles and usage of DeepResearch are not repeated here; please refer to the article above.
Full Output of a DeepResearch Run
DeepResearch web search depth: 10
The article-generation prompt was as follows:
Search comprehensively and help me write an in-depth tutorial: build yourself a workhorse DeepResearch by reworking the information sources of Dify's built-in DeepResearch to add local documents, FireCrawl, and Baidu search
The DeepResearch web search results were as follows:
2/10th search executed.
### **Result 1:** [**Dify 实现DeepResearch工作流拆解并再看升级版Dify能否搭建出Manus?**](https://www.53ai.com/news/dify/2025032008746.html)
**URL:** [https://www.53ai.com/news/dify/2025032008746.html](https://www.53ai.com/news/dify/2025032008746.html)
**Relevance Score:** 0.6757865
**Content:**
In upcoming posts we will continue to explore, refine, and extend Dify's built-in DeepResearch, including expanding its information sources (local documents, the open-source Firecrawl, and integration of search engines such as Baidu and Bing), optimizing the workflow, and digging into Dify's Agent features. Stay tuned.
---
### **Result 2:** [**DeepResearch: Building a Research Automation App with Dify**](https://dify.ai/blog/deepresearch-building-a-research-automation-app-with-dify)
**URL:** [https://dify.ai/blog/deepresearch-building-a-research-automation-app-with-dify](https://dify.ai/blog/deepresearch-building-a-research-automation-app-with-dify)
**Relevance Score:** 0.5852203
**Content:**
DeepResearch automates multi-step searches and summarizes findings using LLMs. Built within Dify, "DeepResearch" uses nodes for iteration, search, and summarization, creating a workflow for efficient information gathering and report generation, saving time and effort. Fortunately, Dify, a low-code, open-source platform for LLM application development, solves this problem by automating workflows for multi-step searches and efficient summarization, requiring only minimal coding.
---
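The summary above describes the core loop: an iteration node drives repeated searching, and a summarization node compiles the findings into a report. A minimal sketch of that loop in plain Python, with hypothetical stand-in functions (in a real Dify workflow these are visual nodes wired on a canvas, not code):

```python
# Sketch of the iterate -> search -> summarize shape of Dify's DeepResearch.
# Every function body here is a hypothetical stand-in for an LLM or tool node.

def generate_query(topic: str, findings: list[str]) -> str:
    """Stand-in for the LLM node that proposes the next search query."""
    return f"{topic} (refinement #{len(findings) + 1})"

def search(query: str) -> str:
    """Stand-in for the search-tool node (Tavily, Firecrawl, etc.)."""
    return f"results for: {query}"

def summarize(findings: list[str]) -> str:
    """Stand-in for the LLM node that compiles the final report."""
    return "\n".join(f"- {f}" for f in findings)

def deep_research(topic: str, depth: int = 10) -> str:
    findings: list[str] = []
    for _ in range(depth):          # the iteration node, bounded by `depth`
        query = generate_query(topic, findings)
        findings.append(search(query))
    return summarize(findings)      # report-generation node

report = deep_research("Dify DeepResearch", depth=3)
```

The `depth: 10` setting shown at the top of this post corresponds to the loop bound: each pass refines the query based on what has been gathered so far.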
### **Result 3:** [**Open-Deep-Research-workflow-on-Dify/README.md at main - GitHub**](https://github.com/AdamPlatin123/Open-Deep-Research-workflow-on-Dify/blob/main/README.md)
**URL:** [https://github.com/AdamPlatin123/Open-Deep-Research-workflow-on-Dify/blob/main/README.md](https://github.com/AdamPlatin123/Open-Deep-Research-workflow-on-Dify/blob/main/README.md)
**Relevance Score:** 0.3224838
**Content:**
Open-Deep-Research-workflow-on-Dify/README.md at main · AdamPlatin123/Open-Deep-Research-workflow-on-Dify. The repository provides the "Deep Researcher On Dify" workflow as an importable .yml file, together with a PDF walkthrough, "Deep Researcher On Dify - Powered by Dify.pdf".
---
### **Result 4:** [**A Survey of Open-Source DeepResearch Implementations · 豆逗子的小黑屋**](https://weaxsey.org/articels/2025-03-06/)
**URL:** [https://weaxsey.org/articels/2025-03-06/](https://weaxsey.org/articels/2025-03-06/)
**Relevance Score:** 0.3175749
**Content:**
Graph wiring from the code: `section_builder.add_edge(START, "generate_queries")`, `section_builder.add_edge("generate_queries", "search_web")`, `builder.add_edge("write_final_sections", "compile_final_report")`. Prompt excerpts: "Your goal is to generate {number_of_queries} web search queries that will help gather information for planning the report sections." "Generate search queries that will help with planning the sections of the report." "You are an expert technical writer crafting targeted web search queries that will gather comprehensive information for writing a technical report section. Your goal is to generate {number_of_queries} search queries that will help gather comprehensive information about the section topic." "Generate a list of sections for the report." "If the section content does not adequately address the section topic, generate {number_of_follow_up_queries} follow-up search queries to gather missing information. If further research is required, provide a Python list of up to 3 search queries."
---
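The fragment above shows LangGraph-style edge wiring (`add_edge(START, "generate_queries")` and so on). To make the pipeline shape concrete, here is a toy graph runner that wires the same nodes in the same order; the `MiniGraph` class and all node functions are illustrative stand-ins, not code from the surveyed project:

```python
# Toy linear-graph runner mimicking the add_edge() wiring in the excerpt.
# State flows through generate_queries -> search_web -> compile_final_report.

START, END = "__start__", "__end__"

class MiniGraph:
    def __init__(self):
        self.edges = {}   # node name -> next node (linear chain for this sketch)
        self.nodes = {}   # node name -> callable taking and returning a state dict

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def run(self, state):
        node = self.edges[START]
        while node != END:
            state = self.nodes[node](state)
            node = self.edges[node]
        return state

g = MiniGraph()
g.add_node("generate_queries", lambda s: {**s, "queries": [f"q about {s['topic']}"]})
g.add_node("search_web", lambda s: {**s, "results": [q.upper() for q in s["queries"]]})
g.add_node("compile_final_report", lambda s: {**s, "report": "; ".join(s["results"])})
g.add_edge(START, "generate_queries")
g.add_edge("generate_queries", "search_web")
g.add_edge("search_web", "compile_final_report")
g.add_edge("compile_final_report", END)

state = g.run({"topic": "Dify"})
```

The real LangGraph `StateGraph` supports branching and conditional edges; this sketch only captures the linear query-plan-search-compile chain visible in the excerpt.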
### **Result 5:** [**The Most Complete Survey of Open-Source "Deep Researcher" Solutions - Zhihu**](https://zhuanlan.zhihu.com/p/24927851812)
**URL:** [https://zhuanlan.zhihu.com/p/24927851812](https://zhuanlan.zhihu.com/p/24927851812)
**Relevance Score:** 0.21344313
**Content:**
The Most Complete Survey of Open-Source "Deep Researcher" Solutions - Zhihu - [https://github.com/charliedream1/ai_wiki](https://github.com/charliedream1/ai_wiki) Additional verification logic: adding verification steps improves answer accuracy. Deep Research can apply quality-control mechanisms such as multi-source verification and logical deduction to ensure reliable results, avoiding the blind and excessive retrieval seen in traditional RAG, which tends to fall short in information integration and verification. The strengths are clear, but so are the weaknesses: as the implementations above show, besides slower responses and higher compute and network demands, Deep Research still draws its answers mainly from public web search results. [https://huggingface.co/blog/open-deep-research](https://huggingface.co/blog/open-deep-research) [https://github.com/huggingface/smolagents/tree/main/examples/open_deep_research](https://github.com/huggingface/smolagents/tree/main/examples/open_deep_research) 2.1.2 Jina-ai reproduction, GitHub (2.5k stars): [https://github.com/jina-ai/node-DeepResearch](https://github.com/jina-ai/node-DeepResearch) "Unlike the 'deep research' of OpenAI/Gemini/Perplexity, we focus solely on finding the right answer through an iterative process." 2.1.3 deep-research 2.1.4 open-deep-research, GitHub (4.2k stars): [https://github.com/nickscamara/open-deep-research](https://github.com/nickscamara/open-deep-research) firecrawl.dev/extract GitHub (742 stars): [https://github.com/zilliztech/deep-searcher](https://github.com/zilliztech/deep-searcher) Features:
- Private data search: makes the most of internal enterprise data while keeping it secure; can integrate online content when needed for more accurate answers.
- Vector database management: supports Milvus and other vector databases, with data partitioning for efficient retrieval.
- Flexible embedding options: compatible with a variety of embedding models.
- Multiple LLM support: DeepSeek, OpenAI, and other large models for intelligent Q&A and content generation.
- Document loader: local file loading is supported; a web crawler is under development.

Embedding models: Pymilvus built-in; OpenAI (requires env variable OPENAI_API_KEY); VoyageAI (requires env variable VOYAGE_API_KEY). LLM support: DeepSeek (DEEPSEEK_API_KEY), OpenAI (OPENAI_API_KEY), SiliconFlow (SILICONFLOW_API_KEY), TogetherAI (TOGETHER_API_KEY). Crawlers: FireCrawl (FIRECRAWL_API_KEY), Jina Reader (JINA_API_TOKEN), Crawl4AI (run `crawl4ai-setup` the first time).
---
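The deep-searcher excerpt above maps each provider to an environment variable (DEEPSEEK_API_KEY, FIRECRAWL_API_KEY, and so on). Here is a generic sketch of the kind of startup check such a tool might perform; the provider-to-variable table is taken from the excerpt, but the check itself is a common pattern, not deep-searcher's actual code:

```python
# Generic "are my provider keys configured?" startup check. The mapping of
# providers to env variables comes from the deep-searcher excerpt above.

import os

REQUIRED_ENV = {
    "deepseek": "DEEPSEEK_API_KEY",
    "openai": "OPENAI_API_KEY",
    "siliconflow": "SILICONFLOW_API_KEY",
    "togetherai": "TOGETHER_API_KEY",
    "firecrawl": "FIRECRAWL_API_KEY",
    "jina": "JINA_API_TOKEN",
}

def missing_keys(providers, env=os.environ):
    """Return the env variables still unset for the chosen providers."""
    return [REQUIRED_ENV[p] for p in providers if REQUIRED_ENV[p] not in env]

# With nothing exported, both keys are reported missing:
absent = missing_keys(["deepseek", "firecrawl"], env={})
```

Failing fast on missing keys at startup is friendlier than letting the first LLM or crawler call error out mid-research.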
3/10th search executed.
### **Result 1:** [**Unlocking a Free and Powerful Web Search Solution: Deploying Firecrawl and Hooking It into Dify**](https://www.junki.cn/archives/YkMm4qOQ)
**URL:** [https://www.junki.cn/archives/YkMm4qOQ](https://www.junki.cn/archives/YkMm4qOQ)
**Relevance Score:** 0.6175132
**Content:**
Unlocking a Free and Powerful Web Search Solution: Deploying Firecrawl and Hooking It into Dify - Junki Space. The workflow scrapes the contents of [https://www.baidu.com/s?wd={user input}](https://www.baidu.com/s?wd=%7B%E7%94%A8%E6%88%B7%E8%BE%93%E5%85%A5%7D), keeps only the tags whose class attribute contains `result`, and returns them in `links` format. A sample response for the query "全球票房最高的动画片是什么" ("What is the highest-grossing animated film of all time?") contains "text", "files", "sourceURL", and "url" fields plus a list of `www.baidu.com/link?url=...` redirect links. The scrape also picked up fragments of the Dify workflow definition itself: upload limits (file_size_limit: 15, image_file_size_limit: 10, workflow_file_upload_limit: 10), edges connecting start, tool, code, and iteration nodes, a tool node with provider_id: firecrawl and the input URL https://www.baidu.com/s?wd={{#sys.query#}}, and an LLM node whose prompt reads: "Use the following context inside the XML tags as knowledge you have learned. This knowledge comes from a web search; it was not provided by the user."
---
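The post above has Firecrawl fetch `https://www.baidu.com/s?wd={query}` and keep only elements whose class contains `result`, returned as links. A sketch of how such a request body could be built for a self-hosted Firecrawl instance; the `/v1/scrape` endpoint and the `formats`/`includeTags` field names follow my understanding of Firecrawl's API and should be verified against your deployment:

```python
# Build a Firecrawl scrape payload for a Baidu search-results page.
# Field names are assumptions based on Firecrawl's scrape API, not copied
# from the post's workflow YAML.

from urllib.parse import quote

def baidu_scrape_payload(query: str) -> dict:
    return {
        "url": f"https://www.baidu.com/s?wd={quote(query)}",  # percent-encode the query
        "formats": ["links"],        # the post returns results in links format
        "includeTags": [".result"],  # keep only elements with class "result"
    }

payload = baidu_scrape_payload("全球票房最高的动画片是什么")
# POST this payload to http://<your-firecrawl-host>/v1/scrape with the
# HTTP client of your choice.
```

Filtering to `.result` elements server-side keeps the LLM context small: Baidu's result blocks carry that class, so ads and page chrome are mostly dropped before summarization.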
### **Result 2:** [**Deploying Firecrawl Locally - Zhihu**](https://zhuanlan.zhihu.com/p/16646491901)
**URL:** [https://zhuanlan.zhihu.com/p/16646491901](https://zhuanlan.zhihu.com/p/16646491901)
**Relevance Score:** 0.5524679
**Content:**
Today let's talk about Firecrawl. This crawler is also built into Dify ... In the simplest case you only need to supply a URL, and firecrawl will scrape the relevant content; it can also extract information via an LLM. Firecrawl's hosted service is paid, with only 500 free credits, so next let's look at how to run it locally ourselves.
---
### **Result 3:** [**Deploying Firecrawl Locally - CSDN Blog**](https://blog.csdn.net/shujuelin/article/details/145022912)
**URL:** [https://blog.csdn.net/shujuelin/article/details/145022912](https://blog.csdn.net/shujuelin/article/details/145022912)
**Relevance Score:** 0.48996773
**Content:**
Today let's talk about Firecrawl. This crawler is built into Dify, so everyone can use it. Because the hosted version has a usage quota, I deployed it on my own server to use it for free. 1. firecrawl: FireCrawl is an innovative crawling tool that can scrape every accessible subpage of any website, with no sitemap required.
---
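Result 3 highlights that FireCrawl can crawl every reachable subpage of a site without a sitemap. A sketch of a crawl request body for a self-hosted instance; the `url`, `limit`, and `scrapeOptions` field names follow my understanding of Firecrawl's v1 crawl API and may differ in your version:

```python
# Build a Firecrawl crawl payload: start from a base URL and let the crawler
# discover subpages itself (no sitemap needed). Field names are assumptions
# based on Firecrawl's crawl API.

def crawl_request(base_url: str, limit: int = 50) -> dict:
    return {
        "url": base_url,
        "limit": limit,  # cap the page count; the hosted free tier is only ~500 credits
        "scrapeOptions": {"formats": ["markdown"]},  # markdown suits LLM ingestion
    }

req = crawl_request("https://example.com", limit=20)
# POST this to http://<your-firecrawl-host>/v1/crawl; crawl jobs are
# asynchronous, so you then poll the returned job id for results.
```

Capping `limit` matters even on a self-hosted deployment: an unbounded crawl of a large site can tie up the worker queue for a long time.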
### **Result 4:** [**fircrawl Local Deployment - Zhihu**](https://zhuanlan.zhihu.com/p/19853923174)
**URL:** [https://zhuanlan.zhihu.com/p/19853923174](https://zhuanlan.zhihu.com/p/19853923174)
**Relevance Score:** 0.40613216
**Content:**
fircrawl Local Deployment - Zhihu. Environment variables: `NUM_WORKERS_PER_QUEUE=8` (required); `PLAYWRIGHT_MICROSERVICE_URL=http://localhost:3000/scrape` (optional, set if you'd like to run a Playwright fallback). Node setup via nvm: `wget https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh`, then `source ~/.bashrc`, verify with `nvm -v`, and install Node through nvm. Docker registry mirrors: [https://docker.registry.cyou](https://docker.registry.cyou/), [https://dockercf.jsdelivr.fyi](https://dockercf.jsdelivr.fyi/), [https://dockerproxy.com](https://dockerproxy.com/). If Docker runs out of disk space, either change the Docker Root Dir to point at a larger directory, or expand the space under the default image location /var/lib/docker (the excerpt shows `dmsetup load` and `dmsetup resume` being used to resize the thin device). docker s
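The excerpt above lists Docker registry mirrors and suggests pointing the Docker data directory at a larger disk. A small sketch that generates an `/etc/docker/daemon.json` combining both settings; the mirror URLs come from the excerpt, the `/data/docker` path is a hypothetical example, and the `registry-mirrors`/`data-root` keys follow standard Docker daemon configuration:

```python
# Generate a daemon.json that sets registry mirrors (from the excerpt) and
# relocates Docker's data directory to a larger disk. "/data/docker" is a
# hypothetical example path.

import json

def daemon_config(mirrors, data_root="/data/docker"):
    return json.dumps(
        {"registry-mirrors": mirrors, "data-root": data_root},
        indent=2,
    )

cfg = daemon_config([
    "https://docker.registry.cyou",
    "https://dockercf.jsdelivr.fyi",
    "https://dockerproxy.com",
])
# Write cfg to /etc/docker/daemon.json, then restart the Docker daemon
# for the settings to take effect.
```

Relocating `data-root` is usually simpler and less error-prone than resizing the devicemapper thin device with `dmsetup`, which is what the scraped excerpt was attempting.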