
Chen Chaoshuai's Blog

Doing research with a calm, steady mind

  • Posts (35)
  • Favorites
  • Follow

Original · A 2024 survey roundup on large (foundation) models: Training and Serving System of Foundation Models: A Comprehensive Survey

2024-01-22 01:20:08 1197

Original · Lyfe Agents: Generative agents for low-cost real-time social interaction

As artificial intelligence advances rapidly, generative agents show growing potential for simulating complex social behavior. One challenge persists, however: how do you keep agents responsive in real-time interaction while holding compute costs down? A recent piece of work, Lyfe Agents, offers an exciting answer. Paper title: Lyfe Agents: Generative agents for low-cost real-time social interactions

2024-01-21 23:59:22 493

Original · Strategic play in the Werewolf game with RL-trained language agents (Language Agents with RL for Strategic Play in the Werewolf Game)

In AI, the rise of large language models (LLMs) is opening new ground. Most research focuses on single agents or cooperative tasks; complex multi-agent settings remain far less explored. This work applies reinforcement learning (RL) to LLMs to build language agents with strong strategic reasoning for social-deduction games such as Werewolf.

2024-01-21 23:58:47 597

Original · Identifying the Risks of LM Agents with an LM-Emulated Sandbox

Language model (LM) agent technology is developing rapidly, producing powerful tools such as ChatGPT plugins. The accompanying risks cannot be ignored, though: everything from private-data leaks to financial loss keeps being amplified. Traditional risk identification is time- and labor-intensive, and its cost climbs with tool complexity; under these conditions, surfacing low-probability but high-severity failures is a genuine challenge. (Paper: Identifying the Risks of LM Agents with an LM-Emulated Sandbox)

2024-01-21 23:58:07 406

Original · Exploring collaboration mechanisms for LLM agents from a social-psychology view (Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View)

As natural language processing (NLP) reaches ever deeper into human society, a question presents itself: can a multi-agent system built from large language models (LLMs) imitate the collaborative intelligence of human groups?

2024-01-21 23:57:35 562

Original · The LLM-Co framework: built for agent coordination (Evaluating Multi-Agent Coordination Abilities in Large Language Models)

Building agents that can cooperate with humans and with other systems is a closely watched topic in AI, and large language models (LLMs), with their strong natural-language understanding and generation, have become an emerging force in it. This study evaluates how LLM-based agents perform across different coordination scenarios and proposes a coordination framework designed for LLMs: LLM-Coordination (LLM-Co).

2024-01-21 23:57:00 566

Original · Multi-agent cooperation: agents that can work with humans (Building Cooperative Embodied Agents Modularly with Large Language Models)

2024-01-21 23:56:22 544

Original · Avalon's game of thoughts: battling deception through recursive contemplation (Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation)

2024-01-21 22:51:24 1954

Original · AGENTVERSE: facilitating multi-agent collaboration and exploring emergent behaviors (Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors)

2024-01-21 22:12:26 1126

Original · [Agent paper] A new yardstick for evaluating LLM intelligence as agents: AGENTBENCH (AgentBench: Evaluating LLMs as Agents)

2024-01-21 19:19:28 1631

Original · [Paper notes][Memory] Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication C~

2023-11-11 04:27:13 264

Original · [Paper notes][Memory] Tempo: Accelerating Transformer-Based Model Training through Memory Footprint Reduction

2023-11-04 12:35:05 154

Original · [Paper notes][Memory] FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks

2023-11-04 10:27:10 310 2

Original · [Paper notes][Memory] SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

2023-11-04 02:24:29 214 2

Original · [Paper notes][Memory] SwapAdvisor: Pushing Deep Learning Beyond the GPU Memory Limit via Smart Swapping

2023-11-04 00:44:04 286 2

Original · [Paper notes][Memory] DeepUM: Tensor Migration and Prefetching in Unified Memory

2023-11-03 03:39:15 343 6

Original · [Paper notes][Memory] Capuchin: Tensor-based GPU Memory Management for Deep Learning

2023-11-02 23:48:17 233 2

Original · [Paper notes][Memory] Paper-notes index

An index of these paper notes, continuously updated

2023-11-02 16:57:39 109

Original · [Paper notes][Memory] Buddy Compression: Enabling Larger Memory for Deep Learning and HPC Workloads on GPUs

2023-11-02 16:54:55 149

Original · [Deepspeed-DeepSpeedZeroOptimizer-02] A close read of the ZeRO source, part 02: DeepSpeedZeroOptimizer (from init through the ZeRO-1/2 training flow)

2023-10-29 17:00:33 973

Original · [Deepspeed-DeepSpeedZeroOptimizer-01] A close read of the ZeRO source, part 01: DeepSpeedZeroOptimizer (ZeRO-1, ZeRO-2)

2023-10-28 17:09:29 1751 11
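The core idea that these ZeRO source-reading posts walk through can be sketched roughly as follows: under ZeRO-1/2, each data-parallel rank keeps optimizer state only for its own shard of the flattened parameters. This is a simplified illustration, not DeepSpeed's actual DeepSpeedZeroOptimizer code; the `shard_bounds` helper is hypothetical.

```python
# Simplified sketch of ZeRO-1-style optimizer-state partitioning.
# Not DeepSpeed's real implementation: it only illustrates how each
# data-parallel rank can own a contiguous slice of a flat parameter
# buffer and allocate optimizer state (e.g. Adam moments) for that
# slice alone, shrinking per-rank state memory by ~1/world_size.

def shard_bounds(numel: int, rank: int, world_size: int):
    """Return the [start, end) slice of a flat buffer owned by `rank`."""
    per_rank = (numel + world_size - 1) // world_size  # ceil division
    start = min(rank * per_rank, numel)
    end = min(start + per_rank, numel)
    return start, end

flat_params = list(range(10))   # stand-in for a flattened fp32 param buffer
world_size = 4

shards = [shard_bounds(len(flat_params), r, world_size) for r in range(world_size)]
owned = [flat_params[s:e] for s, e in shards]  # each rank's local shard
print(shards)  # [(0, 3), (3, 6), (6, 9), (9, 10)]
```

Each rank updates only its shard and then all-gathers the refreshed parameters, which is the flow the posts trace through the actual source.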

Original · [Deepspeed-Adagrad] A close read of DeepSpeed's Adagrad implementation

2023-10-26 01:21:43 149

Original · [Deepspeed-Adam] A close read of DeepSpeed's Adam implementation (cpu_adam, fused_adam)

A close read of DeepSpeed's Adam implementation, covering both the CPU version of Adam and the highly optimized GPU version.

2023-10-22 13:54:49 939 2
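For reference alongside that post, the textbook Adam update that cpu_adam and fused_adam implement (in heavily optimized C++/CUDA form) looks roughly like this in plain Python; this is a minimal sketch of the algorithm, not DeepSpeed's code:

```python
import math

def adam_step(param, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One scalar Adam update (Kingma & Ba). DeepSpeed's cpu_adam/fused_adam
    apply the same math vectorized over whole parameter tensors."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** step)             # bias correction
    v_hat = v / (1 - beta2 ** step)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Tiny usage demo: a few steps on one scalar parameter.
p, m, v = 1.0, 0.0, 0.0
for step in range(1, 4):
    p, m, v = adam_step(p, grad=0.5, m=m, v=v, step=step)
```

Note how bias correction makes the very first step move by roughly `lr` regardless of the raw gradient scale, which is part of why the fused kernels fold the corrections into precomputed coefficients.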

Original · 15. ZeRO-infinity: breaking the GPU memory wall for extreme scale deep learning

This paper proposes an approach to breaking past the GPU memory limit for extreme-scale deep learning tasks.

2023-09-09 02:06:24 125

Repost · 14. Chimera: efficiently training large-scale neural networks with bidirectional pipelines (reading notes)

2023-09-09 02:03:57 93

Repost · 13. Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM (reading notes)

2023-09-09 02:00:26 128

Original · 12. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models (reading notes)

2023-09-09 01:58:59 129

Repost · 8. Generating Training Data with Language Models: Towards Zero-Shot Language Understanding (reading notes)

2023-09-09 01:03:16 58

Repost · 5. Decision Transformer: Reinforcement Learning via Sequence Modeling (reading notes)

2023-09-09 00:56:33 63

Repost · 3. [pipeline parallelism] GPipe: efficient training of giant neural networks using pipeline parallelism (reading notes)

2023-09-09 00:51:19 37

Repost · 1. [distributed MoE model training] FasterMoE: modeling and optimizing training of large-scale dynamic pre-trained models (reading notes)

2023-09-09 00:39:54 112

Original · SpringMVC 4.3 + JDK 1.8 + Tomcat 8.5: a first small app and a quick test

After several days of study, I was ready to get hands-on and deepen my understanding of SpringMVC by verifying the code and techniques from the book. 1. Create a new project in IDEA; select Spring, then Spring MVC, and check Web Application. 2. Enter a project name (mine is myspringmvc). 3. IDEA imports the jars for us; they all sit under the lib directory. 4. Download the latest Tomcat from the official Apache site, tomc...

2018-12-28 10:57:39 643 1

Original · Configuring admin, continued

Picking up from the previous post: in settings, below sys.path.insert(0, os.path.join(BASE_DIR, "apps")), add the line sys.path.insert(0, os.path.join(BASE_DIR, "etc_apps")) so that xadmin can be found from the command line. Under the app directory, create a new adminx.py (since userprofile overrides the user table, there is no need to register...

2018-09-19 00:11:42 249

Original · Setting up the backend database

Time to configure Django. With the installation work done, start with settings: DATABASES = {'default': {'ENGINE': 'django.db.backends.mysql', 'NAME': 'hotel', 'USER': 'root', 'PASSWORD': 'password', 'HOST': '127.0.0.1'}}. Configure this first, then create a ... in MySQL...

2018-09-18 21:11:18 696
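The settings fragment quoted in that excerpt, reformatted as valid Python (the database name, user, and password are the post's example values, not real credentials; adjust them to your own environment):

```python
# settings.py — MySQL backend configuration as described in the post.
# NAME/USER/PASSWORD here are the post's example values only.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'hotel',
        'USER': 'root',
        'PASSWORD': 'password',
        'HOST': '127.0.0.1',
    }
}
```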

Original · Course project: a small hotel management system, 01

1. Requirements: guest check-in and check-out; room reservation; room changes; extended stays; discounts. 2. Architecture: B/S. The B/S (Browser/Server) structure emerged with the rise of Internet technology as a variation on, or improvement of, the C/S structure; under it, the user interface is delivered entirely through a web browser. 3. Languages: Python, HTML, CSS, JavaScript...

