2025 | Selected Large-Model Research Papers from Meituan


This article collects recent academic papers from Meituan's technical teams in the large-model space, covering large language models, LLM systems and architecture optimization, multimodal understanding and generation, and LLM evaluation. We hope it proves helpful or inspiring for your study and work.

Large Language Models

01. Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning. NeurIPS 2024. (PDF)

02. Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models. (PDF)

03. SEAS: Self-Evolving Adversarial Safety Optimization for Large Language Models. AAAI 2025. (PDF)

04. Learning or Self-aligning? Rethinking Instruction Fine-tuning. ACL 2024. (PDF)

05. Earlier Tokens Contribute More: Learning Direct Preference Optimization from Temporal Decay Perspective. ICLR 2025. (PDF)

06. DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning. ACL 2024. (PDF)

07. AgentRefine: Enhancing Agent Generalization through Refinement Tuning. ICLR 2025. (PDF)

LLM Systems and Architecture Optimization

01. EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference. (PDF)

02. FPTQ: Fine-grained Post-Training Quantization for Large Language Models. (PDF)

03. A Speed Odyssey for Deployable Quantization of LLMs. (PDF)

04. Flash Communication: Reducing Tensor Parallelization Bottleneck for Fast Large Language Model Inference. (PDF)

05. Speculative Decoding via Early-exiting for Faster LLM Inference with Thompson Sampling Control Mechanism. ACL 2024. (PDF)

Multimodal Understanding and Generation

01. Enhancing Multilingual Speech Recognition Through Language Prompt Tuning and Frame-level Language Adapter. ICASSP 2024. (PDF)

02. MobileVLM V2: Faster and Stronger Baseline for Vision Language Model. (PDF)

03. Denoising with a Joint-Embedding Predictive Architecture. ICLR 2025. (PDF)

04. Lumen: Unleashing Versatile Vision-centric Capabilities of Large Multimodal Models. NeurIPS 2024. (PDF)

05. LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding. CVPR 2025. (PDF)

LLM Evaluation

01. Who’s the MVP? A Game-theoretic Evaluation Benchmark for Modular Attribution in LLM Agents. (PDF)

02. Leveraging Dual Process Theory in Language Agent Framework for Real-time Simultaneous Human-AI Collaboration. (PDF)

03. Q-Eval-100K: Evaluating Visual Quality and Alignment Level for Text-to-Vision Content. CVPR 2025. (PDF)

04. Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs. ACM MM 2024. (PDF)

05. A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily. NAACL 2024. (PDF)

Meituan Research Collaboration

Meituan Research Collaboration works to build bridges and platforms between Meituan's technical teams and universities, research institutes, and think tanks. Drawing on Meituan's rich business scenarios, data resources, and real industrial problems, it pursues open innovation across fields such as robotics, artificial intelligence, big data, the Internet of Things, autonomous driving, and operations research and optimization, jointly exploring frontier technologies and major industry questions, promoting industry-academia exchange and the transfer of research results, and supporting the development of outstanding talent. Looking ahead, we hope to collaborate with more faculty and students from universities and research institutes. Interested faculty and students are welcome to email: meituan.oi@meituan.com.
