A Roundup of Computer Vision Foundation Models: 13 Model Families, 85 Variants

Anyone working in vision knows how expensive it is to obtain large amounts of labeled data. To address this, researchers pretrain vision foundation models on unlabeled data, image-text pairs, or other multimodal data, using objectives such as contrastive learning and masked reconstruction, and then adapt them to all kinds of downstream tasks such as object detection and semantic segmentation. Over the past year, rapid progress in LLMs and multimodal learning has driven a wave of new computer vision foundation models.
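To make the contrastive objective above concrete, here is a minimal PyTorch sketch of a CLIP-style symmetric contrastive (InfoNCE) loss, in the spirit of the CLIP-family models in the list below. The function name, batch layout, and temperature value are illustrative assumptions, not code from the survey; a masked-reconstruction objective would instead regress randomly masked image patches.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb: torch.Tensor,
                                text_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) outputs of the two encoders; row i of
    each tensor comes from the same image-text pair, and every other row in
    the batch serves as a negative.
    """
    # Normalize so the dot products below are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; the diagonal holds the positive pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    img = torch.randn(8, 512)
    txt = torch.randn(8, 512)
    print(clip_style_contrastive_loss(img, txt))
```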

The number of published computer vision foundation models is by now considerable, and these models are highly valuable research subjects for anyone in the field. To help you keep up with the latest progress in this area and publish top-conference work of your own, today I'm sharing a survey paper whose authors systematically organize the foundation models in computer vision: 13 model families and 85 variants in total, ranging from the earliest LeNet and ResNet to the latest SAM and GPT-4.

Survey link: https://arxiv.org/pdf/2307.13721.pdf

In addition, I've also compiled 120 must-read representative papers on CV algorithms and models from 2021-2023; code for some of them has been open-sourced.

Although existing methods perform well, it's clear that computer vision foundation models still have enormous room for improvement. I hope this material helps you get a complete picture of how the field has evolved, trace the lineage of each model, and find better solutions of your own.

Paper list:

Surveys(12)

  • Foundational Models Defining a New Era in Vision: A Survey and Outlook 2023

  • A Survey of Large Language Models 2023

  • Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond 2023

  • Multimodal Learning with Transformers: A Survey 2023

  • Self-Supervised Multimodal Learning: A Survey 2023

  • Vision-and-Language Pretrained Models: A Survey 2022

  • A Survey of Vision-Language Pre-Trained Models 2022

  • Vision-Language Models for Vision Tasks: A Survey 2022

  • A Comprehensive Survey on Segment Anything Model for Vision and Beyond 2023

  • Vision-language pre-training: Basics, recent advances, and future trends 2022

  • Towards Open Vocabulary Learning: A Survey 2023

  • Transformer-Based Visual Segmentation: A Survey 2023

Papers

2021(11)

  • Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision 2021-02-11

  • Learning Transferable Visual Models From Natural Language Supervision 2021-02-26

  • WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training 2021-03-11

  • Open-vocabulary Object Detection via Vision and Language Knowledge Distillation 2021-04-28

  • CLIP2Video: Mastering Video-Text Retrieval via Image CLIP 2021-06-21

  • AudioCLIP: Extending CLIP to Image, Text and Audio 2021-06-24

  • Multimodal Few-Shot Learning with Frozen Language Models 2021-06-25

  • SimVLM: Simple Visual Language Model Pretraining with Weak Supervision 2021-08-24

  • LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs 2021-11-03

  • FILIP: Fine-grained Interactive Language-Image Pre-Training 2021-11-09

  • Florence: A New Foundation Model for Computer Vision 2021-11-22

2022(14)

  • Extract Free Dense Labels from CLIP 2021-12-02

  • FLAVA: A Foundational Language And Vision Alignment Model 2021-12-08

  • Image Segmentation Using Text and Image Prompts 2021-12-18

  • Scaling Open-Vocabulary Image Segmentation with Image-Level Labels 2021-12-22

  • GroupViT: Semantic Segmentation Emerges from Text Supervision 2022-02-22

  • CoCa: Contrastive Captioners are Image-Text Foundation Models 2022-05-04

  • Simple Open-Vocabulary Object Detection with Vision Transformers 2022-05-12

  • GIT: A Generative Image-to-text Transformer for Vision and Language 2022-05-27

  • Language Models are General-Purpose Interfaces 2022-06-13

  • Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone 2022-06-15

  • A Unified Sequence Interface for Vision Tasks 2022-06-15

  • BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning 2022-06-17

  • MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge 2022-06-17

  • LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action 2022-07-10

2023(82)

  • Masked Vision and Language Modeling for Multi-modal Representation Learning 2022-08-03

  • PaLI: A Jointly-Scaled Multilingual Language-Image Model 2022-09-14

  • VIMA: General Robot Manipulation with Multimodal Prompts 2022-10-06

  • Images Speak in Images: A Generalist Painter for In-Context Visual Learning 2022-12-05

  • InternVideo: General Video Foundation Models via Generative and Discriminative Learning 2022-12-07

  • Reproducible scaling laws for contrastive language-image learning 2022-12-14

  • Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks 2023-01-12

  • BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models 2023-01-30

  • Grounding Language Models to Images for Multimodal Inputs and Outputs 2023-01-31

  • Language Is Not All You Need: Aligning Perception with Language Models 2023-02-27

  • Prismer: A Vision-Language Model with An Ensemble of Experts 2023-03-04

  • PaLM-E: An Embodied Multimodal Language Model 2023-03-06

  • Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models 2023-03-08

  • Task and Motion Planning with Large Language Models for Object Rearrangement 2023-03-10

  • GPT-4 Technical Report 2023-03-15

  • EVA-02: A Visual Representation for Neon Genesis 2023-03-20

  • MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action 2023-03-20

  • Detecting Everything in the Open World: Towards Universal Object Detection 2023-03-21

  • Errors are Useful Prompts: Instruction Guided Task Programming with Verifier-Assisted Iterative Prompting 2023-03-24

  • EVA-CLIP: Improved Training Techniques for CLIP at Scale 2023-03-27

  • Unmasked Teacher: Towards Training-Efficient Video Foundation Models 2023-03-28

  • ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding with GPT and Prototype Guidance 2023-03-29

  • HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face 2023-03-30

  • ERRA: An Embodied Representation and Reasoning Architecture for Long-horizon Language-conditioned Manipulation Tasks 2023-04-05

  • Segment Anything 2023-04-05

  • SegGPT: Segmenting Everything In Context 2023-04-06

  • ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application 2023-04-08

  • Video ChatCaptioner: Towards Enriched Spatiotemporal Descriptions 2023-04-09

  • OpenAGI: When LLM Meets Domain Experts 2023-04-10

  • Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT 2023-04-10

  • Advancing Medical Imaging with Language Models: A Journey from N-grams to ChatGPT 2023-04-11

  • SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM 2023-04-12

  • Segment Everything Everywhere All at Once 2023-04-13

  • Visual Instruction Tuning 2023-04-17

  • Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models 2023-04-19

  • MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models 2023-04-20

  • Can GPT-4 Perform Neural Architecture Search? 2023-04-21

  • Evaluating ChatGPT's Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness 2023-04-23

  • Track Anything: Segment Anything Meets Videos 2023-04-24

  • Segment Anything in Medical Images 2023-04-24

  • Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation 2023-04-25

  • Learnable Ophthalmology SAM 2023-04-26

  • LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model 2023-04-28

  • Transfer Visual Prompt Generator across LLMs 2023-05-02

  • Caption Anything: Interactive Image Description with Diverse Multimodal Controls 2023-05-04

  • ImageBind: One Embedding Space To Bind Them All 2023-05-09

  • InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning 2023-05-11

  • Segment and Track Anything 2023-05-11

  • An Inverse Scaling Law for CLIP Training 2023-05-11

  • VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks 2023-05-18

  • Cream: Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models 2023-05-24

  • Voyager: An Open-Ended Embodied Agent with Large Language Models 2023-05-25

  • DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image Segmentation 2023-06-01

  • Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models 2023-06-08

  • Valley: Video Assistant with Large Language model Enhanced abilitY 2023-06-12

  • mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality 2023-04-27

  • Image Captioners Are Scalable Vision Learners Too 2023-06-13

  • XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models 2023-06-13

  • ViP: A Differentially Private Foundation Model for Computer Vision 2023-06-15

  • COSA: Concatenated Sample Pretrained Vision-Language Foundation Model 2023-06-15

  • LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models 2023-06-15

  • Segment Any Point Cloud Sequences by Distilling Vision Foundation Models 2023-06-15

  • RemoteCLIP: A Vision Language Foundation Model for Remote Sensing 2023-06-19

  • LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching 2023-06-20

  • Fast Segment Anything 2023-06-21

  • TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter 2023-06-22

  • 3DSAM-adapter: Holistic Adaptation of SAM from 2D to 3D for Promptable Medical Image Segmentation 2023-06-23

  • How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images 2023-06-23

  • Faster Segment Anything: Towards Lightweight SAM for Mobile Applications 2023-06-25

  • MedLSAM: Localize and Segment Anything Model for 3D Medical Images 2023-06-26

  • Kosmos-2: Grounding Multimodal Large Language Models to the World 2023-06-26

  • ViNT: A Foundation Model for Visual Navigation 2023-06-26

  • CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy 2023-06-27

  • Stone Needle: A General Multimodal Large-scale Model Framework towards Healthcare 2023-06-28

  • RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model 2023-06-28

  • Towards Language Models That Can See: Computer Vision Through the LENS of Natural Language 2023-06-28

  • Foundation Model for Endoscopy Video Analysis via Large-scale Self-supervised Pre-train 2023-06-29

  • MIS-FM: 3D Medical Image Segmentation using Foundation Models Pretrained on a Large-Scale Unannotated Dataset 2023-06-29

  • RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation 2023-07-03

  • SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation 2023-07-03

  • Segment Anything Meets Point Tracking 2023-07-03

  • BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs 2023-07-17

Follow the 【学姐带你玩AI】 account below 🚀🚀🚀

Reply "CV模型" to receive the full collection of papers + code for free.

Writing all this up takes effort, so likes, comments, and bookmarks are much appreciated!
