NeurIPS 2022 interesting papers

The NeurIPS 2022 decisions were announced recently, and I spent a few hours compiling the papers I found interesting. Collecting these took some effort, so a like would be appreciated.
Schedule link: https://nips.cc/Conferences/2022/Schedule?type=Poster

  • Differentially Private Model Compression
  • Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity
  • REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering
  • Dataset Inference for Self-Supervised Models
  • Neuron with Steady Response Leads to Better Generalization
  • Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning
  • Structural Pruning via Latency-Saliency Knapsack
  • INRAS: Implicit Neural Representation for Audio Scenes
  • What Makes a “Good” Data Augmentation in Knowledge Distillation - A Statistical Perspective
  • On Measuring Excess Capacity in Neural Networks
  • SIREN: Shaping Representations for OOD Detection
  • Distributionally robust weighted k-nearest neighbors
  • Effects of Data Geometry in Early Deep Learning
  • Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
  • Retaining Knowledge for Learning with Dynamic Definition
  • If Influence Functions are the Answer, Then What is the Question?
  • Learning sparse features can lead to overfitting in neural networks
  • VisFIS: Improved Visual Feature Importance Supervision with Right-for-Right-Reason Objectives
  • CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification
  • “Why Not Other Classes?”: Towards Class-Contrastive Back-Propagation Explanations
  • Self-Supervised Fair Representation Learning without Demographics
  • Fairness without Demographics through Knowledge Distillation
  • Towards Understanding the Condensation of Neural Networks at Initial Training
  • Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models
  • Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs
  • On Robust Multiclass Learnability
  • Neural Matching Fields: Implicit Representation of Matching Cost for Semantic Correspondence
  • Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination
  • Implicit Neural Representations with Levels-of-Experts
  • Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification
  • A Data-Augmentation Is Worth A Thousand Samples
  • Exploring Example Influence in Continual Learning
  • Bridge the Gap Between Architecture Spaces via A Cross-Domain Predictor
  • Deconfounded Representation Similarity for Comparison of Neural Networks
  • Training with More Confidence: Mitigating Injected and Natural Backdoors During Training
  • Understanding Neural Architecture Search: Convergence and Generalization
  • Understanding Robust Learning through the Lens of Representation Similarities
  • Efficient Dataset Distillation using Random Feature Approximation
  • Distilling Representations from GAN Generator via Squeeze and Span
  • AttCAT: Explaining Transformers via Attentive Class Activation Tokens
  • On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
  • Task Discovery: Finding the Tasks that Neural Networks Generalize on
  • Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
  • Improving Self-Supervised Learning by Characterizing Idealized Representations
  • Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation
  • Domain Generalization by Learning and Removing Domain-specific Features
  • Do We Really Need a Learnable Classifier at the End of Deep Neural Network?
  • Pruning has a disparate impact on model accuracy
  • Neural network architecture beyond width and depth
  • Explaining Graph Neural Networks with Structure-Aware Cooperative Games
  • TA-GATES: An Encoding Scheme for Neural Network Architectures
  • Is this the Right Neighborhood? Accurate and Query Efficient Model Agnostic Explanations
  • Respecting Transfer Gap in Knowledge Distillation
  • Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching
  • Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation
  • Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
  • 3DB: A Framework for Debugging Computer Vision Models
  • Redundant representations help generalization in wide neural networks
  • What You See is What You Classify: Black Box Attributions
  • Efficient identification of informative features in simulation-based inference
  • Best of Both Worlds Model Selection
  • Understanding Self-Supervised Graph Representation Learning from a Data-Centric Perspective
  • Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation
  • Does GNN Pretraining Help Molecular Representation?
  • Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP
  • GlanceNets: Interpretabile, Leak-proof Concept-based Models
  • Evolution of Neural Tangent Kernels under Benign and Adversarial Training
  • Learning to Scaffold: Optimizing Model Explanations for Teaching
  • Dataset Distillation using Neural Feature Regression
  • Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
  • Palm up: Playing in the Latent Manifold for Unsupervised Pretraining
  • Spherization Layer: Representation Using Only Angles
  • Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations
  • Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
  • Task-Agnostic Graph Explanations
  • On Neural Network Pruning’s Effect on Generalization
  • In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?
  • On the Symmetries of Deep Learning Models and their Internal Representations
  • Training Subset Selection for Weak Supervision
  • Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting
  • Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations
  • Meta-learning for Feature Selection with Hilbert-Schmidt Independence Criterion
  • GULP: a prediction-based metric between representations
  • Interpreting Operation Selection in Differentiable Architecture Search: A Perspective from Influence-Directed Explanations
  • Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers
  • Insights into Pre-training via Simpler Synthetic Tasks
  • Orient: Submodular Mutual Information Measures for Data Subset Selection under Distribution Shift
  • Knowledge Distillation: Bad Models Can Be Good Role Models
  • Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF
  • Weakly Supervised Representation Learning with Sparse Perturbations
  • Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
  • FedAvg with Fine Tuning: Local Updates Lead to Representation Learning
  • Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions
  • Efficient Architecture Search for Diverse Tasks
  • Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures
  • Reconstructing Training Data From Trained Neural Networks
  • Procedural Image Programs for Representation Learning
  • Where to Pay Attention in Sparse Training for Feature Selection?
  • Could Giant Pre-trained Image Models Extract Universal Representations?
  • On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity
  • Vision GNN: An Image is Worth Graph of Nodes
  • CLEAR: Generative Counterfactual Explanations on Graphs
  • Neural Basis Models for Interpretability
  • Exploring Linear Feature Scalability of Vision Transformer for Parameter-efficient Fine-tuning
  • Private Estimation with Public Data
  • Robust Testing in High-Dimensional Sparse Models
  • Dataset Factorization for Condensation
  • One Layer is All You Need
  • Improved Fine-Tuning by Better Leveraging Pre-Training Data
  • Don’t Throw Your Model Checkpoints Away
  • Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing