COLT 2020: A "Carnival" of Machine Learning Theory

COLT 2020 presents the latest research in deep learning, optimization, and machine learning theory, spanning topics from differentially private hypothesis selection to online learning, reinforcement learning, and gradient descent methods. The program also covers efficient algorithms for high-dimensional data, distributed systems, matrix problems, and convex optimization, along with strategies for learning under noise and privacy constraints.

Locally Private Hypothesis Selection

Sivakanth Gopi, Gautam Kamath, Janardhan D Kulkarni, Aleksandar Nikolov, Steven Wu, Huanyu Zhang

Differentially Private Mean Estimation of Heavy-Tailed Distributions

Gautam Kamath, Vikrant Singhal, Jonathan Ullman

An O(m/ε^3.5)-Cost Algorithm for Semidefinite Programs with Diagonal Constraints

Swati Padmanabhan, Yin Tat Lee

Adaptive Submodular Maximization under Stochastic Item Costs

Srinivasan Parthasarathy

Gradient descent algorithms for Bures-Wasserstein barycenters

Sinho Chewi, Philippe Rigollet, Tyler Maunu, Austin Stromme

On the gradient complexity of linear regression

Elad Hazan, Mark Braverman, Max Simchowitz, Blake E Woodworth

Improper Learning for Non-Stochastic Control

Max Simchowitz, Karan Singh, Elad Hazan

√n-Regret for Learning in Markov Decision Processes with Function Approximation and Low Bellman Rank

Kefan Dong, Jian Peng, Yining Wang, Yuan Zhou

No-Regret Prediction in Marginally Stable Systems

Udaya Ghai, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang

Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities

Jelena Diakonikolas

PAC learning with stable and private predictions

Yuval Dagan, Vitaly Feldman

Learning a Single Neuron with Gradient Methods

Gilad Yehudai, Ohad Shamir

Universal Approximation with Deep Narrow Networks

Patrick Kidger, Terry J Lyons

Asymptotic Errors for High-Dimensional Convex Penalized Linear Regression beyond Gaussian Matrices

Alia Abbara, Florent Krzakala, Cedric Gerbelot

On the Convergence of Stochastic Gradient Descent with Low-Rank Projections for Convex Low-Rank Matrix Problems

Dan Garber

From Nesterov’s Estimate Sequence to Riemannian Acceleration

Kwangjun Ahn, Suvrit Sra

Selfish Robustness and Equilibria in Multi-Player Bandits

Etienne Boursier, Vianney Perchet

How Good is SGD with Random Shuffling?

Itay M Safran, Ohad Shamir

Exploration by Optimisation in Partial Monitoring

Tor Lattimore, Csaba Szepesvari

Extending Learnability to Auxiliary-Input Cryptographic Primitives and Meta-PAC Learning

Mikito Nanashima

Noise-tolerant, Reliable Active Classification with Comparison Queries

Max Hopkins, Shachar Lovett, Daniel Kane, Gaurav Mahajan

Sharper Bounds for Uniformly Stable Algorithms

Olivier Bousquet, Yegor Klochkov, Nikita Zhivotovskiy

Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes

Alekh Agarwal, Sham Kakade, Jason Lee, Gaurav Mahajan

Consistent recovery threshold of hidden nearest neighbor graphs

Jian Ding, Yihong Wu, Jiaming Xu, Dana Yang

High probability guarantees for stochastic convex optimization

Damek Davis, Dmitriy Drusvyatskiy

Information Directed Sampling for Linear Partial Monitoring

Johannes Kirschner, Tor Lattimore, Andreas Krause

ID3 Learns Juntas for Smoothed Product Distributions

Eran Malach, Amit Daniely, Alon Brutzkus

Tight Lower Bounds for Combinatorial Multi-Armed Bandits

Nadav Merlis, Shie Mannor

Domain Compression and its Application to Randomness-Optimal Distributed Goodness-of-Fit

Jayadev Acharya, Clement L Canonne, Yanjun Han, Ziteng Sun, Himanshu Tyagi

Reasoning About Generalization via Conditional Mutual Information

Thomas Steinke, Lydia Zakynthinou

A Greedy Anytime Algorithm for Sparse PCA

Dan Vilenchik, Adam Soffer, Guy Holtzman

Logsmooth Gradient Concentration and Tighter Runtimes for Metropolized Hamiltonian Monte Carlo

Yin Tat Lee, Ruoqi Shen, Kevin Tian

Provably Efficient Reinforcement Learning with Linear Function Approximation

Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael Jordan

A Fast Spectral Algorithm for Mean Estimation with Sub-Gaussian Rates

Zhixian Lei, Kyle Luh, Prayaag Venkat, Fred Zhang

How to trap a gradient flow

Dan Mikulincer, Sebastien Bubeck

Near-Optimal Algorithms for Minimax Optimization

Tianyi Lin, Chi Jin, Michael Jordan

Model-Based Reinforcement Learning with a Generative Model is Minimax Optimal

Alekh Agarwal, Sham Kakade, Lin Yang

Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise

Maksim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic, Hoi-To Wai

Fast Rates for Online Prediction with Abstention

Gergely Neu, Nikita Zhivotovskiy

Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes

Yichun Hu, Nathan Kallus, Xiaojie Mao

Data-driven confidence bands for distributed nonparametric regression

Valeriy Avanesov

Tsallis-INF for Decoupled Exploration and Exploitation in Multi-armed Bandits

Chloé Rouyer, Yevgeny Seldin

Pan-Private Uniformity Testing

Kareem Amin, Matthew Joseph, Jieming Mao

ODE-Inspired Analysis for the Biological Version of Oja’s Rule in Solving Streaming PCA

Mien Brabeeba Wang, Chi-Ning Chou

Complexity Guarantees for Polyak Steps with Momentum

Mathieu Barre, Adrien B Taylor, Alexandre d’Aspremont

Calibrated Surrogate Losses for Adversarially Robust Classification

Han Bao, Clayton Scott, Masashi Sugiyama

Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond

Oliver Hinder, Aaron Sidford, Nimit S Sohoni

Faster Projection-free Online Learning

Edgar Minasyan, Elad Hazan

Non-Stochastic Multi-Player Multi-Armed Bandits: Optimal Rate With Collision Information, Sublinear Without

Sebastien Bubeck, Yuanzhi Li, Yuval Peres, Mark Sellke

Coordination without communication: optimal regret in two players multi-armed bandits

Sebastien Bubeck, Thomas Budzinski

EM Algorithm is Sample-Optimal for Learning Mixtures of Well-Separated Gaussians

Jeongyeol Kwon, Constantine Caramanis

Online Learning with Vector Costs and Bandits with Knapsacks

Thomas Kesselheim, Sahil Singla

Better Algorithms for Estimating Non-Parametric Models in Crowd-Sourcing and Rank Aggregation

Allen X Liu, Ankur Moitra

Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding

Xiaotong Yuan, Ping Li

Learning Halfspaces with Massart Noise Under Structured Distributions

Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion

William C Franks, Ankur Moitra

Active Learning for Identification of Linear Dynamical Systems

Andrew J Wagenmaker, Kevin Jamieson

Bounds in query learning

Hunter S Chase, James Freitag

Active Local Learning

Arturs Backurs, Avrim Blum, Neha Gupta

Kernel and Rich Regimes in Overparametrized Models

Blake E Woodworth, Suriya Gunasekar, Jason Lee, Edward Moroshko, Pedro Henrique Pamplona Savarese, Itay Golan, Daniel Soudry, Nathan Srebro

Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium

Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang

Finite-Time Analysis of Asynchronous Stochastic Approximation and Q-Learning

Guannan Qu, Adam Wierman

Parallels Between Phase Transitions and Circuit Complexity?

Colin P Sandon, Ankur Moitra, Elchanan Mossel

Hierarchical Clustering: A 0.585 Revenue Approximation

Noga Alon, Yossi Azar, Danny Vainstein

Gradient descent follows the regularization path for general losses

Ziwei Ji, Miroslav Dudik, Robert Schapire, Matus Telgarsky

Bessel Smoothing and Multi-Distribution Property Estimation

Yi Hao, Ping Li

Privately Learning Thresholds: Closing the Exponential Gap

Uri Stemmer, Moni Naor, Haim Kaplan, Yishay Mansour, Katrina Ligett

Pessimism About Unknown Unknowns Inspires Conservatism

Michael K Cohen, Marcus Hutter

The Influence of Shape Constraints on the Thresholding Bandit Problem

James Cheshire, Pierre Menard, Alexandra Carpentier

Finite Regret and Cycles with Fixed Step-Size via Alternating Gradient Descent-Ascent

James P Bailey, Gauthier Gidel, Georgios Piliouras

Efficient and robust algorithms for adversarial linear contextual bandits

Gergely Neu, Julia Olkhovskaya

Non-asymptotic Analysis for Nonparametric Testing

Yun Yang, Zuofeng Shang, Guang Cheng

Tree-projected gradient descent for estimating gradient-sparse parameters on graphs

Sheng Xu, Zhou Fan, Sahand Negahban

Covariance-adapting algorithm for semi-bandits with application to sparse rewards

Pierre Perrault, Vianney Perchet, Michal Valko

Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems

Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar

A Corrective View of Neural Networks: Representation, Memorization and Learning

Dheeraj M Nagaraj, Guy Bresler

Learning Entangled Single-Sample Gaussians in the Subset-of-Signals Model

Yingyu Liang, Hui Yuan

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss

Lénaïc Chizat, Francis Bach

Dimension-Free Bounds for Chasing Convex Functions

Guru Guruganesh, Anupam Gupta, Charles Argue

Optimal group testing

Oliver Gebhard, Philipp Loick, Maximilian Hahn-Klimroth, Amin Coja-Oghlan

On Suboptimality of Least Squares with Application to Estimation of Convex Bodies

Gil Kur, Alexander Rakhlin, Adityanand Guntuboyina

A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere

Marco Schmalhofer

A Closer Look at Small-loss Bounds for Bandits with Graph Feedback

Chung-Wei Lee, Haipeng Luo, Mengxiao Zhang

Extrapolating the profile of a finite population

Yihong Wu, Yury Polyanskiy, Soham Jana

Balancing Gaussian vectors in high dimension

Paxton M Turner, Raghu Meka, Philippe Rigollet

On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels

Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai

From tree matching to sparse graph alignment

Luca Ganassali, Laurent Massoulie

Estimating Principal Components under Adversarial Perturbations

Pranjal Awasthi, Xue Chen, Aravindan Vijayaraghavan

Logistic Regression Regret: What’s the Catch?

Gil I Shamir

Efficient, Noise-Tolerant, and Private Learning via Boosting

Mark Bun, Marco L Carmosino, Jessica Sorrell

Highly smooth minimization of non-smooth problems

Brian Bullins

Proper Learning, Helly Number, and an Optimal SVM Bound

Olivier Bousquet, Steve Hanneke, Shay Moran, Nikita Zhivotovskiy

Efficient Parameter Estimation of Truncated Boolean Product Distributions

Dimitris Fotakis, Alkis Kalavasis, Christos Tzamos

Estimation and Inference with Trees and Forests in High Dimensions

Vasilis Syrgkanis, Emmanouil Zampetakis

Closure Properties for Private Classification and Online Prediction

Noga Alon, Amos Beimel, Shay Moran, Uri Stemmer

New Potential-Based Bounds for Prediction with Expert Advice

Vladimir A Kobzar, Robert Kohn, Zhilei Wang

Distributed Signal Detection under Communication Constraints

Jayadev Acharya, Clement L Canonne, Himanshu Tyagi

Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models

Antonio Blanca, Zongchen Chen, Eric Vigoda, Daniel Stefankovic

Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks

Ilias Diakonikolas, Daniel M Kane, Vasilis Kontonis, Nikos Zarifis

Costly Zero Order Oracles

Renato Paes Leme, Jon Schneider

Precise Tradeoffs in Adversarial Training for Linear Regression

Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani

Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity

Pritish Kamath, Omar Montasser, Nathan Srebro

Wasserstein Control of Mirror Langevin Monte Carlo

Kelvin Shuangjian Zhang, Gabriel Peyré, Jalal Fadili, Marcelo Pereyra

The estimation error of general first order methods

Michael V Celentano, Andrea Montanari, Yuchen Wu

Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations

Yossi Arjevani, Yair Carmon, John Duchi, Dylan Foster, Ayush Sekhari, Karthik Sridharan

Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process

Guy Blanc, Neha Gupta, Gregory Valiant, Paul Valiant

Free Energy Wells and Overlap Gap Property in Sparse PCA

Ilias Zadik, Alexander S. Wein, Gerard Ben Arous

Robust causal inference under covariate shift via worst-case subpopulation treatment effects

Sookyo Jeong, Hongseok Namkoong

Winnowing with Gradient Descent

Ehsan Amid, Manfred K. Warmuth

Embedding Dimension of Polyhedral Losses

Jessica J Finocchiaro, Rafael Frongillo, Bo Waggoner

List Decodable Subspace Recovery

Morris Yau, Prasad Raghavendra

Approximation Schemes for ReLU Regression

Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam Klivans, Mahdi Soltanolkotabi

Learning Over-parametrized Two-layer ReLU Neural Networks beyond NTK

Yuanzhi Li, Tengyu Ma, Hongyang R Zhang

Fine-grained Analysis for Linear Stochastic Approximation with Averaging: Polyak-Ruppert, Non-asymptotic Concentration and Beyond

Wenlong Mou, Chris Junchi Li, Martin Wainwright, Peter Bartlett, Michael Jordan

Learning Polynomials in Few Relevant Dimensions

Sitan Chen, Raghu Meka

Efficient improper learning for online logistic regression

Pierre Gaillard, Rémi Jézéquel, Alessandro Rudi

Lipschitz and Comparator-Norm Adaptivity in Online Learning

Zakaria Mhammedi, Wouter M Koolen

Information Theoretic Optimal Learning of Gaussian Graphical Models

Sidhant Misra, Marc D Vuffray, Andrey Lokhov

Reducibility and Statistical-Computational Gaps from Secret Leakage

Matthew S Brennan, Guy Bresler

Taking a hint: How to leverage loss predictors in contextual bandits?

Chen-Yu Wei, Haipeng Luo, Alekh Agarwal
