Livestream Preview | ICML Session 4


AI TIME welcomes every AI enthusiast to join us!

September 16, 15:00–21:00

AI TIME has invited several PhD students to bring you ICML Session 4!

Bilibili livestream channel

Scan the QR code to follow the official AI TIME Bilibili account and watch the livestream

Link: https://live.bilibili.com/21813994

15:00-17:00

★ Speakers ★

朱鑫祺

Third-year PhD student at the University of Sydney, conducting research on disentangled representation learning and computer vision under the supervision of Prof. Dacheng Tao and Dr. Chang Xu.

Talk title:

Disentanglement Learning with Commutative Lie Group Variational Autoencoders

Abstract:

We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data. Traditionally, such a structure is fixed to be a vector space with data variations represented by translations along individual latent dimensions. We argue this simple structure is suboptimal, since it requires the model to learn to discard the properties (e.g., different scales of change, different levels of abstractness) of data variations, which is extra work beyond equivariance learning. Instead, we propose to encode the data variations with groups, a structure that can not only equivariantly represent variations but can also be adaptively optimized to preserve the properties of data variations. Since it is hard to train directly on group structures, we focus on Lie groups and adopt a parameterization using the Lie algebra. Based on this parameterization, some disentanglement learning constraints are naturally derived. A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning. Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
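To make the group-based latent concrete, here is a minimal PyTorch sketch of how such a Lie-algebra parameterization could look: each latent dimension scales a learned generator matrix, the matrix exponential maps the result to a group element, and a penalty term encourages the generators to commute. All class and parameter names below are our own illustration, not the authors' released implementation, which wraps this idea in a full VAE encoder/decoder with further constraints.

```python
import torch
import torch.nn as nn

class CommutativeLieGroupLatent(nn.Module):
    """Group-structured latent: each latent dimension t_i scales a learned
    Lie-algebra generator A_i, and the group element is g = exp(sum_i t_i A_i)."""

    def __init__(self, n_factors: int, rep_dim: int):
        super().__init__()
        # One learned rep_dim x rep_dim generator per latent factor.
        self.basis = nn.Parameter(0.01 * torch.randn(n_factors, rep_dim, rep_dim))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch, n_factors) code sampled from the VAE posterior.
        algebra = torch.einsum('bk,kij->bij', t, self.basis)  # Lie-algebra element
        return torch.matrix_exp(algebra)                       # group element acting on features

    def commutativity_penalty(self) -> torch.Tensor:
        # Drives the commutators [A_i, A_j] = A_i A_j - A_j A_i towards zero.
        AB = torch.einsum('iab,jbc->ijac', self.basis, self.basis)
        BA = torch.einsum('jab,ibc->ijac', self.basis, self.basis)
        return ((AB - BA) ** 2).sum()
```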

陈晓晖

First-year PhD student at Tufts University, studying generative modeling and graph learning under the supervision of Prof. Liping Liu and Prof. Michael Hughes.

Talk title:

Modeling the Node Generation Order in Autoregressive Graph Generative Models

Abstract:

A graph generative model defines a distribution over graphs. One type of generative model is constructed by autoregressive neural networks, which sequentially add nodes and edges to generate a graph. However, the likelihood of a graph under the autoregressive model is intractable, as there are numerous sequences leading to the given graph; this makes maximum likelihood estimation challenging. Instead, in this work we derive the exact joint probability over the graph and the node ordering of the sequential process. From the joint, we approximately marginalize out the node orderings and compute a lower bound on the log-likelihood using variational inference. We train graph generative models by maximizing this bound, without using the ad-hoc node orderings of previous methods. Our experiments show that the log-likelihood bound is significantly tighter than the bound of previous schemes. Moreover, the models fitted with the proposed algorithm can generate high-quality graphs that match the structures of target graphs not seen during training. We have made our code publicly available at https://github.com/tufts-ml/graph-generation-vi.
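As a rough illustration of the training objective, the snippet below estimates the variational lower bound log p(G) >= E_{q(pi|G)}[log p(G, pi) - log q(pi|G)] by sampling node orderings. The callbacks `log_p_joint`, `log_q_ordering`, and `sample_ordering` are placeholders standing in for the autoregressive generator and the inference network; see the authors' repository linked above for the actual implementation.

```python
import torch

def ordering_elbo(log_p_joint, log_q_ordering, sample_ordering, graph, n_samples=8):
    """Monte-Carlo estimate of the bound
        log p(G) >= E_{q(pi|G)}[ log p(G, pi) - log q(pi|G) ].
    The three callbacks are placeholders for the autoregressive generator
    (joint likelihood of graph and ordering) and the inference network
    (ordering posterior and its sampler)."""
    terms = []
    for _ in range(n_samples):
        pi = sample_ordering(graph)                              # pi ~ q(pi | G)
        terms.append(log_p_joint(graph, pi) - log_q_ordering(pi, graph))
    return torch.stack(terms).mean()                             # averaged lower-bound estimate
```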

张智杰

Fifth-year PhD student at the Institute of Computing Technology, Chinese Academy of Sciences, advised by Prof. 张家琳. Research interests include combinatorial optimization, approximation algorithms, and machine learning; recent topics include submodular optimization and influence maximization.

Talk title:

Network Inference and Data-Driven Influence Maximization

Abstract:

Influence maximization is the task of selecting a small number of seed nodes in a social network to maximize the spread of influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, the whole social network as well as its diffusion parameters is given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.
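For context, the canonical baseline that the IMS setting builds on is greedy seed selection with Monte-Carlo estimates of expected spread under the independent cascade model. The sketch below implements that baseline with a single global activation probability for brevity; in the sampling setting of the talk, the graph and its diffusion parameters would first have to be inferred from the observed cascades, which is the paper's contribution rather than what is shown here.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte-Carlo run of the independent cascade model. `graph` maps each
    node to a list of out-neighbours; every live edge fires with probability p
    (a single global probability, for brevity)."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        newly_activated = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    newly_activated.append(v)
        frontier = newly_activated
    return len(active)

def greedy_im(graph, k, p=0.1, n_sims=200):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal gain in expected spread."""
    seeds = []
    for _ in range(k):
        candidates = [v for v in graph if v not in seeds]
        best = max(candidates,
                   key=lambda v: sum(simulate_ic(graph, seeds + [v], p)
                                     for _ in range(n_sims)) / n_sims)
        seeds.append(best)
    return seeds
```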

19:30-21:00

杨智勇

Received his PhD from the Institute of Information Engineering, Chinese Academy of Sciences, and is currently a postdoctoral researcher at the University of Chinese Academy of Sciences. His main research directions are AUC optimization, multi-task learning, and machine learning theory. He has published 7 first-author papers in CCF-A journals and conferences such as ICML, NeurIPS, and T-PAMI. He serves as a PC member for conferences including ICML, NeurIPS, ICLR, AAAI, and IJCAI, as a senior PC member for IJCAI 2021, and as a reviewer for international journals such as T-PAMI and T-IP. He was selected for the Postdoctoral Innovative Talent Support Program (博新计划) and Baidu's list of the top 100 Chinese rising stars in AI, and his honors include a Baidu Scholarship global top-20 nomination, the CAS President's Special Award, and being named a NeurIPS top 10% reviewer.

Talk title:

An End-to-End Optimization Method for the TPAUC Metric

Abstract:

The Area Under the ROC Curve (AUC) is a crucial metric for machine learning, which evaluates the average performance over all possible True Positive Rates (TPRs) and False Positive Rates (FPRs). Based on the knowledge that a skillful classifier should simultaneously embrace a high TPR and a low FPR, we turn to study a more general variant called Two-way Partial AUC (TPAUC), where only the region with TPR ≥ α and FPR ≤ β is included in the area. Moreover, a recent work shows that the TPAUC is essentially inconsistent with the existing Partial AUC metrics, where only the FPR range is restricted, opening a new problem of seeking solutions to leverage high TPAUC. Motivated by this, we present the first trial in this paper to optimize this new metric. The critical challenge along this course lies in the difficulty of performing gradient-based optimization with end-to-end stochastic training, even with a proper choice of surrogate loss. To address this issue, we propose a generic framework to construct surrogate optimization problems, which supports efficient end-to-end training with deep learning. Moreover, our theoretical analyses show that: 1) the objective function of the surrogate problems will achieve an upper bound of the original problem under mild conditions, and 2) optimizing the surrogate problems leads to good generalization performance in terms of TPAUC with high probability. Finally, empirical studies over several benchmark datasets speak to the efficacy of our framework.
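A naive way to see why TPAUC calls for a dedicated surrogate is to write its pairwise form directly: only the hardest positives and hardest negatives enter the loss. The sketch below does exactly that with a hard top-k selection and a squared-hinge surrogate; it is our own simplification for illustration, whereas the talk's framework replaces the non-differentiable selection with a smooth reformulation suited to end-to-end stochastic training.

```python
import torch

def tpauc_surrogate(pos_scores, neg_scores, alpha=0.5, beta=0.5, margin=1.0):
    """Naive differentiable surrogate for TPAUC (region TPR >= alpha, FPR <= beta):
    only the hardest (1 - alpha) fraction of positives (lowest scores) and the
    hardest beta fraction of negatives (highest scores) enter the pairwise loss."""
    k_pos = max(1, int((1 - alpha) * pos_scores.numel()))
    k_neg = max(1, int(beta * neg_scores.numel()))
    hard_pos = torch.topk(pos_scores, k_pos, largest=False).values   # low-scoring positives
    hard_neg = torch.topk(neg_scores, k_neg, largest=True).values    # high-scoring negatives
    diff = margin - (hard_pos.unsqueeze(1) - hard_neg.unsqueeze(0))  # pairwise margins
    return torch.clamp(diff, min=0).pow(2).mean()                    # squared-hinge surrogate
```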

沈广宇

Second-year PhD student in the Department of Computer Science at Purdue University, conducting research on neural network security in Prof. Xiangyu Zhang's group, covering adversarial attacks, backdoor attacks, and defenses.

Talk title:

Neural Network Backdoor Scanning via Multi-Armed Bandit Optimization

Abstract:

Backdoor attack poses a severe threat to deep learning systems. It injects hidden malicious behaviors into a model such that any input stamped with a special pattern can trigger such behaviors. Detecting backdoors is hence of pressing need. Many existing defense techniques use optimization to generate the smallest input pattern that forces the model to misclassify a set of benign inputs injected with the pattern to a target label. However, the complexity is quadratic in the number of class labels, such that they can hardly handle models with many classes. Inspired by Multi-Arm Bandit in Reinforcement Learning, we propose a K-Arm optimization method for backdoor detection. By iteratively and stochastically selecting the most promising labels for optimization with the guidance of an objective function, we substantially reduce the complexity, allowing us to handle models with many classes. Moreover, by iteratively refining the selection of labels to optimize, it substantially mitigates the uncertainty in choosing the right labels, improving detection accuracy. At the time of submission, the evaluation of our method on over 4000 models in the IARPA TrojAI competition from round 1 to the latest round 4 achieves top performance on the leaderboard. Our technique also supersedes five state-of-the-art techniques in terms of accuracy and the scanning time needed. The code of our work is available at https://github.com/PurduePAML/K-ARM_Backdoor_Optimization
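The bandit view can be illustrated with a few lines of Python: each candidate target label is an arm, pulling an arm runs one step of trigger optimization for that label, and the scanner keeps spending budget on the most promising labels. The epsilon-greedy rule and the `optimize_step` callback below are our own placeholders; the released K-Arm code uses a more careful objective-guided selection rule.

```python
import random

def k_arm_scan(labels, optimize_step, rounds=200, epsilon=0.3):
    """Epsilon-greedy sketch of bandit-style label selection for backdoor scanning.
    `optimize_step(label)` is a placeholder that runs one trigger-optimization step
    for that target label and returns the current objective value (e.g., trigger
    size); lower values are more suspicious."""
    best = {lab: float('inf') for lab in labels}
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.choice(labels)                      # explore a random label
        else:
            arm = min(labels, key=lambda lab: best[lab])     # exploit the most promising label
        best[arm] = min(best[arm], optimize_step(arm))       # pull the arm, record best objective
    suspect = min(labels, key=lambda lab: best[lab])         # most suspicious target label
    return suspect, best
```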

闫雪

闫雪 is a first-year PhD student at the Institute of Automation, Chinese Academy of Sciences, with research interests in machine learning and multi-agent evaluation.

Talk title:

Efficient Multi-Agent Strategy Evaluation via Low-Rank Matrix Completion

Abstract:

Multi-agent evaluation aims at the assessment of an agent's strategy on the basis of interaction with others. Typically, existing methods such as α-rank and its approximation still require exhaustively comparing all pairs of joint strategies for an accurate ranking, which in practice is computationally expensive. In this paper, we intend to reduce the number of pairwise comparisons needed to recover a satisfactory ranking of the players. We exploit the fact that agents with similar skills may achieve similar payoffs against others, as evidenced by our experiments. Two situations are considered: the first is when we can obtain the true payoffs (e.g., noise-free evaluation); the other is when we can only access noisy payoff observations (e.g., noisy evaluation). Based on these formulations, we leverage low-rank matrix completion and design two novel algorithms for the noise-free and noisy evaluations, respectively. For both settings, we derive the number of comparisons required, in terms of the number of agents and the rank of the payoff matrix, to achieve sufficiently good evaluation performance. Empirical results on evaluating the players in three synthetic games and twelve real-world games from OpenSpiel demonstrate that payoff evaluation of only a few strategy pairs can lead to performance comparable to algorithms that know the complete payoff matrix.
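As a generic illustration of the completion step, the sketch below fills in a payoff matrix from a subset of evaluated strategy pairs using alternating least squares. It is a standard low-rank completion routine written for this preview, not the specific noise-free/noisy algorithms presented in the talk.

```python
import numpy as np

def complete_payoff(observed, mask, rank=4, n_iters=200, lam=1e-2, seed=0):
    """Alternating-least-squares sketch of low-rank payoff matrix completion.
    `observed` is an n x n matrix; only entries with mask == 1 (the evaluated
    strategy pairs) are fitted, and the low-rank factors fill in the rest."""
    rng = np.random.default_rng(seed)
    n = observed.shape[0]
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(n_iters):
        for i in range(n):                                   # update row factors
            cols = np.flatnonzero(mask[i])
            if cols.size:
                Vc = V[cols]
                U[i] = np.linalg.solve(Vc.T @ Vc + lam * np.eye(rank),
                                       Vc.T @ observed[i, cols])
        for j in range(n):                                   # update column factors
            rows = np.flatnonzero(mask[:, j])
            if rows.size:
                Ur = U[rows]
                V[j] = np.linalg.solve(Ur.T @ Ur + lam * np.eye(rank),
                                       Ur.T @ observed[rows, j])
    return U @ V.T                                            # completed payoff estimate
```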

After the livestream, the speakers will join a WeChat group to answer questions and chat with the audience. Please add the AI TIME assistant (WeChat ID: AITIME_HY) and reply "icml" to be added to the "AI TIME ICML Discussion Group"!

AI TIME WeChat assistant

Organizer: AI TIME

Media partners: 学术头条, AI 数据派

Partners: 智谱·AI, 中国工程院知领直播, 学堂在线, 学术头条, biendata, Ever链动

AI TIME welcomes submissions from scholars in the AI field and looks forward to contributions that examine the discipline's historical development and frontier technologies. For hot topics, we will invite experts to discuss them together. We are also recruiting excellent contributors on an ongoing basis: a top platform needs top people like you.

Please send your resume and related information to yun.he@aminer.cn!

WeChat contact: AITIME_HY

AI TIME is a community founded by young scholars from the Department of Computer Science at Tsinghua University who care about the development of artificial intelligence and hold intellectual ideals. It aims to promote the spirit of scientific inquiry and debate, inviting people from all fields to explore the fundamental questions of AI theory, algorithms, scenarios, and applications, to encourage the collision of ideas, and to build a hub for knowledge sharing.

Scan the QR code to follow us for more updates
