Daily Notes - 1

A certain cutie said: "Even if you write it, where would I find the time to read it? And you act as if I simply must. Hmph, then I just won't. Who are you to boss me around? I can't even get through my own reading, so I'm certainly not reading your stuff. Besides, if your mom and dad found out, they'd blame me for wasting your time."
Fine, fine. I'll just write it down here, then. Read it if you like, cutie, and if not, don't~

Adversarial Machine Learning

  1. Adversarial examples are malicious inputs designed to fool machine learning models.

Haha, now I finally know why the spam texts and spam emails I receive are all written in "Martian script" (deliberately mangled characters) hahahaha
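
To make "designed to fool" concrete, here is a minimal sketch of crafting an adversarial example with the fast gradient sign method (FGSM) against a toy logistic-regression model. Everything here is invented for illustration: the weights `w`, the input `x`, and the budget `eps`. FGSM is just one standard attack, not the method of any particular paper quoted here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear model: p(y=1 | x) = sigmoid(w @ x + b). Weights are made up.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps=0.3):
    """FGSM: move x by eps per feature in the direction that most
    increases the loss for the true label y."""
    p = predict(x)
    grad_x = (p - y) * w          # gradient of the logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.0, 0.0])     # clean input with true label y = 1
x_adv = fgsm(x, y=1.0)

print("clean score:      ", predict(x))      # ~0.82 -> classified as 1
print("adversarial score:", predict(x_adv))  # ~0.43 -> flipped to 0
```

The perturbation is small and bounded per feature, yet it flips the decision; that is exactly the property spammers exploit with mangled characters.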

  1. Notice that adversarial problems cannot simply be solved by learners that account for concept drift: while these learners allow the data-generating process to change over time, they do not allow this change to be a function of the classifier itself.
  2. Most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases available for mining today violate this assumption.
    The classifier's and the adversary's expected utilities, and the empirical estimate of the classifier's utility over a test set $\mathcal{T}$, are (a toy numeric sketch follows this list):
    $$U_{\mathcal{C}}=\sum_{(x, y) \in \mathcal{X} \mathcal{Y}} P(x, y)\left[U_{C}(\mathcal{C}(\mathcal{A}(x)), y)-\sum_{X_{i} \in \mathcal{X}_{\mathcal{C}}(x)} V_{i}\right]$$
    $$U_{\mathcal{A}}=\sum_{(x, y) \in \mathcal{X} \mathcal{Y}} P(x, y)\left[U_{A}(\mathcal{C}(\mathcal{A}(x)), y)-W(x, \mathcal{A}(x))\right]$$
    $$U_{\mathcal{C}}=(1 /|\mathcal{T}|) \sum_{(x, y) \in \mathcal{T}}\left[U_{C}(\mathcal{C}(\mathcal{A}(x)), y)-\sum_{X_{i} \in \mathcal{X}_{\mathcal{C}}(x)} V_{i}\right]$$
  3. Given the two players, the actions available to each, and the payoffs from each combination of actions, classical game theory is concerned with finding a combination of strategies such that neither player can gain by unilaterally changing its strategy. This combination is known as a Nash equilibrium. In our case, the actions are classifiers $\mathcal{C}$ and feature change strategies $\mathcal{A}$, and the payoffs are $U_{\mathcal{C}}$ and $U_{\mathcal{A}}$. As the following theorem shows, some realizations of the adversarial classification game always have a Nash equilibrium.
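
As promised, a toy numeric sketch of the empirical utilities above, under invented assumptions: the payoff tables, the per-feature measurement cost `V`, the per-change cost `W`, the threshold rule in `classify`, and the tampering strategy in `attack` are all made up; in a real application (e.g., spam filtering) they come from the domain.

```python
# Payoffs indexed by (predicted label, true label); 1 = spam.
UC_pay = {(1, 1): 1.0, (0, 0): 1.0, (1, 0): -10.0, (0, 1): -1.0}
UA_pay = {(0, 1): 1.0, (1, 1): -1.0, (0, 0): 0.0, (1, 0): 0.0}

V = 0.1  # classifier's cost per measured feature (a constant V_i)
W = 0.2  # adversary's cost per modified feature

def classify(x):
    # C: measures both features and flags the instance if their sum is high.
    return 1 if x[0] + x[1] >= 1.0 else 0

def attack(x, y):
    # A: tampers only with spam, lowering the first feature.
    if y == 1:
        return (max(x[0] - 0.6, 0.0), x[1]), 1  # modified x, 1 feature changed
    return x, 0

# Tiny test set T of ((x1, x2), y) pairs.
T = [((0.8, 0.7), 1), ((0.9, 0.6), 1), ((0.1, 0.2), 0), ((0.3, 0.1), 0)]

uc = ua = 0.0
for x, y in T:
    x_mod, n_changed = attack(x, y)
    c = classify(x_mod)
    uc += UC_pay[(c, y)] - 2 * V          # C pays V for each of 2 features
    ua += UA_pay[(c, y)] - n_changed * W  # A pays W per changed feature

print("empirical U_C:", uc / len(T))  # -0.2: the attack hurts the classifier
print("empirical U_A:", ua / len(T))  #  0.4: and pays off for the adversary
```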

THEOREM. Consider a classification game with a binary cost model for ADVERSARY, i.e., given a pair of instances $x$ and $x'$, ADVERSARY can either change $x$ to $x'$ (incurring a unit cost) or it cannot (the cost is infinite). This game always has a Nash equilibrium, which can be found in time polynomial in the number of instances.
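
The theorem's polynomial-time construction is specific to the binary-cost game and is not reproduced here. As a generic illustration of the definition itself, here is a brute-force search for pure-strategy Nash equilibria over a tiny invented payoff bimatrix; note that a general game may have only mixed equilibria, which this sketch would miss.

```python
import numpy as np

# Rows = CLASSIFIER strategies, columns = ADVERSARY strategies.
# Both payoff matrices are invented purely to illustrate the definition.
U_C = np.array([[3, 1],
                [0, 2]])
U_A = np.array([[0, 1],
                [2, 3]])

def pure_nash(UC, UA):
    """(i, j) is a Nash equilibrium iff neither player can gain by
    unilaterally deviating from it."""
    return [(i, j)
            for i in range(UC.shape[0])
            for j in range(UC.shape[1])
            if UC[i, j] >= UC[:, j].max()     # row player can't improve in column j
            and UA[i, j] >= UA[i, :].max()]   # column player can't improve in row i

print(pure_nash(U_C, U_A))  # -> [(1, 1)]
```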
