TopK Activation Function

I recently read a paper on the TopK activation function [1] and implemented it while I was at it. In my tests, convergence, running speed, and final accuracy are essentially on par with ReLU. One advantage of the TopK activation is that the number of active units is fixed: it avoids dead units, and the sparsity of the feature vector can be controlled directly, whereas with ReLU it cannot. I also implemented a TopK-based pooling, which is comparatively slow. Feel free to use these as a reference.
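
As a quick illustration of that difference (this snippet is my own, not from the paper), compare the two activations' nonzero counts: TopK always keeps exactly k entries per sample, while the number ReLU keeps depends on the sign pattern of the input.

import torch

x = torch.tensor([[0.9, -0.3, 0.2, -1.5]])
relu_out = torch.relu(x)                                   # keeps whatever happens to be positive
_, idx = torch.topk(x, 2, dim=1)                           # always selects exactly k = 2 entries
topk_out = torch.zeros_like(x).scatter_(1, idx, 1.0) * x
print((relu_out != 0).sum().item())                        # 2 for this input, but input-dependent
print((topk_out != 0).sum().item())                        # always 2

The full implementation: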

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKLU(nn.Module):
    """TopK activation: keep the top-k channels per sample, zero the rest."""

    def __init__(self, active_ratio=0.5):
        super(TopKLU, self).__init__()
        self.active_ratio = active_ratio

    def forward(self, x):
        size = x.size()
        # Number of channels to keep; at least one must stay active.
        topk = max(1, int(size[1] * self.active_ratio))

        with torch.no_grad():
            # Build a binary mask with ones at the top-k positions along dim 1.
            z = torch.zeros_like(x)
            _, indices = torch.topk(x, topk, dim=1, largest=True, sorted=False)
            z.scatter_(1, indices, 1)

        # Gradients flow through x only; the mask itself is non-differentiable.
        return z * x

class TopKPool2d(nn.Module):
    """TopK 'pooling': within each sliding window, keep only the top-k entries
    per channel. The output has the same spatial size as the input; this masks
    values rather than downsampling."""

    def __init__(self, active_ratio=0.5, kernel_size=3, stride=1, padding=0):
        super(TopKPool2d, self).__init__()
        assert kernel_size >= 2, "kernel_size should be >= 2"
        assert stride >= 1, "stride should be >= 1"

        self.kernel_size = kernel_size
        self.padding = padding
        self.stride = stride

        # Entries to keep per (window, channel); at least one.
        self.topk = max(1, int(active_ratio * kernel_size * kernel_size))

    def forward(self, x):
        size = x.size()
        with torch.no_grad():
            # (N, C*k*k, L): each column is one flattened sliding window.
            col = F.unfold(x, self.kernel_size, dilation=1,
                           padding=self.padding, stride=self.stride)
            col_t = col.transpose(2, 1)
            # Split C*k*k into per-channel windows: (N, L*C, k*k).
            col_tr = col_t.reshape(col_t.size(0), col_t.size(1) * size[1], -1)
            # Mark the top-k entries inside each window with a 1.
            _, indices = torch.topk(col_tr, self.topk, dim=-1, largest=True, sorted=False)
            z = torch.zeros_like(col_tr)
            z.scatter_(2, indices, 1)
            z = z.reshape(col_t.size())
            z = z.transpose(2, 1)
            # fold() sums overlapping windows back onto the input grid, so any
            # position selected by at least one window gets a count > 0.
            z = F.fold(z, (size[2], size[3]), self.kernel_size, dilation=1,
                       padding=self.padding, stride=self.stride)
            z = z.gt(0).to(torch.float32)  # binarize the accumulated counts

        return z * x
        
        
if __name__ == "__main__":
    # TopK pooling over non-overlapping 2x2 windows.
    act = TopKPool2d(kernel_size=2, stride=2)
    x = torch.randn(2, 2, 4, 4)
    print(x)
    print(act(x))

    # TopK activation on a 4D feature map (top-k along the channel dimension).
    act = TopKLU()
    x = torch.randn(1, 2, 3, 3)
    print(x)
    print(act(x))

    # Also works on plain (batch, features) tensors.
    x = torch.randn(2, 4)
    print(x)
    print(act(x))
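
For completeness, here is a minimal usage sketch (SmallNet and its sizes are my own illustration, not from the original post) showing TopKLU as a drop-in replacement for nn.ReLU in a small classifier; per the observations above, training behavior should be roughly comparable to ReLU, with the hidden-layer sparsity fixed by active_ratio.

class SmallNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10, active_ratio=0.25):
        super(SmallNet, self).__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.act = TopKLU(active_ratio)    # drop-in replacement for nn.ReLU()
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        # Keeps the top int(hidden * active_ratio) = 64 hidden units per sample.
        h = self.act(self.fc1(x))
        return self.fc2(h)

net = SmallNet()
logits = net(torch.randn(8, 784))
print(logits.shape)    # torch.Size([8, 10])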

References

  1. Scaling and evaluating sparse autoencoders