mincs sliding block extraction code

import torch

def img2col_py(Ipad, block_size):
    # Split a 2-D tensor into non-overlapping block_size x block_size blocks
    # and store each flattened block as one column of the output.
    row, col = Ipad.shape
    row_block = row // block_size   # integer division (row/block_size would give a float in Python 3)
    col_block = col // block_size
    block_num = row_block * col_block
    img_col = torch.zeros([block_size ** 2, block_num])
    count = 0
    for x in range(0, row - block_size + 1, block_size):
        for y in range(0, col - block_size + 1, block_size):
            img_col[:, count] = Ipad[x:x + block_size, y:y + block_size].reshape([-1])
            count = count + 1
    return img_col


Ipad = torch.rand(6, 4)   # example input matching the output below
aaaa = img2col_py(Ipad, 2)

Example: the input is [6, 4] and block_size is 2.

output:
Ipad:torch.Size([6, 4])
tensor([[0.1626, 0.0097, 0.6918, 0.7405],
        [0.9155, 0.6619, 0.6529, 0.5635],
        [0.6905, 0.3335, 0.4947, 0.8756],
        [0.4966, 0.2286, 0.1669, 0.5582],
        [0.8993, 0.0986, 0.5191, 0.6391],
        [0.7639, 0.0883, 0.9952, 0.4410]])
aaaa:torch.Size([4, 6])
tensor([[0.1626, 0.6918, 0.6905, 0.4947, 0.8993, 0.5191],
        [0.0097, 0.7405, 0.3335, 0.8756, 0.0986, 0.6391],
        [0.9155, 0.6529, 0.4966, 0.1669, 0.7639, 0.9952],
        [0.6619, 0.5635, 0.2286, 0.5582, 0.0883, 0.4410]])
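
For reference, here is a minimal sketch of the inverse operation, putting the columns back into an image (col2img_py is a hypothetical name, not from the original post; it assumes the same non-overlapping layout):

def col2img_py(img_col, block_size, row, col):
    # Inverse of img2col_py: write each column back as a block_size x block_size block.
    Ipad = torch.zeros([row, col])
    count = 0
    for x in range(0, row - block_size + 1, block_size):
        for y in range(0, col - block_size + 1, block_size):
            Ipad[x:x + block_size, y:y + block_size] = img_col[:, count].reshape([block_size, block_size])
            count = count + 1
    return Ipad

# Round trip: col2img_py(img2col_py(Ipad, 2), 2, 6, 4) should equal Ipad.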


PyTorch's built-in torch.nn.functional.unfold can also do sliding block extraction. With stride 2 (equal to the 2×2 kernel size) the blocks do not overlap; with stride 1 adjacent blocks overlap. (The original screenshots of the two calls and their outputs are not preserved.)
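
A minimal sketch of the two calls, assuming a random 1×1×4×4 input (the exact input tensor is not shown in what survives; the stride-1 result is what the rest of this post works with):

inp = torch.randn(1, 1, 4, 4)   # assumed input shape

# stride 2 (kernel_size == stride): non-overlapping 2x2 blocks -> [1, 4, 4]
patches_s2 = torch.nn.functional.unfold(inp, (2, 2), stride=2)

# stride 1 (the default): overlapping 2x2 blocks, 3*3 = 9 positions -> [1, 4, 9]
patches = torch.nn.functional.unfold(inp, (2, 2))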

patches_0 = patches.transpose(1, 2)
output:
patches_0:torch.Size([1, 9, 4])
tensor([[[ 1.0127,  0.7128, -0.3260, -0.1183],
         [ 0.7128,  0.4439, -0.1183, -1.0749],
         [ 0.4439,  1.2473, -1.0749, -0.7125],
         [-0.3260, -0.1183,  1.1466, -1.0475],
         [-0.1183, -1.0749, -1.0475,  0.1535],
         [-1.0749, -0.7125,  0.1535,  0.9636],
         [ 1.1466, -1.0475, -0.4800,  1.3628],
         [-1.0475,  0.1535,  1.3628, -0.0427],
         [ 0.1535,  0.9636, -0.0427, -0.7177]]])

In other words, patches_0 = torch.nn.functional.unfold(inp, (2, 2)).transpose(1, 2).
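
As a sanity check (a sketch, not from the original post): when unfold uses stride equal to the block size, it reproduces exactly the columns built by img2col_py above, since both flatten each block in row-major order and walk the block positions row by row.

Ipad = torch.rand(6, 4)
a = img2col_py(Ipad, 2)                                      # [4, 6]
b = torch.nn.functional.unfold(Ipad.unsqueeze(0).unsqueeze(0),
                               (2, 2), stride=2).squeeze(0)  # [4, 6]
print(torch.equal(a, b))  # True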

w = torch.randn(1, 1, 2, 2)
output:
w: torch.Size([1, 1, 2, 2])
tensor([[[[ 0.5352, -1.6392],
          [-0.3645, -1.4205]]]])

w0 = w.view(w.size(0), -1)
w0:torch.Size([1, 4])
tensor([[ 0.5352, -1.6392, -0.3645, -1.4205]])
w1 = w.view(w.size(0), -1).t()
w1:torch.Size([4, 1])
tensor([[ 0.5352],
        [-1.6392],
        [-0.3645],
        [-1.4205]])
shape_patches_0 = patches_0.shape
shape_patches_0:torch.Size([1, 9, 4])
w2 = w.view(w.size(0), -1).t().repeat(1, shape_patches_0[1])
w2:torch.Size([4, 9])
tensor([[ 0.5352,  0.5352,  0.5352,  0.5352,  0.5352,  0.5352,  0.5352,  0.5352, 0.5352],
        [-1.6392, -1.6392, -1.6392, -1.6392, -1.6392, -1.6392, -1.6392, -1.6392,-1.6392],
        [-0.3645, -0.3645, -0.3645, -0.3645, -0.3645, -0.3645, -0.3645, -0.3645,-0.3645],
        [-1.4205, -1.4205, -1.4205, -1.4205, -1.4205, -1.4205, -1.4205, -1.4205,-1.4205]])
out_unf = patches - w.view(w.size(0), -1).t().repeat(1, shape_patches_0[1])

Note that the subtraction uses the untransposed patches (shape [1, 4, 9], shown as qq below), not patches_0, so that it lines up with w2 ([4, 9]):

qq: torch.Size([1, 4, 9])
tensor([[[ 1.0127,  0.7128,  0.4439, -0.3260, -0.1183, -1.0749,  1.1466,-1.0475,  0.1535],
         [ 0.7128,  0.4439,  1.2473, -0.1183, -1.0749, -0.7125, -1.0475, 0.1535,  0.9636],
         [-0.3260, -0.1183, -1.0749,  1.1466, -1.0475,  0.1535, -0.4800,1.3628, -0.0427],
         [-0.1183, -1.0749, -0.7125, -1.0475,  0.1535,  0.9636,  1.3628,-0.0427, -0.7177]]])

ww:torch.Size([4, 9])
tensor([[ 0.5352,  0.5352,  0.5352,  0.5352,  0.5352,  0.5352,  0.5352,  0.5352, 0.5352],
        [-1.6392, -1.6392, -1.6392, -1.6392, -1.6392, -1.6392, -1.6392, -1.6392,-1.6392],
        [-0.3645, -0.3645, -0.3645, -0.3645, -0.3645, -0.3645, -0.3645, -0.3645, -0.3645],
        [-1.4205, -1.4205, -1.4205, -1.4205, -1.4205, -1.4205, -1.4205, -1.4205, -1.4205]])
        
ee = qq - ww : torch.Size([1, 4, 9])
tensor([[[ 0.4775,  0.1776, -0.0913, -0.8612, -0.6535, -1.6101,  0.6114,-1.5827, -0.3817],
         [ 2.3520,  2.0831,  2.8865,  1.5209,  0.5643,  0.9267,  0.5917,1.7927,  2.6028],
         [ 0.0385,  0.2462, -0.7104,  1.5111, -0.6830,  0.5180, -0.1155, 1.7273,  0.3218],
         [ 1.3022,  0.3456,  0.7080,  0.3730,  1.5740,  2.3841,  2.7833,1.3778,  0.7028]]])

The values confirm that the subtraction is indeed element-wise at corresponding positions.
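
Incidentally, the explicit repeat is not strictly necessary; broadcasting gives the same result (a sketch, not in the original post):

# w.view(w.size(0), -1).t() has shape [4, 1]; it broadcasts against
# patches ([1, 4, 9]) across the 9 block positions automatically.
ee_broadcast = patches - w.view(w.size(0), -1).t()
print(torch.allclose(ee_broadcast, out_unf))  # True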

Next, let's see whether this extends to the multi-channel case.

unfold and fold perform opposite operations.

w = torch.randn(77, 1, 8, 8)  # note: torch.tensor([77, 1, 8, 8]) would build a 1-D tensor of four values, not this shape
inp_w = torch.nn.functional.unfold(w, (4, 4), stride=4)  # when kernel_size == stride, the blocks do not overlap
afterinp_w = torch.nn.functional.fold(inp_w, output_size=(8, 8), kernel_size=(4, 4), stride=4)
The result is:
w:torch.Size([77, 1, 8, 8])
inp_w:torch.Size([77, 16, 4])
afterinp_w:torch.Size([77, 1, 8, 8])
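
A quick check (sketch) that the round trip is lossless when the blocks do not overlap:

print(torch.allclose(w, afterinp_w))  # True: fold exactly inverts unfold here
# Caveat: with overlapping blocks (stride < kernel_size), fold sums the
# overlapping values, so fold is a true inverse only in the non-overlapping case.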
