A simple conv2d implementation

Let's go straight to the code.

For background, see the companion post: a simple conv1d implementation.

def myconv2d(infeat, convkernel, padding=0, stride=1):
    # Note: padding and stride are kept in the signature, but only the
    # defaults (padding=0, stride=1) are implemented below.
    b, c, h, w = len(infeat), len(infeat[0]), len(infeat[0][0]), len(infeat[0][0][0])
    out_c, in_c, kh, kw = len(convkernel), len(convkernel[0]), len(convkernel[0][0]), len(convkernel[0][0][0])
    # No grouped convolution, so c == in_c
    # Square kernel, so kh == kw
    
    res = [[[[0] * (w-kw+1) for _ in range(h-kh+1)] for _ in range(out_c)] for _ in range(b)]
    # Final output shape: b x out_c x (h-kh+1) x (w-kw+1)
    # print(len(res), len(res[0]), len(res[0][0]), len(res[0][0][0]))
    for i in range(b):
        # Samples in the batch are processed serially here
        
        for j in range(out_c):
            # Compute one output channel at a time
            
            for m in range(c):
                for n in range(h-kh+1):
                    for g in range(w-kw+1):
                        # Compute the value at each output position
                        
                        ans = 0
                        for k1 in range(kh):
                            for k2 in range(kw):
                                ans += infeat[i][m][n+k1][g+k2] * convkernel[j][m][k1][k2]
                        res[i][j][n][g] += ans
    return res
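
The shape comment above only covers the default case. As a minimal sketch of how padding and stride would change the output size, here is a hypothetical helper (conv2d_out_size is not part of the original function) based on the standard output-size formula:

def conv2d_out_size(h, w, kh, kw, padding=0, stride=1):
    # Standard formula: floor((dim + 2*padding - kernel) / stride) + 1
    out_h = (h + 2 * padding - kh) // stride + 1
    out_w = (w + 2 * padding - kw) // stride + 1
    return out_h, out_w

print(conv2d_out_size(5, 5, 3, 3))  # (3, 3), matching the shape produced by myconv2d

With the defaults this reduces to (h - kh + 1, w - kw + 1), which is exactly what the nested loops in myconv2d produce.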


# My convolution
infeat = [[[[1,0,2,2,2], [0,0,0,0,0], [2,0,2,2,2], [1,0,0,0,0], [1,0,0,2,1]],
           [[1,0,1,0,1], [0,0,0,0,1], [0,0,2,0,0], [2,0,0,0,0], [0,0,1,0,0]],
           [[0,2,2,0,0], [0,0,0,0,1], [1,1,2,0,2], [2,0,0,0,0], [0,1,1,0,1]]]]

convkernel = [[[[1,0,0], [-1,0,0], [0,-1,1]],
               [[-1,-1,0], [-1,-1,1], [1,0,0]],
               [[1,1,0], [0,-1,1], [1,1,1]]],
              
              [[[1,-1,0], [-1,0,0], [0,1,-1]],
               [[0,1,-1], [0,0,0], [0,1,1]],
               [[0,-1,0], [1,1,-1], [-1,0,1]]]]
outfeat = myconv2d(infeat, convkernel)
print(outfeat)


# Reference result computed with PyTorch
from torch.nn.functional import conv2d
import torch
import numpy

infeat = torch.tensor(numpy.array(infeat))
convkernel = torch.tensor(numpy.array(convkernel))

outfeat_pytorch = conv2d(infeat, convkernel)
print(outfeat_pytorch)

The output is shown below; it matches the result from the official implementation:

[[[[8, 6, 11], [5, -4, -2], [3, 5, 4]], [[-1, -2, -2], [-4, 3, -3], [2, -4, 1]]]]
tensor([[[[ 8,  6, 11],
          [ 5, -4, -2],
          [ 3,  5,  4]],

         [[-1, -2, -2],
          [-4,  3, -3],
          [ 2, -4,  1]]]], dtype=torch.int32)
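
The match can also be checked programmatically. A small sketch, assuming outfeat and outfeat_pytorch are still in scope from the snippets above:

# Element-wise comparison of the hand-rolled result with PyTorch's output
expected = torch.tensor(numpy.array(outfeat))
print(bool((expected == outfeat_pytorch).all()))  # True if the two results match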

Thoughts

PyTorch computes the result directly as a sliding-window dot product and sum (i.e., cross-correlation). This differs from the traditional definition of convolution (flip, shift, multiply, sum): there is no flipping step.
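To make the difference concrete, here is a small sketch reusing the torch tensors infeat and convkernel defined above: flipping the kernel along both spatial axes before calling conv2d yields the textbook convolution, whereas conv2d on its own is cross-correlation.

# conv2d computes cross-correlation; flipping the kernel along its two
# spatial axes (kh, kw) first turns it into the textbook convolution.
flipped_kernel = torch.flip(convkernel, dims=[2, 3])
true_conv = conv2d(infeat, flipped_kernel)
print(true_conv)  # generally differs from outfeat_pytorch unless the kernel is symmetric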
