DL Learning 6 -- 2D Convolution

This code example shows how to implement a 2D convolutional layer from scratch in PyTorch. The convolution is computed via the cross-correlation operation, and the kernel is then learned by gradient descent so as to minimize the error between the predicted output and the target. The loss decreases steadily during training, and the final weights come close to the target kernel.

Implementing a 2D Convolutional Layer
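For reference, corr2d below implements the 2D cross-correlation: for an input X and an h×w kernel K (notation mine, not from the original post),

$$Y_{i,j} = \sum_{a=0}^{h-1} \sum_{b=0}^{w-1} X_{i+a,\,j+b} \, K_{a,b}$$

so the output Y has shape (X.shape[0] - h + 1, X.shape[1] - w + 1).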

import torch
from torch import nn

def corr2d(X, K):
    """Compute the 2D cross-correlation of input X with kernel K."""
    h, w = K.shape
    # The output shrinks by (kernel size - 1) in each dimension
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            # Elementwise product of the current window with K, then sum
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y
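As a quick sanity check (example values mine, not from the original post), corr2d on a small input reproduces the textbook cross-correlation result, which is easy to verify by hand:

X0 = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
K0 = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
print(corr2d(X0, K0))
# tensor([[19., 25.],
#         [37., 43.]])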

class Conv2D(nn.Module):
    """A minimal 2D convolutional layer built on corr2d."""
    def __init__(self, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(kernel_size))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return corr2d(x, self.weight) + self.bias
# A 6x8 "image": white (1) with a black (0) band in the middle
X = torch.ones((6, 8))
X[:, 2:6] = 0
# Kernel that detects vertical edges, and the target output Y
K = torch.tensor([[1.0, -1.0]])
Y = corr2d(X, K)
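Printing Y makes the target concrete: K = [1, -1] responds with 1 where a row of X steps from 1 to 0, with -1 where it steps back, and with 0 everywhere else (all six rows of Y are identical):

print(Y)
# tensor([[ 0.,  1.,  0.,  0.,  0., -1.,  0.],
#         ...
#         [ 0.,  1.,  0.,  0.,  0., -1.,  0.]])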

# Learn the kernel that maps X to Y with PyTorch's built-in layer
conv2d = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)

# nn.Conv2d expects 4D input: (batch size, channels, height, width)
X = X.reshape((1, 1, 6, 8))
Y = Y.reshape((1, 1, 6, 7))
lr = 3e-2  # learning rate

for i in range(10):
    conv2d.zero_grad()
    y_hat = conv2d(X)
    l = (y_hat - Y) ** 2  # elementwise squared error
    l.sum().backward()

    # Gradient descent step on the kernel weights
    conv2d.weight.data[:] -= lr * conv2d.weight.grad
    print(f'epoch = {i}, loss = {l.sum():.3f}')

Output

epoch = 0, loss = 26.152
epoch = 1, loss = 10.717
epoch = 2, loss = 4.393
epoch = 3, loss = 1.802
epoch = 4, loss = 0.739
epoch = 5, loss = 0.304
epoch = 6, loss = 0.125
epoch = 7, loss = 0.052
epoch = 8, loss = 0.021
epoch = 9, loss = 0.009

The learned weights are close to the target kernel K = [1, -1]:

conv2d.weight.data.reshape((1, 2))  # tensor([[ 0.9809, -0.9851]])
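The hand-written Conv2D class defined above can learn the same kernel; here is a minimal sketch (my addition, not in the original post). Because Conv2D calls corr2d directly, it works on plain 2D tensors and needs no reshape to 4D, and its bias is trained alongside the weight:

# Train the custom Conv2D layer on the same edge-detection task
X2 = torch.ones((6, 8))
X2[:, 2:6] = 0
Y2 = corr2d(X2, K)
net = Conv2D(kernel_size=(1, 2))
for i in range(10):
    net.zero_grad()
    l = ((net(X2) - Y2) ** 2).sum()
    l.backward()
    # Plain gradient descent on both parameters
    net.weight.data -= lr * net.weight.grad
    net.bias.data -= lr * net.bias.grad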
