Transposed convolution reverses the *shape* transformation of a convolution: applied to a convolution's output (with matching kernel size, stride, and padding), it produces a tensor with the same shape as the original input. It does not recover the original values, so it is not a true inverse.
For the same reason, a transposed convolution in this setting only restores the original input shape; it does not map the output to arbitrary dimensions.
The principle: write the convolution as a matrix multiplication, where each row of the matrix is the kernel zero-padded to the input size and flattened, one row per sliding-window position. Multiplying by that matrix performs the convolution; multiplying by its transpose performs the transposed convolution.
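To make the matrix view concrete, here is a minimal sketch on a hypothetical 3x3 input and 2x2 kernel (values chosen only for illustration): each sliding-window position contributes one row, the kernel zero-padded to the input size and flattened, and the resulting matrix times the flattened input reproduces `F.conv2d`.

```python
import torch
import torch.nn.functional as F

# Hypothetical tiny example: 3x3 input, 2x2 kernel, stride 1, no padding.
input = torch.arange(9., dtype=torch.float32).reshape(3, 3)
kernel = torch.tensor([[1., 0.], [0., -1.]])

# One row per sliding-window position: the kernel zero-padded to the
# input size and flattened. F.pad takes (left, right, top, bottom).
rows = []
for i in range(2):          # 3 - 2 + 1 positions vertically
    for j in range(2):      # and horizontally
        padded = F.pad(kernel, (j, 1 - j, i, 1 - i))
        rows.append(padded.flatten())
K = torch.stack(rows)       # shape (4, 9)

matmul_out = (K @ input.flatten()).reshape(1, 1, 2, 2)
conv_out = F.conv2d(input[None, None], kernel[None, None])
assert torch.allclose(matmul_out, conv_out)
```

With `K` in hand, `K.t() @ y` is exactly what the transposed convolution computes on a flattened output `y`.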
(Figure: convolution)
(Figure: transposed convolution / deconvolution)
```python
import torch
import torch.nn.functional as F

def get_kernel_matrix(kernel, input_size):
    """Build the matrix whose rows are the zero-padded, flattened kernel,
    one row per sliding-window position, so that convolution becomes a
    matrix-vector product."""
    kernel_h, kernel_w = kernel.shape
    input_h, input_w = input_size
    out_h, out_w = input_h - kernel_h + 1, input_w - kernel_w + 1
    result = torch.zeros(out_h * out_w, input_h * input_w)
    cnt = 0
    for i in range(out_h):
        for j in range(out_w):
            # Zero-pad the kernel to the input size. Note F.pad takes
            # (left, right, top, bottom), so the width offset j comes first.
            padded_kernel = F.pad(kernel, (j, input_w - kernel_w - j,
                                           i, input_h - kernel_h - i))
            result[cnt] = padded_kernel.flatten()
            cnt += 1
    return result

kernel = torch.randn(3, 3)
input = torch.rand(7, 7)

kernel_matrix = get_kernel_matrix(kernel, input.shape)   # shape (25, 49)
my_output = kernel_matrix @ input.reshape(-1, 1)         # convolution as matmul
my_transposed_output = kernel_matrix.t() @ my_output     # transposed convolution

output = F.conv2d(input.unsqueeze(0).unsqueeze(0), kernel.unsqueeze(0).unsqueeze(0))
transposed_output = F.conv_transpose2d(output, kernel.unsqueeze(0).unsqueeze(0))

assert torch.allclose(my_output.reshape(1, 1, 5, 5), output)
assert torch.allclose(my_transposed_output.reshape(1, 1, 7, 7), transposed_output)
```
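The title also mentions upsampling: with `stride > 1`, `F.conv_transpose2d` acts as a learnable upsampler (the "fractionally strided convolution" view). A minimal sketch, with arbitrary example sizes:

```python
import torch
import torch.nn.functional as F

# Illustrative sizes only: a 7x7 single-channel input and a 3x3 kernel.
x = torch.randn(1, 1, 7, 7)
kernel = torch.randn(1, 1, 3, 3)

y = F.conv_transpose2d(x, kernel, stride=2)
# Output spatial size (no padding): (in - 1) * stride + kernel
#   = (7 - 1) * 2 + 3 = 15
assert y.shape == (1, 1, 15, 15)
```

This is why transposed convolution is a common building block in decoder networks, where feature maps must grow back toward the input resolution.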
PyTorch: manually implementing transposed convolution / deconvolution (also called fractionally strided convolution; used for upsampling)