Bilinear Upsampling in PyTorch and F.upsample_bilinear

The paper Fully Convolutional Networks for Semantic Segmentation introduces bilinear upsampling as a way to upsample feature maps, although it ultimately uses deconvolution instead; the stated reason is that the authors did not want the upsampling filter to be fixed.
Earlier upsampling methods were very crude: to upscale by a factor of two, for example, each pixel was simply copied into a 2×2 block of four. This kind of replication makes the result look blocky and blurry.

Linear interpolation

Before introducing bilinear interpolation, let's first look at linear interpolation.
(figure: a line through two known points, with an interpolated point between them)
At its core it is very simple: two points determine a line, and once we know x on that line we can read off y. Likewise, given y we can solve for x.
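The idea above can be sketched in a few lines of Python (the function name `lerp` is my own choice):

```python
# Linear interpolation: given two points (x0, y0) and (x1, y1),
# estimate y at any x on the line through them.
def lerp(x, x0, y0, x1, y1):
    t = (x - x0) / (x1 - x0)   # fractional position of x between x0 and x1
    return y0 + t * (y1 - y0)

print(lerp(1.5, 1.0, 10.0, 2.0, 20.0))  # midpoint of (1,10) and (2,20) -> 15.0
```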

Bilinear interpolation

In images we usually deal with two-dimensional, or even three-dimensional (with channels), data, so when upsampling we need bilinear and trilinear interpolation.
Bilinear interpolation follows the same principle as linear interpolation, and is in fact implemented with three linear interpolations: two along one axis, then one along the other.
(figures: bilinear interpolation, interpolating among the four neighboring pixels)

The effect of bilinear interpolation

Reference blog: 二次线性插值原理+代码详解【python】
Bilinear interpolation is a fairly good image-scaling algorithm: it uses all four real pixels surrounding the virtual point in the source image to jointly determine a single pixel in the target image, so the result is much better than simple nearest-neighbor interpolation. Since bilinear interpolation in images only uses the 4 adjacent pixels, the denominators in the formulas below are all 1.
With the four neighbors Q11 = (x1, y1), Q21 = (x2, y1), Q12 = (x1, y2), Q22 = (x2, y2), and x2 − x1 = y2 − y1 = 1:

f(x, y1) ≈ (x2 − x)·f(Q11) + (x − x1)·f(Q21)
f(x, y2) ≈ (x2 − x)·f(Q12) + (x − x1)·f(Q22)
f(x, y) ≈ (y2 − y)·f(x, y1) + (y − y1)·f(x, y2)
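The three linear interpolations can be sketched directly in Python; here `q11`..`q22` are the four neighboring pixel values and `(x, y)` is the fractional position inside the unit square between them (variable names are my own):

```python
def bilinear(x, y, q11, q21, q12, q22):
    # q11 = f(0,0), q21 = f(1,0), q12 = f(0,1), q22 = f(1,1); the neighbors
    # are one pixel apart, so every denominator (x2-x1, y2-y1) is 1.
    # Two linear interpolations along the x axis ...
    r1 = q11 * (1 - x) + q21 * x   # at y = 0
    r2 = q12 * (1 - x) + q22 * x   # at y = 1
    # ... then one along the y axis: three linear interpolations in total.
    return r1 * (1 - y) + r2 * y

print(bilinear(0.5, 0.5, 0.0, 2.0, 2.0, 4.0))  # center of the square -> 2.0
```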

F.upsample_bilinear

If a PyTorch project calls the F.upsample_bilinear function, the following warning appears (the function is deprecated; nn.functional.interpolate is recommended instead):

UserWarning: nn.functional.upsample_bilinear is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample_bilinear is deprecated. Use nn.functional.interpolate instead.")
PyTorch upsample_bilinear2d with channels-last tensors

In PyTorch, bilinear upsampling is performed with `torch.nn.functional.interpolate`. When working with tensors whose memory format is channels last, some extra considerations apply. The difference between channels-first (NCHW) and channels-last (NHWC) lies only in how the data is stored in memory; the logical shape of the tensor remains NCHW either way. For an operation like `upsample_bilinear2d`, which resizes images or feature maps by bilinear interpolation, efficient support for channels-last storage may require explicit handling depending on the PyTorch version.

To use it with a channels-last input (note that `channels_last` is a memory format applied to an NCHW-shaped tensor, not an NHWC-shaped one):

```python
import torch
import torch.nn.functional as F

# Logical shape stays NCHW; channels_last only changes the memory layout.
input_tensor = torch.randn(8, 3, 32, 32)

# Ensure the input is contiguous in the channels_last format
if not input_tensor.is_contiguous(memory_format=torch.channels_last):
    input_tensor = input_tensor.contiguous(memory_format=torch.channels_last)

output_tensor = F.interpolate(
    input_tensor,
    scale_factor=2,  # example scaling factor
    mode='bilinear',
    align_corners=False,
)
```

When performing this operation, check whether the current PyTorch version has execution paths optimized for channels-last layouts; in some cases converting back from channels-last to channels-first gives better performance, because optimization support is more mature for certain layers and functions.