Bilinear Interpolation

1. Linear Interpolation

First, linear interpolation: given two data points (x0, y0) and (x1, y1), we want the value y on the line through them at some position x inside the interval [x0, x1]. Because values along a straight line vary linearly, we only need to compute the slope between (x0, y0) and (x, y) and the slope between (x0, y0) and (x1, y1); setting the two equal gives the equation below.

[Figure: the unknown point (x, y) on the line segment between (x0, y0) and (x1, y1)]

$$\frac{y - y_0}{x - x_0} = \frac{y_1 - y_0}{x_1 - x_0}$$

$$y = \frac{x_1 - x}{x_1 - x_0}\, y_0 + \frac{x - x_0}{x_1 - x_0}\, y_1$$

Solving this for y gives the function value at the point x.
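As a minimal sketch of the formula above (the function name lerp and the sample points are mine, for illustration only):

def lerp(x, x0, y0, x1, y1):
    # Weight each endpoint's y value by its distance to the opposite endpoint,
    # exactly as in the rearranged formula above
    return (x1 - x) / (x1 - x0) * y0 + (x - x0) / (x1 - x0) * y1


print(lerp(1.5, x0=1, y0=10, x1=2, y1=20))  # 15.0, halfway between y0 and y1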

2. Bilinear Interpolation

Bilinear interpolation is simply linear interpolation carried out in two directions, three linear interpolations in total: two along the X direction and one along the Y direction.

[Figure: the target point (x, y) and its four surrounding grid points (x1, y1), (x2, y1), (x1, y2), (x2, y2)]

Interpolating in the X direction, where (x1, y1), (x2, y1), (x1, y2), (x2, y2) are the four known grid points surrounding the target location (x, y):

$$R_1 = \frac{x_2 - x}{x_2 - x_1} f(x_1, y_1) + \frac{x - x_1}{x_2 - x_1} f(x_2, y_1)$$

$$R_2 = \frac{x_2 - x}{x_2 - x_1} f(x_1, y_2) + \frac{x - x_1}{x_2 - x_1} f(x_2, y_2)$$

Interpolating in the Y direction:

$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} R_1 + \frac{y - y_1}{y_2 - y_1} R_2$$

Putting it together:

$$f(x, y) \approx \frac{(x_2 - x)(y_2 - y)\, f(x_1, y_1) + (x - x_1)(y_2 - y)\, f(x_2, y_1) + (x_2 - x)(y - y_1)\, f(x_1, y_2) + (x - x_1)(y - y_1)\, f(x_2, y_2)}{(x_2 - x_1)(y_2 - y_1)}$$

For neighboring pixels x2 - x1 = y2 - y1 = 1, so every denominator above equals 1, which is why the code in section 3 omits them.
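A quick numeric check (my own illustration, with made-up corner values) that the two-step interpolation and the combined formula agree:

# Hypothetical unit cell: corners at (0, 0), (1, 0), (0, 1), (1, 1)
x1, x2, y1, y2 = 0, 1, 0, 1
Q11, Q21, Q12, Q22 = 10, 20, 30, 40   # f(x1, y1), f(x2, y1), f(x1, y2), f(x2, y2)
x, y = 0.25, 0.75                     # point to interpolate

# Two interpolations along X, then one along Y
R1 = (x2 - x) / (x2 - x1) * Q11 + (x - x1) / (x2 - x1) * Q21
R2 = (x2 - x) / (x2 - x1) * Q12 + (x - x1) / (x2 - x1) * Q22
P_two_step = (y2 - y) / (y2 - y1) * R1 + (y - y1) / (y2 - y1) * R2

# Single combined formula
P_combined = (Q11 * (x2 - x) * (y2 - y) + Q21 * (x - x1) * (y2 - y)
              + Q12 * (x2 - x) * (y - y1) + Q22 * (x - x1) * (y - y1)) / ((x2 - x1) * (y2 - y1))

print(P_two_step, P_combined)  # both print 27.5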

Mapping formulas (A is the source image, B is the target image, the two are aligned by their geometric centers, and scale is the scaling factor):

AX = (BX + 0.5) * ( AW / BW) - 0.5
AY = (BY + 0.5) * ( AH / BH) - 0.5

or equivalently, with the scaling factor scale (= BW / AW = BH / AH):

AX = (BX + 0.5) / scale - 0.5
AY = (BY + 0.5) / scale - 0.5

Both the source and the target image place the origin (0, 0) at the top-left corner, and every pixel of the target image is then computed from the interpolation formula. Suppose a 5x5 image needs to be shrunk to 3x3; the correspondence between source and target pixels is shown below. Without center alignment, computing with the basic formula gives the result on the left, while with alignment it gives the result on the right:

[Figure: source/target pixel correspondence for resizing 5x5 to 3x3 — left: without center alignment; right: with center alignment]
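A minimal sketch of the 5x5 to 3x3 mapping described above, comparing the basic formula with the center-aligned one (variable names are mine):

AW, BW = 5, 3  # source and target widths

for BX in range(BW):
    basic = BX * (AW / BW)                    # origins aligned at the top-left corner
    centered = (BX + 0.5) * (AW / BW) - 0.5   # geometric-center alignment
    print(BX, round(basic, 3), round(centered, 3))

# Basic mapping:    0 -> 0.0,   1 -> 1.667, 2 -> 3.333  (shifted toward the top-left)
# Center-aligned:   0 -> 0.333, 1 -> 2.0,   2 -> 3.667  (symmetric about the source center 2.0)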

3. Code Implementation

"""
@author: 绯雨千叶

双线性插值法(使用几何中心对应)
AX=(BX+0.5)*(AW/BW)-0.5
AY=(BY+0.5)*(AH/BH)-0.5
或
AX=(BX+0.5)/scale-0.5(scale是放大倍数)
AY=(BY+0.5)/scale-0.5
A是原图,B是目标图
"""
import numpy as np
import cv2


def bilinear(img, scale):
    AH, AW, channel = img.shape
    BH, BW = int(AH * scale), int(AW * scale)
    dst_img = np.zeros((BH, BW, channel), np.uint8)
    for k in range(channel):
        for dst_x in range(BW):
            for dst_y in range(BH):
                # Map the target pixel (dst_x, dst_y) back to source coordinates
                # using geometric-center alignment
                AX = (dst_x + 0.5) / scale - 0.5
                AY = (dst_y + 0.5) / scale - 0.5
                # Clamp to the valid source range; without this, pixels near the
                # borders can map slightly outside the image
                AX = min(max(AX, 0), AW - 1)
                AY = min(max(AY, 0), AH - 1)
                # The four neighboring source pixels used for the interpolation
                x1 = min(int(np.floor(AX)), AW - 2)
                y1 = min(int(np.floor(AY)), AH - 2)
                x2 = x1 + 1
                y2 = y1 + 1
                # Interpolate twice along X, then once along Y
                R1 = (x2 - AX) * img[y1, x1, k] + (AX - x1) * img[y1, x2, k]
                R2 = (x2 - AX) * img[y2, x1, k] + (AX - x1) * img[y2, x2, k]
                dst_img[dst_y, dst_x, k] = int((y2 - AY) * R1 + (AY - y1) * R2)
    return dst_img


if __name__ == '__main__':
    img = cv2.imread('../img/lrn.jpg')
    dst = bilinear(img, 1.5)  # enlarge by a factor of 1.5
    cv2.imshow('bilinear', dst)
    cv2.imshow('img', img)
    cv2.waitKey()
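As an optional sanity check (my own addition, not part of the original script), the output can be compared with OpenCV's built-in bilinear resize by appending the lines below to the __main__ block; only small per-pixel differences are expected, caused by rounding and border handling:

    # Cross-check against OpenCV's bilinear resize (cv2.INTER_LINEAR)
    ref = cv2.resize(img, (dst.shape[1], dst.shape[0]), interpolation=cv2.INTER_LINEAR)
    print('max per-pixel difference:', cv2.absdiff(dst, ref).max())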

Result: [screenshot comparing the original image with the 1.5x bilinear-interpolated output]
