Image Similarity Matching: A Compendium of Distance Metrics

Overview:

  1. Read the images with PIL.Image and resize them to the same dimensions
  2. Compute distances with scipy.spatial.distance (sklearn.metrics.pairwise_distances also works)
  3. The smaller the distance, the better the match
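The three steps above can be sketched end to end. A minimal sketch: the in-memory solid-color images here are stand-ins for `Image.open('image/1.jpg')` and the like, and the sizes are arbitrary:

```python
import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist

# Step 1: load two images and resize them to the same dimensions
image1 = Image.new('RGB', (60, 90), color=(200, 30, 30))  # stand-in for Image.open(...)
image2 = Image.new('RGB', (42, 64), color=(30, 30, 200))
image2 = image2.resize(image1.size)

# Step 2: flatten to 1-D vectors and compute a distance
x = np.asarray(image1).flatten()
y = np.asarray(image2).flatten()
d = pdist(np.vstack([x, y]), 'euclidean')[0]

# Step 3: the smaller the distance, the better the match
print(d)
```

Every metric in this article follows the same skeleton; only the metric string passed to `pdist` changes.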

1. Test Images

The image sources are listed in the references below.

[Figure] 1.jpg, resolution 604×900

[Figure] 2.jpg, resolution 423×640

[Figure] 3.jpg, resolution 900×750

[Figure] 4.jpg, resolution 404×600




2. Euclidean Distance

$$d=\sqrt{\sum_{i=1}^N \left( x_{i1}-x_{i2} \right)^2}$$

The straight-line distance between two points; the larger it is, the worse the match.

Weighted variant: standardized Euclidean distance, seuclidean.
Squared variant: squared Euclidean distance, sqeuclidean.
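A minimal sketch of the three variants on toy vectors (when no variance vector V is given, pdist estimates the per-coordinate variance for seuclidean from the data itself):

```python
import numpy as np
from scipy.spatial.distance import pdist

x = np.array([1.0, 3.0, 4.0])
y = np.array([2.0, 0.0, 0.0])
X = np.vstack([x, y])

e = pdist(X, 'euclidean')[0]     # sqrt(1 + 9 + 16) = sqrt(26)
sq = pdist(X, 'sqeuclidean')[0]  # 26.0, the square of the above
se = pdist(X, 'seuclidean')[0]   # each squared difference divided by that coordinate's variance
print(e, sq, se)
```

Note that with only two rows the estimated variances are degenerate (every nonzero term becomes equal), so for real use you would pass an explicit `V=` computed over a larger dataset.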

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def euclidean(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'euclidean')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(euclidean(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 40819 | 99266 | 42672 |




3. Manhattan Distance

$$d=\sum_{i=1}^N \left| x_{i1}-x_{i2} \right|$$

Also known as the city-block distance: the sum of the distances along the coordinate axes.

Weighted variant: Canberra distance, canberra, used for comparing ranked lists and for intrusion detection in computer security.
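Both can be sketched on toy vectors; canberra divides each absolute difference by the sum of the corresponding absolute values before summing:

```python
import numpy as np
from scipy.spatial.distance import pdist

x = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 2.0, 1.0])
X = np.vstack([x, y])

cb = pdist(X, 'cityblock')[0]  # |1-3| + |2-2| + |3-1| = 4
ca = pdist(X, 'canberra')[0]   # 2/4 + 0/4 + 2/4 = 1
print(cb, ca)
```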

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def manhattan(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'cityblock')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(manhattan(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 41122193 | 97631252 | 39064477 |

Canberra distance:

| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 497302 | 848611 | 354084 |




4. Chebyshev Distance

$$d=\max_{1\le i\le N} \left| x_{i1}-x_{i2} \right|$$

The maximum of the absolute coordinate differences; for 8-bit images it lies in the range 0-255.

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def chebyshev(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'chebyshev')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(chebyshev(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 218 | 255 | 204 |




5. Cosine Distance

$$d=1-\frac{\sum_{i=1}^N x_{i1}x_{i2}}{\sqrt{\sum_{i=1}^N x_{i1}^2}\sqrt{\sum_{i=1}^N x_{i2}^2}}$$

One minus the cosine similarity: vectors are compared by direction alone, so the overall magnitude of the pixel values does not matter.

Very slow to compute.

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def cosine(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'cosine')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(cosine(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 0.0715 | 0.4332 | 0.0782 |




6. Pearson Correlation Coefficient

$$d=\frac{\sum_{i=1}^N \left( x_{i1}-\bar{x}_1 \right)\left( x_{i2}-\bar{x}_2 \right)}{\sqrt{\sum_{i=1}^N \left( x_{i1}-\bar{x}_1 \right)^2}\sqrt{\sum_{i=1}^N \left( x_{i2}-\bar{x}_2 \right)^2}}$$

Similar to cosine similarity, with the added advantage of translation invariance; the larger the coefficient, the stronger the correlation.
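Translation invariance in one toy example: shifting a vector by a constant leaves the Pearson correlation at 1, while the cosine similarity drops below 1:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = x + 100.0  # a shifted copy of x

corr = np.corrcoef(x, y)[0][1]
cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(corr)  # 1.0: unaffected by the shift
print(cos)   # < 1: cosine similarity is changed by the shift
```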

import numpy as np
from PIL import Image


def pearson(image1, image2):
    X = np.vstack([image1, image2])
    return np.corrcoef(X)[0][1]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(pearson(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 1 | 0.8777 | 0.0850 | 0.7413 |

Pearson distance = 1 - Pearson correlation coefficient

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def pearson_distance(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'correlation')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(pearson_distance(image1, image2))




7. Hamming Distance

$$d=\sum_{i=1}^N \begin{cases} 0, & x_{i1}=x_{i2} \\ 1, & x_{i1}\ne x_{i2} \end{cases}$$

Compares the two vectors position by position; every differing position adds 1 to the distance.

Commonly used in information coding.

import numpy as np
from PIL import Image


def hamming(image1, image2):
    # normalized: the fraction of positions where the two images differ
    return np.count_nonzero(image1 != image2) / image1.size


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1)
image2 = np.asarray(image2)

print(hamming(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 0.9865 | 0.9933 | 0.9853 |




8. Jaccard Distance

$$d=\frac{\left| A\,\triangle\, B \right|}{\left| A\cup B \right|}$$

The proportion of differing elements among all elements of the two sets; the Jaccard similarity is 1 - d.
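On binary vectors, pdist's 'jaccard' is the fraction of positions that disagree among the positions where either vector is nonzero; a toy sketch:

```python
import numpy as np
from scipy.spatial.distance import pdist

u = np.array([1, 0, 1, 1])
v = np.array([1, 1, 0, 1])

d = pdist(np.vstack([u, v]), 'jaccard')[0]
print(d)      # 0.5: they disagree at 2 of the 4 positions where either is nonzero
print(1 - d)  # 0.5: the corresponding Jaccard similarity
```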

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def jaccard(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'jaccard')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(jaccard(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 0.9865 | 0.9936 | 0.9853 |




9. Bray-Curtis Distance

A measure from ecology, used to quantify differences in species composition between sample sites.

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def braycurtis(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'braycurtis')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(braycurtis(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 0.2008 | 0.4877 | 0.1746 |




10. Mahalanobis Distance

Also called the covariance distance; it accounts for correlations between features.

It is computed over all pairs of points, so the cost is prohibitive for whole images.

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def mahalanobis(image1, image2):
    X = np.vstack([image1, image2])
    XT = X.T
    return pdist(XT, 'mahalanobis')


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

# Full images are far too slow (pairwise over every pixel position), so demo on small random vectors
x = np.random.random(10)
y = np.random.random(10)
print(mahalanobis(x, y))

# print(mahalanobis(image1, image2))  # prohibitively slow




11. Jensen-Shannon (JS) Divergence

Measures the similarity between two probability distributions; used in bioinformatics and genome comparison, quantitative history, and machine learning.
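pdist first normalizes each vector to sum to 1, so raw pixel vectors are treated as probability distributions. For two disjoint distributions the distance reaches its maximum, sqrt(ln 2):

```python
import numpy as np
from scipy.spatial.distance import pdist

p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])

d = pdist(np.vstack([p, q]), 'jensenshannon')[0]
print(d)  # sqrt(ln 2) ≈ 0.8326, the maximum possible JS distance (natural log)
```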

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def jensenshannon(image1, image2):
    X = np.vstack([image1, image2])
    return pdist(X, 'jensenshannon')[0]


image1 = Image.open('image/1.jpg')
image2 = Image.open('image/2.jpg')
image2 = image2.resize(image1.size)
image1 = np.asarray(image1).flatten()
image2 = np.asarray(image2).flatten()

print(jensenshannon(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 0.2008 | 0.4877 | 0.1746 |




12. The image-match Library

image-match

image-match documentation

Similar to a pHash library, but with a database backend; it scales easily to billions of images and supports sustained high-speed image insertion.

Matching is based on the Goldberg image signature (a grid of grey-level differences, reimplemented in Section 13 below); a normalized distance under 0.40 very likely indicates a match.

Installation:

pip install image_match

Internally, normalized_distance is just the norm of the signature difference divided by the sum of the two signature norms:

norm_diff = np.linalg.norm(b - a)
norm1 = np.linalg.norm(b)
norm2 = np.linalg.norm(a)
return norm_diff / (norm1 + norm2)
from image_match.goldberg import ImageSignature


def signature(path):
    return ImageSignature().generate_signature(path)


def distance(image1, image2):
    return ImageSignature.normalized_distance(image1, image2)


image1 = signature('image/1.jpg')
image2 = signature('image/2.jpg')

print(distance(image1, image2))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 0 | 0.2360 | 0.6831 | 0.4296 |

With a filter applied:

[Figure: filtered version of a test image]

The computed distance is 0.2027, which counts as a match.




13. Matching Without Installing the Library

The matching code below is adapted from the library's source.

import numpy as np
from skimage.io import imread

def read(image):
    # Step 1:    Load image as array of grey-levels
    im_array = imread(image, as_gray=True)  # `as_grey=True` in older scikit-image versions

    # Step 2a:   Determine cropping boundaries
    rw = np.cumsum(np.sum(np.abs(np.diff(im_array, axis=1)), axis=1))
    cw = np.cumsum(np.sum(np.abs(np.diff(im_array, axis=0)), axis=0))
    upper_column_limit = np.searchsorted(cw, np.percentile(cw, 95), side='left')
    lower_column_limit = np.searchsorted(cw, np.percentile(cw, 5), side='right')
    upper_row_limit = np.searchsorted(rw, np.percentile(rw, 95), side='left')
    lower_row_limit = np.searchsorted(rw, np.percentile(rw, 5), side='right')
    if lower_row_limit > upper_row_limit:
        lower_row_limit = int(5 / 100. * im_array.shape[0])
        upper_row_limit = int(95 / 100. * im_array.shape[0])
    if lower_column_limit > upper_column_limit:
        lower_column_limit = int(5 / 100. * im_array.shape[1])
        upper_column_limit = int(95 / 100. * im_array.shape[1])
    image_limits = [(lower_row_limit, upper_row_limit), (lower_column_limit, upper_column_limit)]

    # Step 2b:   Generate grid centers
    x_coords = np.linspace(image_limits[0][0], image_limits[0][1], 11, dtype=int)[1:-1]
    y_coords = np.linspace(image_limits[1][0], image_limits[1][1], 11, dtype=int)[1:-1]

    # Step 3:    Compute grey level mean of each P x P square centered at each grid point
    P = max([2.0, int(0.5 + min(im_array.shape) / 20.)])
    avg_grey = np.zeros((x_coords.shape[0], y_coords.shape[0]))
    for i, x in enumerate(x_coords):
        lower_x_lim = int(max([x - P / 2, 0]))
        upper_x_lim = int(min([lower_x_lim + P, im_array.shape[0]]))
        for j, y in enumerate(y_coords):
            lower_y_lim = int(max([y - P / 2, 0]))
            upper_y_lim = int(min([lower_y_lim + P, im_array.shape[1]]))
            avg_grey[i, j] = np.mean(im_array[lower_x_lim:upper_x_lim,lower_y_lim:upper_y_lim])

    # Step 4a:   Compute array of differences for each grid point vis-a-vis each neighbor
    right_neighbors = -np.concatenate((np.diff(avg_grey), np.zeros(avg_grey.shape[0]).reshape((avg_grey.shape[0], 1))),axis=1)
    left_neighbors = -np.concatenate((right_neighbors[:, -1:], right_neighbors[:, :-1]), axis=1)
    down_neighbors = -np.concatenate((np.diff(avg_grey, axis=0),np.zeros(avg_grey.shape[1]).reshape((1, avg_grey.shape[1]))))
    up_neighbors = -np.concatenate((down_neighbors[-1:], down_neighbors[:-1]))
    diagonals = np.arange(-avg_grey.shape[0] + 1, avg_grey.shape[0])
    upper_left_neighbors = sum([np.diagflat(np.insert(np.diff(np.diag(avg_grey, i)), 0, 0), i) for i in diagonals])
    lower_right_neighbors = -np.pad(upper_left_neighbors[1:, 1:], (0, 1), mode='constant')
    flipped = np.fliplr(avg_grey)
    upper_right_neighbors = sum([np.diagflat(np.insert(np.diff(np.diag(flipped, i)), 0, 0), i) for i in diagonals])
    lower_left_neighbors = -np.pad(upper_right_neighbors[1:, 1:], (0, 1), mode='constant')
    diff_mat = np.dstack(np.array([upper_left_neighbors, up_neighbors, np.fliplr(upper_right_neighbors), left_neighbors, right_neighbors,np.fliplr(lower_left_neighbors), down_neighbors, lower_right_neighbors]))

    # Step 4b: Bin differences to only 2n+1 values
    mask = np.abs(diff_mat) < 2 / 255.
    diff_mat[mask] = 0.
    positive_cutoffs = np.percentile(diff_mat[diff_mat > 0.], np.linspace(0, 100, 3))
    negative_cutoffs = np.percentile(diff_mat[diff_mat < 0.], np.linspace(100, 0, 3))
    for level, interval in enumerate([positive_cutoffs[i:i + 2] for i in range(positive_cutoffs.shape[0] - 1)]):
        diff_mat[(diff_mat >= interval[0]) & (diff_mat <= interval[1])] = level + 1
    for level, interval in enumerate([negative_cutoffs[i:i + 2] for i in range(negative_cutoffs.shape[0] - 1)]):
        diff_mat[(diff_mat <= interval[0]) & (diff_mat >= interval[1])] = -(level + 1)

    # Step 5: Flatten array and return signature
    return np.ravel(diff_mat).astype('int8')


def distance(image1, image2):
    norm_diff = np.linalg.norm(image1 - image2)
    norm1 = np.linalg.norm(image1)
    norm2 = np.linalg.norm(image2)
    return norm_diff / (norm1 + norm2)


if __name__ == '__main__':
    image1 = read('image/1.jpg')
    image2 = read('image/2.jpg')
    print(distance(image1, image2))

The results are the same as in Section 12.




14. Feature Matching with a Pretrained Keras Model

The pretrained model here is VGG16; the larger the dot product of the normalized features, the better the match.

import numpy as np
from numpy import linalg as LA
from keras.preprocessing import image
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input


class VGGNet:
    def __init__(self):
        self.input_shape = (224, 224, 3)
        self.model = VGG16(weights='imagenet', pooling='max', include_top=False,
                           input_shape=(self.input_shape[0], self.input_shape[1], self.input_shape[2]))

    def extract_feat(self, img_path):
        '''Extract image features

        :param img_path: path to the image
        :return: L2-normalized feature vector
        '''
        img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = preprocess_input(img)
        feat = self.model.predict(img)
        norm_feat = feat[0] / LA.norm(feat[0])
        return norm_feat


if __name__ == '__main__':
    model = VGGNet()
    image1 = model.extract_feat('image/1.jpg')
    image2 = model.extract_feat('image/2.jpg')
    print(np.dot(image1, image2.T))
| Image | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 1 | 0.8714762 | 0.60663277 | 0.67468536 |




Scripts

1. Combined comparison

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def euclidean(image1, image2):
    '''Euclidean distance'''
    X = np.vstack([image1, image2])
    return pdist(X, 'euclidean')[0]


def manhattan(image1, image2):
    '''Manhattan distance'''
    X = np.vstack([image1, image2])
    return pdist(X, 'cityblock')[0]


def chebyshev(image1, image2):
    '''Chebyshev distance'''
    X = np.vstack([image1, image2])
    return pdist(X, 'chebyshev')[0]


def cosine(image1, image2):
    '''Cosine distance'''
    X = np.vstack([image1, image2])
    return pdist(X, 'cosine')[0]


def pearson(image1, image2):
    '''Pearson correlation coefficient'''
    X = np.vstack([image1, image2])
    return np.corrcoef(X)[0][1]


def hamming(image1, image2):
    '''Hamming distance (normalized: fraction of differing positions)'''
    return np.count_nonzero(image1 != image2) / image1.size


def jaccard(image1, image2):
    '''Jaccard distance'''
    X = np.vstack([image1, image2])
    return pdist(X, 'jaccard')[0]


def braycurtis(image1, image2):
    '''Bray-Curtis distance'''
    X = np.vstack([image1, image2])
    return pdist(X, 'braycurtis')[0]


def mahalanobis(image1, image2):
    '''Mahalanobis distance'''
    X = np.vstack([image1, image2])
    XT = X.T
    return pdist(XT, 'mahalanobis')


def jensenshannon(image1, image2):
    '''Jensen-Shannon divergence'''
    X = np.vstack([image1, image2])
    return pdist(X, 'jensenshannon')[0]


def image_match(image1, image2):
    '''image-match library'''
    try:
        from image_match.goldberg import ImageSignature
    except ImportError:
        return -1
    image1 = ImageSignature().generate_signature(image1)
    image2 = ImageSignature().generate_signature(image2)
    return ImageSignature.normalized_distance(image1, image2)


def vgg_match(image1, image2):
    '''VGG16 feature matching'''
    try:
        from numpy import linalg as LA
        from keras.preprocessing import image
        from keras.applications.vgg16 import VGG16
        from keras.applications.vgg16 import preprocess_input
    except ImportError:
        return -1

    input_shape = (224, 224, 3)
    model = VGG16(weights='imagenet', pooling='max', include_top=False, input_shape=input_shape)

    def extract_feat(img_path):
        '''Extract an L2-normalized feature vector'''
        img = image.load_img(img_path, target_size=(input_shape[0], input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = preprocess_input(img)
        feat = model.predict(img)
        norm_feat = feat[0] / LA.norm(feat[0])
        return norm_feat

    image1 = extract_feat(image1)
    image2 = extract_feat(image2)
    return np.dot(image1, image2.T)


if __name__ == '__main__':
    # Initialization
    image1_name = 'image/1.jpg'
    image2_name = 'image/2.jpg'
    image3_name = 'image/3.jpg'

    # Image preprocessing
    image1 = Image.open(image1_name).convert('L')  # convert to grayscale; drop this to keep color information
    image2 = Image.open(image2_name).convert('L')
    image3 = Image.open(image3_name).convert('L')
    image2 = image2.resize(image1.size)
    image3 = image3.resize(image1.size)
    image1 = np.asarray(image1).flatten()
    image2 = np.asarray(image2).flatten()
    image3 = np.asarray(image3).flatten()

    # Similarity matching
    print('Euclidean distance', euclidean(image1, image2), euclidean(image1, image3))
    print('Manhattan distance', manhattan(image1, image2), manhattan(image1, image3))
    print('Chebyshev distance', chebyshev(image1, image2), chebyshev(image1, image3))
    print('Cosine distance', cosine(image1, image2), cosine(image1, image3))
    print('Pearson correlation', pearson(image1, image2), pearson(image1, image3))
    print('Hamming distance', hamming(image1, image2), hamming(image1, image3))
    print('Jaccard distance', jaccard(image1, image2), jaccard(image1, image3))
    print('Bray-Curtis distance', braycurtis(image1, image2), braycurtis(image1, image3))
    # print('Mahalanobis distance', mahalanobis(image1, image2), mahalanobis(image1, image3))
    print('Jensen-Shannon divergence', jensenshannon(image1, image2), jensenshannon(image1, image3))

    print('image-match library', image_match(image1_name, image2_name), image_match(image1_name, image3_name))
    print('VGG16 feature matching', vgg_match(image1_name, image2_name), vgg_match(image1_name, image3_name))

2. Euclidean similarity

import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def euclidean(image1, image2, size):
    '''Euclidean similarity in [0, 1]; 1 means identical'''
    black = Image.new('RGB', size, color=(0, 0, 0))
    white = Image.new('RGB', size, color=(255, 255, 255))
    white = np.asarray(white).flatten()
    black = np.asarray(black).flatten()
    X = np.vstack([white, black])
    _max = pdist(X, 'euclidean')[0]  # maximum possible Euclidean distance for this image size
    X = np.vstack([image1, image2])
    return (_max - pdist(X, 'euclidean')[0]) / _max
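A usage sketch with in-memory images: `euclidean_similarity` below is the same function under a clash-free name, and the sizes and colors are arbitrary stand-ins for real files:

```python
import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist


def euclidean_similarity(image1, image2, size):
    '''Euclidean similarity in [0, 1]; 1 means identical images.'''
    black = np.asarray(Image.new('RGB', size, color=(0, 0, 0))).flatten()
    white = np.asarray(Image.new('RGB', size, color=(255, 255, 255))).flatten()
    _max = pdist(np.vstack([white, black]), 'euclidean')[0]  # largest possible distance
    d = pdist(np.vstack([image1, image2]), 'euclidean')[0]
    return (_max - d) / _max


# in-memory images; replace with Image.open(...) for real files
size = (8, 8)
img_a = np.asarray(Image.new('RGB', size, color=(10, 200, 30))).flatten()
img_b = np.asarray(Image.new('RGB', size, color=(245, 55, 225))).flatten()

print(euclidean_similarity(img_a, img_a, size))  # identical images -> 1.0
print(euclidean_similarity(img_a, img_b, size))  # different images -> closer to 0
```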




Summary

| Task | Recommended distance |
| --- | --- |
| Text similarity | Cosine distance |
| User similarity | Pearson correlation coefficient |




References

  1. Converting image modes with the Python PIL library
  2. NumPy implementations of common distance formulas
  3. EdjoLabs/image-match: Quickly search over billions of images
  4. Computing similarity between images in Python
  5. Similarity measures: Euclidean distance, Hamming distance, cosine similarity
  6. Distance computations (scipy.spatial.distance)
  7. Distance metrics and their Python implementations (part 1)
  8. Distance metrics and their Python implementations (part 2)
  9. A large-scale image retrieval system based on VGG-16 (reverse image search, upgraded)
  10. Image fingerprints from grey-level comparison
  11. sklearn.metrics.pairwise.paired_distances
  12. 9 distance measures in data science
  13. Research on image algorithms for identifying identical products in e-commerce