ASIFT affine-invariant feature matching: Python and C++ implementations


Update:

GitHub - pierrepaleo/sift_pyocl: An implementation of SIFT on GPU with OpenCL

python-siftgpu


Feature points:

Add ASIFT to OpenCV · Issue #4399 · opencv/opencv · GitHub

https://bitbucket.org/xa4a/diploma/src/1046293ccb8c/prog/features/

main | Mariano Rodríguez

https://rdguez-mariano.github.io/pages/hyperdescriptors

research-papers | Mariano Rodríguez

GitHub - rdguez-mariano/fast_imas_IPOL: Fast Image Matching by Affine Simulation. Source code for the IPOL demo.

GitHub - rdguez-mariano/imas_analytics: A compilation of C++ and MATLAB functions that were used to apply, analyse and visualise IMAS methods based on available state-of-the-art SIIM methods.

https://github.com/opencv/opencv/blob/master/samples/python/asift.py

http://www.cmap.polytechnique.fr/~yu/research/ASIFT/demo.html

OpenCV ASift C++ Implementation – Matt Sheckells

Latest Image Processing Techology – Image processing, openCV, SIFT, SURF, logo identification, python

IPOL Journal · ASIFT: An Algorithm for Fully Affine Invariant Comparison

http://www.pudn.com/Download/item/id/3316303.html

Affine invariant feature-based image matching

https://www.cnblogs.com/dwdxdy/p/3580038.html

OpenCV-Python practice: affine feature matching with the ASIFT framework (CSDN blog)

GPU-ASIFT - CODE: A Wide-Baseline Feature Matcher

https://github.com/opencv/opencv/blob/master/samples/python/asift.py

'''
Affine invariant feature-based image matching sample.
This sample is similar to find_obj.py, but uses the affine transformation
space sampling technique, called ASIFT [1]. While the original implementation
is based on SIFT, you can try to use SURF or ORB detectors instead. Homography RANSAC
is used to reject outliers. Threading is used for faster affine sampling.
[1] http://www.ipol.im/pub/algo/my_affine_sift/
USAGE
  asift.py [--feature=<sift|surf|orb|brisk>[-flann]] [ <image1> <image2> ]
  --feature  - Feature to use. Can be sift, surf, orb or brisk. Append '-flann'
               to feature name to use Flann-based matcher instead of bruteforce.
  Press left mouse button on a feature point to see its matching point.
'''

# Python 2/3 compatibility
from __future__ import print_function

import numpy as np
import cv2 as cv

# built-in modules
import itertools as it
from multiprocessing.pool import ThreadPool

# local modules
from common import Timer
from find_obj import init_feature, filter_matches, explore_match


def affine_skew(tilt, phi, img, mask=None):
    '''
    affine_skew(tilt, phi, img, mask=None) -> skew_img, skew_mask, Ai
    Ai - is an affine transform matrix from skew_img to img
    '''
    h, w = img.shape[:2]
    if mask is None:
        mask = np.zeros((h, w), np.uint8)
        mask[:] = 255
    A = np.float32([[1, 0, 0], [0, 1, 0]])
    if phi != 0.0:
        phi = np.deg2rad(phi)
        s, c = np.sin(phi), np.cos(phi)
        A = np.float32([[c,-s], [ s, c]])
        corners = [[0, 0], [w, 0], [w, h], [0, h]]
        tcorners = np.int32( np.dot(corners, A.T) )
        x, y, w, h = cv.boundingRect(tcorners.reshape(1,-1,2))
        A = np.hstack([A, [[-x], [-y]]])
        img = cv.warpAffine(img, A, (w, h), flags=cv.INTER_LINEAR, borderMode=cv.BORDER_REPLICATE)
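    # A tilt t simulates a change of camera latitude: blur along x with
    # sigma = 0.8*sqrt(t^2 - 1) (the anti-aliasing filter from the IPOL ASIFT
    # paper) and then subsample the x axis by a factor 1/t.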
    if tilt != 1.0:
        s = 0.8*np.sqrt(tilt*tilt-1)
        img = cv.GaussianBlur(img, (0, 0), sigmaX=s, sigmaY=0.01)
        img = cv.resize(img, (0, 0), fx=1.0/tilt, fy=1.0, interpolation=cv.INTER_NEAREST)
        A[0] /= tilt
    if phi != 0.0 or tilt != 1.0:
        h, w = img.shape[:2]
        mask = cv.warpAffine(mask, A, (w, h), flags=cv.INTER_NEAREST)
    Ai = cv.invertAffineTransform(A)
    return img, mask, Ai


def affine_detect(detector, img, mask=None, pool=None):
    '''
    affine_detect(detector, img, mask=None, pool=None) -> keypoints, descrs
    Apply a set of affine transformations to the image, detect keypoints and
    reproject them into initial image coordinates.
    See http://www.ipol.im/pub/algo/my_affine_sift/ for the details.
    ThreadPool object may be passed to speedup the computation.
    '''
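    # Sampling grid from the ASIFT paper: tilts t = 2^(k/2) for k = 1..5, and
    # for each tilt, rotations phi every 72/t degrees; (1.0, 0.0) is the
    # untransformed image.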
    params = [(1.0, 0.0)]
    for t in 2**(0.5*np.arange(1,6)):
        for phi in np.arange(0, 180, 72.0 / t):
            params.append((t, phi))

    def f(p):
        t, phi = p
        timg, tmask, Ai = affine_skew(t, phi, img)
        keypoints, descrs = detector.detectAndCompute(timg, tmask)
        for kp in keypoints:
            x, y = kp.pt
            kp.pt = tuple( np.dot(Ai, (x, y, 1)) )
        if descrs is None:
            descrs = []
        return keypoints, descrs

    keypoints, descrs = [], []
    if pool is None:
        ires = map(f, params)  # itertools.imap was removed in Python 3
    else:
        ires = pool.imap(f, params)

    for i, (k, d) in enumerate(ires):
        print('affine sampling: %d / %d\r' % (i+1, len(params)), end='')
        keypoints.extend(k)
        descrs.extend(d)

    print()
    return keypoints, np.array(descrs)



class LensDistortion(object):
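    '''
    Holds pinhole camera intrinsics and distortion coefficients and builds
    pixel maps to undistort (or re-distort) images with cv.remap.
    '''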
    def __init__(self, coeffs={}):
        # SAVED CALIBRATION RESULTS
        self._coeffs = coeffs
        # SAVED CALIBRATION SETTINGS
        self.opts = {}
        self.mapx, self.mapy = None, None
        self.newCameraMatrix = None

    def setCameraParams(self, fx, fy, cx, cy, k1, k2, k3, p1, p2):
        c = self._coeffs['cameraMatrix'] = np.zeros(shape=(3, 3))
        c[0, 0] = fx
        c[1, 1] = fy
        c[0, 2] = cx
        c[1, 2] = cy
        c[2, 2] = 1
        self._coeffs['distortionCoeffs'] = np.array([[k1, k2, p1, p2, k3]])

    def getUndistortRectifyMap(self, imgWidth, imgHeight):

        if self.mapx is not None and self.mapx.shape == (imgHeight, imgWidth):
            return self.mapx, self.mapy

        cam = self._coeffs['cameraMatrix']
        d = self._coeffs['distortionCoeffs']

        (newCameraMatrix, self.roi) = cv.getOptimalNewCameraMatrix(cam,
                                                                    d, (imgWidth,
                                                                        imgHeight), 1,
                                                                    (imgWidth, imgHeight))
        self.newCameraMatrix = newCameraMatrix

        self.mapx, self.mapy = cv.initUndistortRectifyMap(cam,
                                                           d, None, newCameraMatrix,
                                                           (imgWidth, imgHeight), cv.CV_32FC1)
        return self.mapx, self.mapy

    def getDistortRectifyMap(self, sizex, sizey):
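        # Approximate the forward (distorting) map by reflecting each pixel
        # through the undistort map: pos + (pos - undistort_map(pos)).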
        posy, posx = np.mgrid[0:sizey, 0:sizex].astype(np.float32)
        mapx, mapy = self.getUndistortRectifyMap(sizex, sizey)
        dx = posx - mapx
        dy = posy - mapy
        posx += dx
        posy += dy
        return posx, posy

    def distortImage(self, image):
        '''
        opposite of 'correct'
        '''
        (imgHeight, imgWidth) = image.shape[:2]
        mapx, mapy = self.getDistortRectifyMap(imgWidth, imgHeight)
        return cv.remap(image, mapx, mapy, cv.INTER_LINEAR,
                         borderValue=(0, 0, 0))

def showme(dst, name):
    import matplotlib.pyplot as plt
    plt.figure()
    plt.title(name)
    plt.imshow(dst)
    plt.show()



def main():
    #    img1 = cv2.imread("./save_ply/1_IMG_Texture_8Bit.png", -1)
    import sys, getopt
    opts, args = getopt.getopt(sys.argv[1:], '', ['feature='])
    # opts = dict(opts)
    # feature_name = opts.get('--feature', 'brisk-flann')

    feature_name = 'sift'
    try:
        fn1, fn2 = args
    except:
        #fn1 = "./save_ply/1_IMG_Texture_8Bit.png"
        #fn2 = "./save_ply/7_IMG_coeffsTexture_8Bit.png"
        fn1 = "./save_ply/11_IMG_Texture_8Bit.png"
        fn2 = "./save_ply/22_IMG_Texture_8Bit.png"
        #fn1 = "./save_ply/img1.ppm"
        #fn2 = "./save_ply/img6.ppm"

    l = LensDistortion()
    img11 = cv.imread(fn1, cv.IMREAD_GRAYSCALE)
    img22 = cv.imread(fn2, cv.IMREAD_GRAYSCALE)
    showme(img11, "img1")
    showme(img22, "img2")
    l.setCameraParams(2269.16, 2268.4, 1065.54, 799.032, -0.121994, 0.154463, -0.0307676, 0.000367495, -0.000926385)
    #img1 = l.distortImage(img11)
    #img2 = l.distortImage(img22)
    img1 = img11
    img2 = img22
    showme(img1, "distimg1")
    showme(img2, "distimg2")

    x, y = img1.shape[0:2]
    #img1 = cv.resize(img1, (int(y / 2), int(x / 2)))
    #img2 = cv.resize(img2, (int(y / 2), int(x / 2)))
    #img1 = cv.imread(fn1, -1)
    #img2 = cv.imread(fn2, -1)
    #img1 = cv.cvtColor(img1, cv.COLOR_BGR2GRAY)
    #img1 = cv.cvtColor(img1, cv.COLOR_BGR2GRAY)
    # img1 = np.ones_like(img1_)
    # img2 = np.ones_like(img2_)
    # img1 = cv.resize(img1_,(500,800,3),img1)
    # img2 = cv.resize(img1_,(500,800,3),img2)
    detector, matcher = init_feature(feature_name)

    if img1 is None:
        print('Failed to load fn1:', fn1)
        sys.exit(1)

    if img2 is None:
        print('Failed to load fn2:', fn2)
        sys.exit(1)

    if detector is None:
        print('unknown feature:', feature_name)
        sys.exit(1)

    print('using', feature_name)

    pool=ThreadPool(processes = cv.getNumberOfCPUs())
    kp1, desc1 = affine_detect(detector, img1, pool=pool)
    kp2, desc2 = affine_detect(detector, img2, pool=pool)
    print('img1 - %d features, img2 - %d features' % (len(kp1), len(kp2)))

    def match_and_draw(win):
        with Timer('matching'):
            raw_matches = matcher.knnMatch(desc1, trainDescriptors = desc2, k = 2) #2
        p1, p2, kp_pairs = filter_matches(kp1, kp2, raw_matches)
        if len(p1) >= 4:
            H, status = cv.findHomography(p1, p2, cv.RANSAC, 20.0)
            print('%d / %d  inliers/matched' % (np.sum(status), len(status)))
            # do not draw outliers (there will be a lot of them)
            kp_pairs = [kpp for kpp, flag in zip(kp_pairs, status) if flag]
        else:
            H, status = None, None
            print('%d matches found, not enough for homography estimation' % len(p1))

        explore_match(win, img1, img2, kp_pairs, None, H, l.newCameraMatrix)


    match_and_draw('affine find_obj')
    cv.waitKey()
    print('Done')


if __name__ == '__main__':
    print(__doc__)
    main()
    cv.destroyAllWindows()
find_obj.py

'''
Feature-based image matching sample.
Note that you will need the https://github.com/opencv/opencv_contrib repo for SIFT and SURF
USAGE
  find_obj.py [--feature=<sift|surf|orb|akaze|brisk>[-flann]] [ <image1> <image2> ]
  --feature  - Feature to use. Can be sift, surf, orb or brisk. Append '-flann'
               to feature name to use Flann-based matcher instead of bruteforce.
  Press left mouse button on a feature point to see its matching point.
'''

# Python 2/3 compatibility
from __future__ import print_function

import numpy as np
import cv2 as cv

from common import anorm, getsize

FLANN_INDEX_KDTREE = 1  # bug: flann enums are missing
FLANN_INDEX_LSH    = 6


def init_feature(name):
    chunks = name.split('-')
    if chunks[0] == 'sift':
        detector = cv.xfeatures2d.SIFT_create()
        norm = cv.NORM_L2
    elif chunks[0] == 'surf':
        detector = cv.xfeatures2d.SURF_create(800)
        norm = cv.NORM_L2
    elif chunks[0] == 'orb':
        detector = cv.ORB_create(400)
        norm = cv.NORM_HAMMING
    elif chunks[0] == 'akaze':
        detector = cv.AKAZE_create()
        norm = cv.NORM_HAMMING
    elif chunks[0] == 'brisk':
        detector = cv.BRISK_create()
        norm = cv.NORM_HAMMING
    else:
        return None, None
    if 'flann' in chunks:
        if norm == cv.NORM_L2:
            flann_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
        else:
            flann_params= dict(algorithm = FLANN_INDEX_LSH,
                               table_number = 6, # 12
                               key_size = 12,     # 20
                               multi_probe_level = 1) #2
        matcher = cv.FlannBasedMatcher(flann_params, {})  # bug : need to pass empty dict (#1329)
    else:
        matcher = cv.BFMatcher(norm)
    return detector, matcher


def filter_matches(kp1, kp2, matches, ratio = 0.75):
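    # Lowe's ratio test: keep a match only if the best distance is below
    # `ratio` times the second-best distance.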
    mkp1, mkp2 = [], []
    for m in matches:
        if len(m) == 2 and m[0].distance < m[1].distance * ratio:
            m = m[0]
            mkp1.append( kp1[m.queryIdx] )
            mkp2.append( kp2[m.trainIdx] )
    p1 = np.float32([kp.pt for kp in mkp1])
    p2 = np.float32([kp.pt for kp in mkp2])
    kp_pairs = zip(mkp1, mkp2)
    return p1, p2, list(kp_pairs)

def explore_match(win, img1, img2, kp_pairs, status = None, H = None, newCameraMatrix=None):
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    vis = np.zeros((max(h1, h2), w1+w2), np.uint8)
    vis[:h1, :w1] = img1
    vis[:h2, w1:w1+w2] = img2
    vis = cv.cvtColor(vis, cv.COLOR_GRAY2BGR)

    if H is not None:
        corners = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]])
        corners = np.int32( cv.perspectiveTransform(corners.reshape(1, -1, 2), H).reshape(-1, 2) + (w1, 0) )
        cv.polylines(vis, [corners], True, (255, 255, 255))

    if status is None:
        status = np.ones((len(kp_pairs), 1), np.bool_)
    p1, p2 = [], []  # python 2 / python 3 change of zip unpacking
    newp1, newp2 = [], []
    for kpp in kp_pairs:
        p1.append(np.int32(kpp[0].pt))
        newp1.append(np.int32(kpp[0].pt))
        p2.append(np.int32(np.array(kpp[1].pt) + [w1, 0]))
        newp2.append(np.int32(np.array(kpp[1].pt)))


    '''
    (k1, k2, p1, p2[, k3[, k4, k5, k6[, s1, s2, s3, s4[, tx, ty]]]])
    (-0.121994, 0.154463, 0.000367495, -0.000926385[, -0.0307676[, 0, 0, 0[, 0, 0, 0, 0[, 0, 0]]]])
    '''
    #https://www.cnblogs.com/riddick/p/7811877.html
    #https://answers.opencv.org/question/189490/stereo-re-calibration-using-opencvs-findessentialmat-recoverpose/
    #https://answers.opencv.org/question/104134/translation-vector-upto-a-scale-factor-odometry/
    #https://dsp.stackexchange.com/questions/25971/how-are-the-scaling-parameters-included-extracted-from-an-essential-matrix
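    # Note: the translation recovered from the essential matrix is only defined
    # up to scale (see the links above), which is why metric 3D points from the
    # depth maps are used below to estimate the full rigid transform.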
    intrinsics = [[2269.16, 0., 1065.54], [0., 2268.4, 799.032], [0., 0., 1.]]
    K0 = np.array(intrinsics)
    #newCameraMatrix = np.array(newCameraMatrix)
    newCameraMatrix = K0
    E, mask = cv.findEssentialMat(np.array(newp1), np.array(newp2), newCameraMatrix, method=cv.RANSAC, threshold=20)
    _, R, t, _ = cv.recoverPose(E, np.array(newp1), np.array(newp2), newCameraMatrix, mask=mask)
    print(R)
    print(t)


    dep1 = cv.imread("./save_ply/11_IMG_DepthMap.tif", -1)
    dep2 = cv.imread("./save_ply/22_IMG_DepthMap.tif", -1)
    trp1 = []
    trp2 = []
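    # Back-project each matched pixel into 3D camera coordinates with the
    # pinhole model: Z comes from the depth map, X = (u - cx) * Z / fx,
    # Y = (v - cy) * Z / fy; points with (near-)zero depth are flagged and skipped.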
    for key, ((x1, y1), (x2, y2)) in enumerate(zip(newp1, newp2)):
        left_d = dep1[y1][x1]
        left_z = float(left_d) / newCameraMatrix[2][2]
        left_x = (x1 - newCameraMatrix[0][2]) * left_z / newCameraMatrix[0][0]
        left_y = (y1 - newCameraMatrix[1][2]) * left_z / newCameraMatrix[1][1]
        points1 = np.array([left_x, left_y, left_z])
        flag1 = (np.sum(abs(points1))<0.001)
        right_d = dep2[y2][x2]
        right_z = float(right_d) / newCameraMatrix[2][2]
        right_x = (x2 - newCameraMatrix[0][2]) * right_z / newCameraMatrix[0][0]
        right_y = (y2 - newCameraMatrix[1][2]) * right_z / newCameraMatrix[1][1]
        points2 = np.array([right_x, right_y, right_z])
        flag2 = (np.sum(abs(points2)) < 0.001)

        if mask[key] and not flag1 and not flag2:
        #if not flag1 and not flag2:
            trp1.append(points1)
            trp2.append(points2)


    def best_fit_transform(A, B):
        '''
        Calculates the least-squares best-fit transform that maps corresponding points A to B in m spatial dimensions
        Input:
          A: Nxm numpy array of corresponding points
          B: Nxm numpy array of corresponding points
        Returns:
          T: (m+1)x(m+1) homogeneous transformation matrix that maps A on to B
          R: mxm rotation matrix
          t: mx1 translation vector
        '''
        assert A.shape == B.shape

        # get number of dimensions
        m = A.shape[1]

        # translate points to their centroids
        centroid_A = np.mean(A, axis=0)
        centroid_B = np.mean(B, axis=0)
        AA = A - centroid_A
        BB = B - centroid_B

        # rotation matrix
        H = np.dot(AA.T, BB)
        U, S, Vt = np.linalg.svd(H)
        R = np.dot(Vt.T, U.T)

        # special reflection case
        if np.linalg.det(R) < 0:
            Vt[m - 1, :] *= -1
            R = np.dot(Vt.T, U.T)

        # translation
        t = centroid_B.T - np.dot(R, centroid_A.T)

        # homogeneous transformation
        T = np.identity(m + 1)
        T[:m, :m] = R
        T[:m, m] = t
        return T, R, t

    newtrp1 = np.array(trp1)
    newtrp2 = np.array(trp2)

    T, R, t = best_fit_transform(newtrp1, newtrp2)
    print(R.astype(np.float16))
    print(t.astype(np.float16))

    green = (0, 255, 0)
    red = (0, 0, 255)
    kp_color = (51, 103, 236)
    for (x1, y1), (x2, y2), inlier in zip(p1, p2, status):
        if inlier:
            col = green
            cv.circle(vis, (x1, y1), 2, col, -1)
            cv.circle(vis, (x2, y2), 2, col, -1)
        else:
            col = red
            r = 2
            thickness = 3
            cv.line(vis, (x1-r, y1-r), (x1+r, y1+r), col, thickness)
            cv.line(vis, (x1-r, y1+r), (x1+r, y1-r), col, thickness)
            cv.line(vis, (x2-r, y2-r), (x2+r, y2+r), col, thickness)
            cv.line(vis, (x2-r, y2+r), (x2+r, y2-r), col, thickness)
    vis0 = vis.copy()
    for (x1, y1), (x2, y2), inlier in zip(p1, p2, status):
        if inlier:
            cv.line(vis, (x1, y1), (x2, y2), green)

    cv.imshow(win, vis)
    cv.imwrite("./save_ply/22.png", vis)

    def onmouse(event, x, y, flags, param):
        cur_vis = vis
        if flags & cv.EVENT_FLAG_LBUTTON:
            cur_vis = vis0.copy()
            r = 8
            m = (anorm(np.array(p1) - (x, y)) < r) | (anorm(np.array(p2) - (x, y)) < r)
            idxs = np.where(m)[0]

            kp1s, kp2s = [], []
            for i in idxs:
                (x1, y1), (x2, y2) = p1[i], p2[i]
                col = (red, green)[status[i][0]]
                cv.line(cur_vis, (x1, y1), (x2, y2), col)
                kp1, kp2 = kp_pairs[i]
                kp1s.append(kp1)
                kp2s.append(kp2)
            cur_vis = cv.drawKeypoints(cur_vis, kp1s, None, flags=4, color=kp_color)
            cur_vis[:,w1:] = cv.drawKeypoints(cur_vis[:,w1:], kp2s, None, flags=4, color=kp_color)

        cv.imshow(win, cur_vis)
    cv.setMouseCallback(win, onmouse)
    import matplotlib.pyplot as plt
    def showimg(dst, name):
        plt.figure()
        plt.title(name)
        plt.imshow(dst)
        plt.show()
    showimg(vis, "result")
    return vis


def main():
    import sys, getopt
    opts, args = getopt.getopt(sys.argv[1:], '', ['feature='])
    opts = dict(opts)
    feature_name = opts.get('--feature', 'brisk')
    try:
        fn1, fn2 = args
    except:
        fn1 = 'box.png'
        fn2 = 'box_in_scene.png'

    img1 = cv.imread(cv.samples.findFile(fn1), cv.IMREAD_GRAYSCALE)
    img2 = cv.imread(cv.samples.findFile(fn2), cv.IMREAD_GRAYSCALE)
    detector, matcher = init_feature(feature_name)

    if img1 is None:
        print('Failed to load fn1:', fn1)
        sys.exit(1)

    if img2 is None:
        print('Failed to load fn2:', fn2)
        sys.exit(1)

    if detector is None:
        print('unknown feature:', feature_name)
        sys.exit(1)

    print('using', feature_name)

    kp1, desc1 = detector.detectAndCompute(img1, None)
    kp2, desc2 = detector.detectAndCompute(img2, None)
    print('img1 - %d features, img2 - %d features' % (len(kp1), len(kp2)))

    def match_and_draw(win):
        print('matching...')
        raw_matches = matcher.knnMatch(desc1, trainDescriptors = desc2, k = 2) #2
        p1, p2, kp_pairs = filter_matches(kp1, kp2, raw_matches)
        if len(p1) >= 4:
            H, status = cv.findHomography(p1, p2, cv.RANSAC, 5.0)
            print('%d / %d  inliers/matched' % (np.sum(status), len(status)))
        else:
            H, status = None, None
            print('%d matches found, not enough for homography estimation' % len(p1))

        _vis = explore_match(win, img1, img2, kp_pairs, status, H)

    match_and_draw('find_obj')
    cv.waitKey()

    print('Done')


if __name__ == '__main__':
    print(__doc__)
    main()
    cv.destroyAllWindows()

Matching multiple image pairs:

common.py

'''
This module contains some common routines used by other samples.
'''

# Python 2/3 compatibility
from __future__ import print_function
import sys
PY3 = sys.version_info[0] == 3

if PY3:
    from functools import reduce

import numpy as np
import cv2 as cv

# built-in modules
import os
import itertools as it
from contextlib import contextmanager

image_extensions = ['.bmp', '.jpg', '.jpeg', '.png', '.tif', '.tiff', '.pbm', '.pgm', '.ppm']

class Bunch(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)
    def __str__(self):
        return str(self.__dict__)

def splitfn(fn):
    path, fn = os.path.split(fn)
    name, ext = os.path.splitext(fn)
    return path, name, ext

def anorm2(a):
    return (a*a).sum(-1)
def anorm(a):
    return np.sqrt( anorm2(a) )

def homotrans(H, x, y):
    xs = H[0, 0]*x + H[0, 1]*y + H[0, 2]
    ys = H[1, 0]*x + H[1, 1]*y + H[1, 2]
    s  = H[2, 0]*x + H[2, 1]*y + H[2, 2]
    return xs/s, ys/s

def to_rect(a):
    a = np.ravel(a)
    if len(a) == 2:
        a = (0, 0, a[0], a[1])
    return np.array(a, np.float64).reshape(2, 2)

def rect2rect_mtx(src, dst):
    src, dst = to_rect(src), to_rect(dst)
    cx, cy = (dst[1] - dst[0]) / (src[1] - src[0])
    tx, ty = dst[0] - src[0] * (cx, cy)
    M = np.float64([[ cx,  0, tx],
                    [  0, cy, ty],
                    [  0,  0,  1]])
    return M


def lookat(eye, target, up = (0, 0, 1)):
    fwd = np.asarray(target, np.float64) - eye
    fwd /= anorm(fwd)
    right = np.cross(fwd, up)
    right /= anorm(right)
    down = np.cross(fwd, right)
    R = np.float64([right, down, fwd])
    tvec = -np.dot(R, eye)
    return R, tvec

def mtx2rvec(R):
    w, u, vt = cv.SVDecomp(R - np.eye(3))
    p = vt[0] + u[:,0]*w[0]    # same as np.dot(R, vt[0])
    c = np.dot(vt[0], p)
    s = np.dot(vt[1], p)
    axis = np.cross(vt[0], vt[1])
    return axis * np.arctan2(s, c)

def draw_str(dst, target, s):
    x, y = target
    cv.putText(dst, s, (x+1, y+1), cv.FONT_HERSHEY_PLAIN, 1.0, (0, 0, 0), thickness = 2, lineType=cv.LINE_AA)
    cv.putText(dst, s, (x, y), cv.FONT_HERSHEY_PLAIN, 1.0, (255, 255, 255), lineType=cv.LINE_AA)

class Sketcher:
    def __init__(self, windowname, dests, colors_func):
        self.prev_pt = None
        self.windowname = windowname
        self.dests = dests
        self.colors_func = colors_func
        self.dirty = False
        self.show()
        cv.setMouseCallback(self.windowname, self.on_mouse)

    def show(self):
        cv.imshow(self.windowname, self.dests[0])

    def on_mouse(self, event, x, y, flags, param):
        pt = (x, y)
        if event == cv.EVENT_LBUTTONDOWN:
            self.prev_pt = pt
        elif event == cv.EVENT_LBUTTONUP:
            self.prev_pt = None

        if self.prev_pt and flags & cv.EVENT_FLAG_LBUTTON:
            for dst, color in zip(self.dests, self.colors_func()):
                cv.line(dst, self.prev_pt, pt, color, 5)
            self.dirty = True
            self.prev_pt = pt
            self.show()


# palette data from matplotlib/_cm.py
_jet_data =   {'red':   ((0., 0, 0), (0.35, 0, 0), (0.66, 1, 1), (0.89,1, 1),
                         (1, 0.5, 0.5)),
               'green': ((0., 0, 0), (0.125,0, 0), (0.375,1, 1), (0.64,1, 1),
                         (0.91,0,0), (1, 0, 0)),
               'blue':  ((0., 0.5, 0.5), (0.11, 1, 1), (0.34, 1, 1), (0.65,0, 0),
                         (1, 0, 0))}

cmap_data = { 'jet' : _jet_data }

def make_cmap(name, n=256):
    data = cmap_data[name]
    xs = np.linspace(0.0, 1.0, n)
    channels = []
    eps = 1e-6
    for ch_name in ['blue', 'green', 'red']:
        ch_data = data[ch_name]
        xp, yp = [], []
        for x, y1, y2 in ch_data:
            xp += [x, x+eps]
            yp += [y1, y2]
        ch = np.interp(xs, xp, yp)
        channels.append(ch)
    return np.uint8(np.array(channels).T*255)

def nothing(*arg, **kw):
    pass

def clock():
    return cv.getTickCount() / cv.getTickFrequency()

@contextmanager
def Timer(msg):
    print(msg, '...',)
    start = clock()
    try:
        yield
    finally:
        print("%.2f ms" % ((clock()-start)*1000))

class StatValue:
    def __init__(self, smooth_coef = 0.5):
        self.value = None
        self.smooth_coef = smooth_coef
    def update(self, v):
        if self.value is None:
            self.value = v
        else:
            c = self.smooth_coef
            self.value = c * self.value + (1.0-c) * v

class RectSelector:
    def __init__(self, win, callback):
        self.win = win
        self.callback = callback
        cv.setMouseCallback(win, self.onmouse)
        self.drag_start = None
        self.drag_rect = None
    def onmouse(self, event, x, y, flags, param):
        x, y = np.int16([x, y]) # BUG
        if event == cv.EVENT_LBUTTONDOWN:
            self.drag_start = (x, y)
            return
        if self.drag_start:
            if flags & cv.EVENT_FLAG_LBUTTON:
                xo, yo = self.drag_start
                x0, y0 = np.minimum([xo, yo], [x, y])
                x1, y1 = np.maximum([xo, yo], [x, y])
                self.drag_rect = None
                if x1-x0 > 0 and y1-y0 > 0:
                    self.drag_rect = (x0, y0, x1, y1)
            else:
                rect = self.drag_rect
                self.drag_start = None
                self.drag_rect = None
                if rect:
                    self.callback(rect)
    def draw(self, vis):
        if not self.drag_rect:
            return False
        x0, y0, x1, y1 = self.drag_rect
        cv.rectangle(vis, (x0, y0), (x1, y1), (0, 255, 0), 2)
        return True
    @property
    def dragging(self):
        return self.drag_rect is not None


def grouper(n, iterable, fillvalue=None):
    '''grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx'''
    args = [iter(iterable)] * n
    if PY3:
        output = it.zip_longest(fillvalue=fillvalue, *args)
    else:
        output = it.izip_longest(fillvalue=fillvalue, *args)
    return output

def mosaic(w, imgs):
    '''Make a grid from images.
    w    -- number of grid columns
    imgs -- images (must have same size and format)
    '''
    imgs = iter(imgs)
    if PY3:
        img0 = next(imgs)
    else:
        img0 = imgs.next()
    pad = np.zeros_like(img0)
    imgs = it.chain([img0], imgs)
    rows = grouper(w, imgs, pad)
    return np.vstack(map(np.hstack, rows))

def getsize(img):
    h, w = img.shape[:2]
    return w, h

def mdot(*args):
    return reduce(np.dot, args)

def draw_keypoints(vis, keypoints, color = (0, 255, 255)):
    for kp in keypoints:
        x, y = kp.pt
        cv.circle(vis, (int(x), int(y)), 2, color)

find_obj.py

'''
Feature-based image matching sample.
Note that you will need the https://github.com/opencv/opencv_contrib repo for SIFT and SURF
USAGE
  find_obj.py [--feature=<sift|surf|orb|akaze|brisk>[-flann]] [ <image1> <image2> ]
  --feature  - Feature to use. Can be sift, surf, orb or brisk. Append '-flann'
               to feature name to use Flann-based matcher instead of bruteforce.
  Press left mouse button on a feature point to see its matching point.
'''

# Python 2/3 compatibility
from __future__ import print_function

import numpy as np
import cv2 as cv

from common import anorm, getsize

FLANN_INDEX_KDTREE = 1  # bug: flann enums are missing
FLANN_INDEX_LSH    = 6


def init_feature(name):
    chunks = name.split('-')
    if chunks[0] == 'sift':
        detector = cv.xfeatures2d.SIFT_create()
        norm = cv.NORM_L2
    elif chunks[0] == 'surf':
        detector = cv.xfeatures2d.SURF_create(800)
        norm = cv.NORM_L2
    elif chunks[0] == 'orb':
        detector = cv.ORB_create(400)
        norm = cv.NORM_HAMMING
    elif chunks[0] == 'akaze':
        detector = cv.AKAZE_create()
        norm = cv.NORM_HAMMING
    elif chunks[0] == 'brisk':
        detector = cv.BRISK_create()
        norm = cv.NORM_HAMMING
    else:
        return None, None
    if 'flann' in chunks:
        if norm == cv.NORM_L2:
            flann_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
        else:
            flann_params= dict(algorithm = FLANN_INDEX_LSH,
                               table_number = 6, # 12
                               key_size = 12,     # 20
                               multi_probe_level = 1) #2
        matcher = cv.FlannBasedMatcher(flann_params, {})  # bug : need to pass empty dict (#1329)
    else:
        matcher = cv.BFMatcher(norm)
    return detector, matcher


def filter_matches(kp1, kp2, matches, ratio = 0.75):
    mkp1, mkp2 = [], []
    for m in matches:
        if len(m) == 2 and m[0].distance < m[1].distance * ratio:
            m = m[0]
            mkp1.append( kp1[m.queryIdx] )
            mkp2.append( kp2[m.trainIdx] )
    p1 = np.float32([kp.pt for kp in mkp1])
    p2 = np.float32([kp.pt for kp in mkp2])
    kp_pairs = zip(mkp1, mkp2)
    return p1, p2, list(kp_pairs)

def best_fit_transform(A, B):
        '''
        Calculates the least-squares best-fit transform that maps corresponding points A to B in m spatial dimensions
        Input:
          A: Nxm numpy array of corresponding points
          B: Nxm numpy array of corresponding points
        Returns:
          T: (m+1)x(m+1) homogeneous transformation matrix that maps A on to B
          R: mxm rotation matrix
          t: mx1 translation vector
        '''
        assert A.shape == B.shape

        # get number of dimensions
        m = A.shape[1]

        # translate points to their centroids
        centroid_A = np.mean(A, axis=0)
        centroid_B = np.mean(B, axis=0)
        AA = A - centroid_A
        BB = B - centroid_B

        # rotation matrix
        H = np.dot(AA.T, BB)
        U, S, Vt = np.linalg.svd(H)
        R = np.dot(Vt.T, U.T)

        # special reflection case
        if np.linalg.det(R) < 0:
            Vt[m - 1, :] *= -1
            R = np.dot(Vt.T, U.T)

        # translation
        t = centroid_B.T - np.dot(R, centroid_A.T)

        # homogeneous transformation
        T = np.identity(m + 1)
        T[:m, :m] = R
        T[:m, m] = t
        return T, R, t

def explore_match(img1, img2, dep1, dep2, name1, name2, kp_pairs, status = None, H = None):
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    vis = np.zeros((max(h1, h2), w1+w2), np.uint8)
    vis[:h1, :w1] = img1
    vis[:h2, w1:w1+w2] = img2
    vis = cv.cvtColor(vis, cv.COLOR_GRAY2BGR)

    if H is not None:
        corners = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]])
        corners = np.int32( cv.perspectiveTransform(corners.reshape(1, -1, 2), H).reshape(-1, 2) + (w1, 0) )
        cv.polylines(vis, [corners], True, (255, 255, 255))

    if status is None:
        status = np.ones(len(kp_pairs), np.bool_)
    p1, p2 = [], []  # python 2 / python 3 change of zip unpacking
    newp1, newp2 = [], []
    for kpp in kp_pairs:
        p1.append(np.int32(kpp[0].pt))
        newp1.append(np.int32(kpp[0].pt))
        p2.append(np.int32(np.array(kpp[1].pt) + [w1, 0]))
        newp2.append(np.int32(np.array(kpp[1].pt)))

    intrinsics = [[2269.16, 0., 1065.54], [0., 2268.4, 799.032], [0., 0., 1.]]
    K0 = np.array(intrinsics)
    newCameraMatrix = K0

    E, mask = cv.findEssentialMat(np.array(newp1), np.array(newp2), newCameraMatrix, method=cv.RANSAC, threshold=20)

    trp1 = []
    trp2 = []
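    # Back-project each matched pixel into 3D camera coordinates with the
    # pinhole model, as in the single-pair version above.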
    for key, ((x1, y1), (x2, y2)) in enumerate(zip(newp1, newp2)):
        left_d = dep1[y1][x1]
        left_z = float(left_d) / newCameraMatrix[2][2]
        left_x = (x1 - newCameraMatrix[0][2]) * left_z / newCameraMatrix[0][0]
        left_y = (y1 - newCameraMatrix[1][2]) * left_z / newCameraMatrix[1][1]
        points1 = np.array([left_x, left_y, left_z])
        flag1 = (np.sum(abs(points1))<0.001)
        right_d = dep2[y2][x2]
        right_z = float(right_d) / newCameraMatrix[2][2]
        right_x = (x2 - newCameraMatrix[0][2]) * right_z / newCameraMatrix[0][0]
        right_y = (y2 - newCameraMatrix[1][2]) * right_z / newCameraMatrix[1][1]
        points2 = np.array([right_x, right_y, right_z])
        flag2 = (np.sum(abs(points2)) < 0.001)

        if mask[key] and not flag1 and not flag2:
            trp1.append(points1)
            trp2.append(points2)

    newtrp1 = np.array(trp1)
    newtrp2 = np.array(trp2)
    if len(newtrp1) < 20:
        return

    T, R, t = best_fit_transform(newtrp1, newtrp2)

    writeR = R.reshape((9,))
    writet = t.reshape((3,))
    writeRT = np.concatenate((writeR,writet),axis=0)
    with open("./RT_txt/"+name1+"_split_"+name2+".txt", 'w') as file_write_obj:
        for var in writeRT:
            file_write_obj.write(str(var))
            file_write_obj.write('\n')
    green = (0, 255, 0)
    red = (0, 0, 255)
    kp_color = (51, 103, 236)
    for (x1, y1), (x2, y2), inlier in zip(p1, p2, status):
        if inlier:
            col = green
            cv.circle(vis, (x1, y1), 2, col, -1)
            cv.circle(vis, (x2, y2), 2, col, -1)
        else:
            col = red
            r = 2
            thickness = 3
            cv.line(vis, (x1-r, y1-r), (x1+r, y1+r), col, thickness)
            cv.line(vis, (x1-r, y1+r), (x1+r, y1-r), col, thickness)
            cv.line(vis, (x2-r, y2-r), (x2+r, y2+r), col, thickness)
            cv.line(vis, (x2-r, y2+r), (x2+r, y2-r), col, thickness)
    for (x1, y1), (x2, y2), inlier in zip(p1, p2, status):
        if inlier:
            cv.line(vis, (x1, y1), (x2, y2), green)
    cv.imwrite("./pairs_save_image/"+name1+"_split_"+name2+".jpg", vis)

pairs_list.txt 

1,2,3
4,5,6,7,8,9,10,11,12,13,14,15,16
17,18,19,20,21,22,23,24,25,26,27
28,29,30,31,32,33
34,35,36,37,38,39,40
41,42,43,44,45,46,47,48,49,50,51,52,53,54
55,56,57,58,59,60
61,62,63,64,65
66,67,68,69,70,71,74,73,75,76,77,78,79,80,81,84,85,86,82,83
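
Each line of pairs_list.txt groups the scan indices of one sequence, and the batch script matches every pair of consecutive indices within a line. A minimal sketch of that pairing logic, assuming the file naming used by the batch asift.py below:

with open("pairs_list.txt") as f:
    for line in f:
        ids = line.strip().split(',')
        for a, b in zip(ids, ids[1:]):
            # e.g. 4_IMG_Texture_8Bit.png is matched against 5_IMG_Texture_8Bit.png
            print(a + "_IMG_Texture_8Bit.png", "<->", b + "_IMG_Texture_8Bit.png")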

asift.py

'''
Affine invariant feature-based image matching sample.
This sample is similar to find_obj.py, but uses the affine transformation
space sampling technique, called ASIFT [1]. While the original implementation
is based on SIFT, you can try to use SURF or ORB detectors instead. Homography RANSAC
is used to reject outliers. Threading is used for faster affine sampling.
[1] http://www.ipol.im/pub/algo/my_affine_sift/
USAGE
  asift.py [--feature=<sift|surf|orb|brisk>[-flann]] [ <image1> <image2> ]
  --feature  - Feature to use. Can be sift, surf, orb or brisk. Append '-flann'
               to feature name to use Flann-based matcher instead of bruteforce.
  Press left mouse button on a feature point to see its matching point.
'''

# Python 2/3 compatibility
from __future__ import print_function

import numpy as np
import cv2 as cv
import os

# built-in modules
import itertools as it
from multiprocessing.pool import ThreadPool

# local modules
from common import Timer
from find_obj import init_feature, filter_matches, explore_match


def affine_skew(tilt, phi, img, mask=None):
    '''
    affine_skew(tilt, phi, img, mask=None) -> skew_img, skew_mask, Ai
    Ai - is an affine transform matrix from skew_img to img
    '''
    h, w = img.shape[:2]
    if mask is None:
        mask = np.zeros((h, w), np.uint8)
        mask[:] = 255
    A = np.float32([[1, 0, 0], [0, 1, 0]])
    if phi != 0.0:
        phi = np.deg2rad(phi)
        s, c = np.sin(phi), np.cos(phi)
        A = np.float32([[c,-s], [ s, c]])
        corners = [[0, 0], [w, 0], [w, h], [0, h]]
        tcorners = np.int32( np.dot(corners, A.T) )
        x, y, w, h = cv.boundingRect(tcorners.reshape(1,-1,2))
        A = np.hstack([A, [[-x], [-y]]])
        img = cv.warpAffine(img, A, (w, h), flags=cv.INTER_LINEAR, borderMode=cv.BORDER_REPLICATE)
    if tilt != 1.0:
        s = 0.8*np.sqrt(tilt*tilt-1)
        img = cv.GaussianBlur(img, (0, 0), sigmaX=s, sigmaY=0.01)
        img = cv.resize(img, (0, 0), fx=1.0/tilt, fy=1.0, interpolation=cv.INTER_NEAREST)
        A[0] /= tilt
    if phi != 0.0 or tilt != 1.0:
        h, w = img.shape[:2]
        mask = cv.warpAffine(mask, A, (w, h), flags=cv.INTER_NEAREST)
    Ai = cv.invertAffineTransform(A)
    return img, mask, Ai


def affine_detect(detector, img, mask=None, pool=None):
    '''
    affine_detect(detector, img, mask=None, pool=None) -> keypoints, descrs
    Apply a set of affine transformations to the image, detect keypoints and
    reproject them into initial image coordinates.
    See http://www.ipol.im/pub/algo/my_affine_sift/ for the details.
    ThreadPool object may be passed to speedup the computation.
    '''
    params = [(1.0, 0.0)]
    for t in 2**(0.5*np.arange(1,6)):
        for phi in np.arange(0, 180, 72.0 / t):
            params.append((t, phi))

    def f(p):
        t, phi = p
        timg, tmask, Ai = affine_skew(t, phi, img)
        keypoints, descrs = detector.detectAndCompute(timg, tmask)
        for kp in keypoints:
            x, y = kp.pt
            kp.pt = tuple( np.dot(Ai, (x, y, 1)) )
        if descrs is None:
            descrs = []
        return keypoints, descrs

    keypoints, descrs = [], []
    if pool is None:
        ires = map(f, params)  # itertools.imap was removed in Python 3
    else:
        ires = pool.imap(f, params)

    for i, (k, d) in enumerate(ires):
        print('affine sampling: %d / %d\r' % (i+1, len(params)), end='')
        keypoints.extend(k)
        descrs.extend(d)

    print()
    return keypoints, np.array(descrs)

def main():

    feature_name = 'sift'
    mypath = "./11-14-tif"
    if not os.path.exists("./pairs_save_image"):
        os.makedirs("./pairs_save_image")
    if not os.path.exists("./RT_txt"):
        os.makedirs("./RT_txt")

    detector, matcher = init_feature(feature_name)
    print('using', feature_name)

    print(cv.getNumberOfCPUs())
    pool=ThreadPool(processes = cv.getNumberOfCPUs())

    with open("pairs_list.txt") as files:
        for line in files:
            line_t = line.replace('\n','').split(',')
            for key1 in range(len(line_t)-1):
                name = str(line_t[key1]) + "_IMG_Texture_8Bit_split_" + str(line_t[key1+1]) + "_IMG_Texture_8Bit.jpg"
                img1 = cv.imread(os.path.join(mypath,str(line_t[key1])+"_IMG_Texture_8Bit.png"), cv.IMREAD_GRAYSCALE)
                dep1 = cv.imread(os.path.join(mypath,str(line_t[key1])+"_IMG_DepthMap.tif"), -1)

                img2 = cv.imread(os.path.join(mypath,str(line_t[key1+1])+"_IMG_Texture_8Bit.png"), cv.IMREAD_GRAYSCALE)
                dep2 = cv.imread(os.path.join(mypath,str(line_t[key1+1])+"_IMG_DepthMap.tif"), -1)

                kp1, desc1 = affine_detect(detector, img1, pool=pool)
                kp2, desc2 = affine_detect(detector, img2, pool=pool)
                name1 = str(line_t[key1])+"_IMG_Texture_8Bit"
                name2 = str(line_t[key1+1])+"_IMG_Texture_8Bit"
                print('%s - %d features, %s - %d features' % (name1, len(kp1), name2, len(kp2)))

                raw_matches = matcher.knnMatch(desc1, trainDescriptors = desc2, k = 2)
                p1, p2, kp_pairs = filter_matches(kp1, kp2, raw_matches)

                if len(p1) >= 100:
                    H, status = cv.findHomography(p1, p2, cv.RANSAC, 20.0)
                    print('%d / %d  inliers/matched' % (np.sum(status), len(status)))
                    # do not draw outliers (there will be a lot of them)
                    kp_pairs = [kpp for kpp, flag in zip(kp_pairs, status) if flag]
                else:
                    print('%d matches found, not enough for homography estimation' % len(p1))
                    continue
                explore_match(img1, img2, dep1, dep2, name1, name2, kp_pairs, None, H)
        print('Done')


if __name__ == '__main__':
    print(__doc__)
    main()

C++ implementation:

ASiftDetector.cpp ASiftDetector.h main.cpp CMakeLists.txt

ASiftDetector.cpp 

//
// Created by spple on 19-12-6.
//

#include "ASiftDetector.h"

#include <iostream>

#include <opencv2/xfeatures2d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

ASiftDetector::ASiftDetector()
{
    detector = cv::xfeatures2d::SiftFeatureDetector::create();
    num = 0;

}

void ASiftDetector::detectAndCompute(const Mat& img, std::vector< KeyPoint >& keypoints, Mat& descriptors)
{

//    params = [(1.0, 0.0)]
//    for t in 2**(0.5*np.arange(1,6)):
//    for phi in np.arange(0, 180, 72.0 / t):
//    params.append((t, phi))

//    // affine parameters
//    vector<vector<double>> params;
//    vector<double> temp;
//    temp.push_back(1.0);
//    temp.push_back(0.0);
//    params.push_back(temp);
//
//    for(int tl = 1; tl < 6; tl++) {
//        double t = pow(2, 0.5 * tl);
//        for (double phi = 0; phi < 180; phi += 72.0/t) {
//            temp.clear();
//            temp.push_back(t);
//            temp.push_back(phi);
//            params.push_back(temp);
//        }
//    }


    keypoints.clear();
    descriptors = Mat(0, 128, CV_32F);
    int flag = 0;
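    // (tilt, phi) sampling grid: tilts t = 2^(0.5*tl) for tl = 0..5, rotations
    // phi every 72/t degrees (cf. affine_detect in the Python version).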
    for(int tl = 0; tl < 6; tl++)
    {
        double t = pow(2, 0.5*tl);
        for(int phi = 0; phi < 180; phi += 72.0/t)
        {
            std::vector<KeyPoint> kps;
            Mat desc;

            Mat timg, mask, Ai;
            img.copyTo(timg);

            affineSkew(t, phi, timg, mask, Ai);

//            debug:
//            flag += 1;
//            Mat img_disp;
//            cv::bitwise_and(timg, timg, img_disp, mask);
//            std::stringstream ssTemp;
//            ssTemp<<flag;
//            std::string strDst=ssTemp.str();
//            imwrite("/home/spple/CLionProjects/ASIFT/11.14/P10_"+strDst+".jpg", img_disp);

            detector->detectAndCompute(timg, mask, kps, desc);

            for(unsigned int i = 0; i < kps.size(); i++)
            {
                Point3f kpt(kps[i].pt.x, kps[i].pt.y, 1);
                Mat kpt_t = Ai*Mat(kpt);
                kps[i].pt.x = kpt_t.at<float>(0,0);
                kps[i].pt.y = kpt_t.at<float>(1,0);
                // clamp keypoints reprojected outside the image
                if ((kps[i].pt.x < 0) || (kps[i].pt.y < 0 || (kps[i].pt.x > img.cols-1) || (kps[i].pt.y > img.rows-1)))
                {
                    num++;
                    //std::cout << kps[i].pt.x <<":::" << kps[i].pt.y << std::endl;
                    if (kps[i].pt.x < 0)
                        kps[i].pt.x = 0;
                    if (kps[i].pt.y < 0)
                        kps[i].pt.y = 0;
                    if (kps[i].pt.x > img.cols-1)
                        kps[i].pt.x = img.cols-1;
                    if (kps[i].pt.y > img.rows-1)
                        kps[i].pt.y = img.rows-1;
                }
            }
            keypoints.insert(keypoints.end(), kps.begin(), kps.end());
            descriptors.push_back(desc);
        }
    }
    std::cout <<"num:::" << num << std::endl;
}

void ASiftDetector::affineSkew(double tilt, double phi, Mat& img, Mat& mask, Mat& Ai)
{
    int h = img.rows;
    int w = img.cols;

    mask = Mat(h, w, CV_8UC1, Scalar(255));

    Mat A = Mat::eye(2,3, CV_32F);

    if(phi != 0.0)
    {
        phi *= M_PI/180.;
        double s = sin(phi);
        double c = cos(phi);

        A = (Mat_<float>(2,2) << c, -s, s, c);

        Mat corners = (Mat_<float>(4,2) << 0, 0, w, 0, w, h, 0, h);
        Mat tcorners = corners*A.t();
        Mat tcorners_x, tcorners_y;
        tcorners.col(0).copyTo(tcorners_x);
        tcorners.col(1).copyTo(tcorners_y);
        std::vector<Mat> channels;
        channels.push_back(tcorners_x);
        channels.push_back(tcorners_y);
        merge(channels, tcorners);

        Rect rect = boundingRect(tcorners);
        A =  (Mat_<float>(2,3) << c, -s, -rect.x, s, c, -rect.y);

        warpAffine(img, img, A, Size(rect.width, rect.height), INTER_LINEAR, BORDER_REPLICATE);
    }
    if(tilt != 1.0)
    {
        double s = 0.8*sqrt(tilt*tilt-1);
        GaussianBlur(img, img, Size(0,0), s, 0.01);
        resize(img, img, Size(0,0), 1.0/tilt, 1.0, INTER_NEAREST);
        A.row(0) = A.row(0)/tilt;
    }
    if(tilt != 1.0 || phi != 0.0)
    {
        h = img.rows;
        w = img.cols;
        warpAffine(mask, mask, A, Size(w, h), INTER_NEAREST);
    }
    invertAffineTransform(A, Ai);
}

ASiftDetector.h

//
// Created by spple on 19-12-6.
//

#ifndef ASIFT_ASIFTDETECTOR_H
#define ASIFT_ASIFTDETECTOR_H

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/xfeatures2d.hpp>

using namespace cv;

class ASiftDetector
{
public:
    ASiftDetector();

    void detectAndCompute(const Mat& img, std::vector< KeyPoint >& keypoints, Mat& descriptors);

private:
    void affineSkew(double tilt, double phi, Mat& img, Mat& mask, Mat& Ai);

    cv::Ptr<cv::xfeatures2d::SiftFeatureDetector> detector;
    int num;
};


#endif //ASIFT_ASIFTDETECTOR_H

main.cpp

#include <iostream>
#include <iomanip>
#include <vector>
#include <cmath>
#include <cassert>
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/flann/flann.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include "ASiftDetector.h"
using namespace std;
using namespace cv;
#include <time.h>

//    drawMatches(OriginalGrayImage, asiftKeypoints_query, targetGrayImage, asiftKeypoints_object, goodgoodMatches, img_RR_matches,
//            Scalar(0, 255, 0), Scalar(0, 255, 0), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

cv::Mat UpDownDrawInlier(const cv::Mat &queryImage, const cv::Mat &objectImage,
                         const vector<cv::Point2f> &queryCoord, const vector<cv::Point2f> &objectCoord) {
    Size sz = Size(queryImage.size().width,
                   queryImage.size().height + objectImage.size().height);
    Mat matchingImage = Mat::zeros(sz, CV_8UC3);

    // Set the ROIs of matchingImage and copy the source images into them
    Mat roi1 = Mat(matchingImage, Rect(0, 0, queryImage.size().width, queryImage.size().height));
    queryImage.copyTo(roi1);
    Mat roi2 = Mat(matchingImage,
                   Rect(0, queryImage.size().height, objectImage.size().width, objectImage.size().height));
    objectImage.copyTo(roi2);

    // Draw lines connecting the matched points
    for (int i = 0; i < (int) queryCoord.size(); ++i) {
        Point2f pt1 = queryCoord[i];
        Point2f pt2 = objectCoord[i];
        Point2f from = pt1;
        Point2f to = Point(pt2.x, queryImage.size().height + pt2.y);
        line(matchingImage, from, to, Scalar(0, 255, 255));
    }
    return matchingImage;
}


cv::Mat LeftUpRightDownDrawInlier(const cv::Mat &queryImage, const cv::Mat &objectImage,
                                  const vector<cv::Point2f> &queryCoord, const vector<cv::Point2f> &objectCoord)
{
    Size sz = Size(queryImage.size().width + objectImage.size().width,
                   queryImage.size().height + objectImage.size().height);
    Mat matchingImage = Mat::zeros(sz, CV_8UC3);

    // Set the ROIs of matchingImage and copy the source images into them
    Mat roi1 = Mat(matchingImage, Rect(0, 0, queryImage.size().width, queryImage.size().height));
    queryImage.copyTo(roi1);
    Mat roi2 = Mat(matchingImage, Rect(queryImage.size().width, queryImage.size().height, objectImage.size().width, objectImage.size().height));
    objectImage.copyTo(roi2);

    // Draw lines connecting the matched points
    for (int i = 0; i < (int)queryCoord.size(); ++i) {
        Point2f pt1 = queryCoord[i];
        Point2f pt2 = objectCoord[i];
        Point2f from = pt1;
        Point2f to = Point(pt2.x + queryImage.size().width, pt2.y + queryImage.size().height);
        line(matchingImage, from, to, Scalar(0, 255, 255));
    }
    return matchingImage;
}

/*
 * Calculates the least-squares best-fit transform that maps corresponding points A to B in m spatial dimensions
 * Input:
 *  A: Nxm numpy array of corresponding points
 *  B: Nxm numpy array of corresponding points
 * Returns:
 *  T: (m+1)x(m+1) homogeneous transformation matrix that maps A on to B
 *  R: mxm rotation matrix
 *  t: mx1 translation vector
 *  //https://blog.csdn.net/kewei9/article/details/74157236
 */
void best_fit_transform(std::vector<cv::Point3f> A, std::vector<cv::Point3f> B, cv::Mat & T, cv::Mat & R, cv::Mat & t)
{
    if (A.size()!=B.size())
    {
        std::cout <<"error:::" << "A.size()!=B.size()" << std::endl;
    }
    int pointsNum = A.size();

    //# translate points to their centroids
    double srcSumX = 0.0f;
    double srcSumY = 0.0f;
    double srcSumZ = 0.0f;

    double dstSumX = 0.0f;
    double dstSumY = 0.0f;
    double dstSumZ = 0.0f;

    for (int i = 0; i < pointsNum; ++ i)
    {
        srcSumX += A[i].x;
        srcSumY += A[i].y;
        srcSumZ += A[i].z;

        dstSumX += B[i].x;
        dstSumY += B[i].y;
        dstSumZ += B[i].z;
    }

    cv::Point3f centerSrc, centerDst;

    centerSrc.x = float(srcSumX / pointsNum);
    centerSrc.y = float(srcSumY / pointsNum);
    centerSrc.z = float(srcSumZ / pointsNum);

    centerDst.x = float(dstSumX / pointsNum);
    centerDst.y = float(dstSumY / pointsNum);
    centerDst.z = float(dstSumZ / pointsNum);

    int m  = 3;
    cv::Mat srcMat(m, pointsNum, CV_32FC1);
    cv::Mat dstMat(m, pointsNum, CV_32FC1);

    float* srcDat = (float*)(srcMat.data);
    float* dstDat = (float*)(dstMat.data);
    for (int i = 0; i < pointsNum; ++ i)
    {
        srcDat[i] = A[i].x - centerSrc.x;
        srcDat[pointsNum + i] = A[i].y - centerSrc.y;
        srcDat[pointsNum * 2 + i] = A[i].z - centerSrc.z;

        dstDat[i] = B[i].x - centerDst.x;
        dstDat[pointsNum + i] = B[i].y - centerDst.y;
        dstDat[pointsNum * 2 + i] = B[i].z - centerDst.z;
    }

    //# rotation matrix
    cv::Mat matS = srcMat * dstMat.t();

    cv::Mat matU, matW, matV;
    cv::SVDecomp(matS, matW, matU, matV);
    R = matV.t() * matU.t();

    //# special reflection case
    double det = cv::determinant(R);
    if (det < 0)
    {
        //https://blog.csdn.net/xiaowei_cqu/article/details/19839019
        float* data = matV.ptr<float>(m-1);
        for (int i = 0; i < matV.cols; i++) {
            data[i] = -data[i];
        }
        R = matV.t() * matU.t();
    }

    //t = centroid_B.T - np.dot(R, centroid_A.T)
    //# translation
    //Mat(centerDst)=3*1
    t = Mat(centerDst) - (R * Mat(centerSrc));

    //T = np.identity(m + 1)
    //T[:m, :m] = R
    //T[:m, m] = t
    //# homogeneous transformation: T = [R t; 0 1]
    T = cv::Mat::eye(4, 4, CV_32FC1);
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));
    t.copyTo(T(cv::Rect(3, 0, 1, 3)));

}

int main() {
    cv::Mat OriginalGrayImage = cv::imread("/home/spple/CLionProjects/ASIFT/11.14/22_IMG_Texture_8Bit.png", -1);
    cv::Mat targetGrayImage = cv::imread("/home/spple/CLionProjects/ASIFT/11.14/33_IMG_Texture_8Bit.png", -1);


    clock_t begin = clock();

    ASiftDetector asiftDetector;
    vector<KeyPoint> asiftKeypoints_query;
    Mat asiftDescriptors_query;
    asiftDetector.detectAndCompute(OriginalGrayImage, asiftKeypoints_query, asiftDescriptors_query);
    vector<KeyPoint> asiftKeypoints_object;
    Mat asiftDescriptors_object;
    asiftDetector.detectAndCompute(targetGrayImage, asiftKeypoints_object, asiftDescriptors_object);
    //overlap

    clock_t end = clock();

    cout<<"Running time: "<<(double)(end-begin)/CLOCKS_PER_SEC*1000<<"ms"<<endl;

    std::cout <<"Keypoints_query_num:::" << asiftKeypoints_query.size() << std::endl;
    std::cout <<"Keypoints_object_num:::" << asiftKeypoints_object.size() << std::endl;


    // Matching; available DescriptorMatcher types:
    //    -   `BruteForce` (it uses L2 )
    //    -   `BruteForce-L1`
    //    -   `BruteForce-Hamming`
    //    -   `BruteForce-Hamming(2)`
    //    -   `FlannBased`
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
    vector<vector<DMatch> > matches;
    matcher->knnMatch(asiftDescriptors_query, asiftDescriptors_object, matches, 2);

    std::vector<cv::Point2f> queryPoints;
    std::vector<cv::Point2f> trainPoints;
    vector<DMatch> goodMatches;
    //https://blog.csdn.net/linxihe123/article/details/70173476
    vector<pair<KeyPoint,KeyPoint> >kp_pairs_temp;
    for (size_t i = 0; i < matches.size(); i++)
    {
        if ((matches[i].size() == 2) && (matches[i][0].distance < 0.75 * matches[i][1].distance))
        {
            KeyPoint temp1 = asiftKeypoints_query[matches[i][0].queryIdx];
            KeyPoint temp2 = asiftKeypoints_object[matches[i][0].trainIdx];
            goodMatches.push_back(matches[i][0]);
            queryPoints.push_back(temp1.pt);
            trainPoints.push_back(temp2.pt);
            kp_pairs_temp.push_back(make_pair(temp1,temp2));
        }
    }
    Mat img_RR_matches;
    img_RR_matches = LeftUpRightDownDrawInlier(OriginalGrayImage, targetGrayImage, queryPoints, trainPoints);
    imwrite("../11.14/c.jpg",img_RR_matches);

    // Require at least 4 point pairs for homography estimation
    const int minNumberMatchesAllowed = 4;
    if (queryPoints.size() < minNumberMatchesAllowed)
        return -1;

    double reprojectionThreshold=20.0;
    std::vector<unsigned char> inliersMask(goodMatches.size());
    Mat homography = findHomography(queryPoints,
                                    trainPoints,
                                    FM_RANSAC,
                                    reprojectionThreshold,
                                    inliersMask,
                                    2000,
                                    0.995);
//    Mat homography = findHomography(queryPoints,
//                                    trainPoints,
//                                    inliersMask,
//                                    FM_RANSAC);

//    std::vector<unsigned char> RansacStatus(goodMatches.size());

//    Mat Fundamental = findFundamentalMat(queryPoints,
//                                         trainPoints,
//                                         FM_RANSAC,
//                                         20,
//                                         0.99,
//                                         RansacStatus);

//    Mat Fundamental = findFundamentalMat(queryPoints,
//                                         trainPoints,
//                                         RansacStatus,
//                                         FM_RANSAC);

    vector<pair<KeyPoint,KeyPoint> >kp_pairs;
    std::vector<cv::Point2f> newp1;
    std::vector<cv::Point2f> newp2;
    vector<DMatch> goodgoodMatches;
    for (size_t i=0; i<inliersMask.size(); i++)
    {
        if (inliersMask[i])
        {
            goodgoodMatches.push_back(goodMatches[i]);
            kp_pairs.push_back(kp_pairs_temp[i]);
            newp1.push_back(kp_pairs_temp[i].first.pt);
            newp2.push_back(kp_pairs_temp[i].second.pt);
        }

    }

    img_RR_matches = LeftUpRightDownDrawInlier(OriginalGrayImage, targetGrayImage, newp1, newp2);
    imwrite("../11.14/a.jpg",img_RR_matches);

    //https://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findessentialmat
    //https://docs.opencv.org/3.4.8/d9/d0c/group__calib3d.html#ga13f7e34de8fa516a686a56af1196247f
    std::vector<unsigned char> EssentialMask(newp1.size());
    cv::Mat intrinsics = (cv::Mat_<float>(3, 3) << 2269.16, 0, 1065.54, 0, 2268.4, 799.032, 0, 0, 1);
    Mat Essential = findEssentialMat(newp1, newp2, intrinsics, cv::RANSAC, 0.999, 20, EssentialMask);

    cv::Mat OriginalDepthImage = cv::imread("/home/spple/CLionProjects/ASIFT/11.14/22_IMG_DepthMap.tif", -1);
    cv::Mat targetDepthImage = cv::imread("/home/spple/CLionProjects/ASIFT/11.14/33_IMG_DepthMap.tif", -1);

    std::vector<cv::Point3f> trp1;
    std::vector<cv::Point3f> trp2;
    std::vector<cv::Point2f> trp1_temp;
    std::vector<cv::Point2f> trp2_temp;
    for (size_t key=0; key<newp1.size(); key++)
    {
        float x1 = newp1[key].x;
        float y1 = newp1[key].y;
        float x2 = newp2[key].x;
        float y2 = newp2[key].y;
        float d1 = OriginalDepthImage.at<float>(int(y1),int(x1));
        cv::Point3f p1;
        p1.z = float(d1) / intrinsics.ptr<float>(2)[2];
        p1.x = (x1 - intrinsics.ptr<float>(0)[2]) * p1.z / intrinsics.ptr<float>(0)[0];
        p1.y = (y1 - intrinsics.ptr<float>(1)[2]) * p1.z / intrinsics.ptr<float>(1)[1];

        float d2 = targetDepthImage.at<float>(int(y2),int(x2));
        cv::Point3f p2;
        p2.z = float(d2) / intrinsics.ptr<float>(2)[2];
        p2.x = (x2 - intrinsics.ptr<float>(0)[2]) * p2.z / intrinsics.ptr<float>(0)[0];
        p2.y = (y2 - intrinsics.ptr<float>(1)[2]) * p2.z / intrinsics.ptr<float>(1)[1];

        if(EssentialMask[key])
        {
            trp1.push_back(p1);
            trp2.push_back(p2);

            trp1_temp.push_back(newp1[key]);
            trp2_temp.push_back(newp2[key]);
        }
    }

    img_RR_matches = LeftUpRightDownDrawInlier(OriginalGrayImage, targetGrayImage, trp1_temp, trp2_temp);
    imwrite("../11.14/b.jpg",img_RR_matches);
    cout << "ICP start" << endl;
    Mat T, R, t;
    best_fit_transform(trp1, trp2, T, R, t);

    /*
     * Results from the Python version
     * 11-22
    [[ 0.9893    0.04623   0.1395  ]
     [-0.04523   0.999    -0.0105  ]
     [-0.1399    0.004074  0.99    ]]
    [-42.38  92.5  -93.5 ]

     * 11-33
    [[ 0.979   -0.03976 -0.2006 ]
     [ 0.01021  0.9893  -0.1462 ]
     [ 0.2042   0.1411   0.9688 ]]
    [ -2.81   162.   -51.78]
     */

    cout << R << endl;
    cout << t << endl;
    cout << "end code" << endl;
    return 0;
}

CMakeLists.txt

cmake_minimum_required(VERSION 3.12)
project(ASIFT)

set(CMAKE_CXX_STANDARD 14)

set(OpenCV_DIR "/media/spple/新加卷/Dataset/opencv-3.4.8/install/share/OpenCV/")
find_package(OpenCV REQUIRED)

INCLUDE_DIRECTORIES(
        ${OpenCV_INCLUDE_DIRS}
)

add_executable(ASIFT main.cpp ASiftDetector.h ASiftDetector.cpp)
target_link_libraries(ASIFT ${OpenCV_LIBS})
