opencv-java: Feature Matching with the SURF Algorithm

I spent a long time searching online for OpenCV-based matching. Template matching is far too sensitive to scale and its recognition rate is not great, most of the material out there is written for C/C++, and the OpenCV Java package I had downloaded did not support the SURF algorithm. After wrestling with the OpenCV build for quite a while, I finally produced a jar that includes SURF. Here is the result!
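The snippet below assumes a little setup, roughly like this (a minimal sketch, assuming a contrib-enabled build where `SURF` lives in `org.opencv.xfeatures2d`; the file names are placeholders):

```
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.xfeatures2d.SURF;

// Load the native library from the contrib-enabled build, then read both images as grayscale.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat imgObject = Imgcodecs.imread("object.jpg", Imgcodecs.IMREAD_GRAYSCALE);
Mat imgScene = Imgcodecs.imread("scene.jpg", Imgcodecs.IMREAD_GRAYSCALE);
```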

        //-- Step 1: Detect keypoints with the SURF detector and compute descriptors
        double hessianThreshold = 400;
        int nOctaves = 4;
        int nOctaveLayers = 3;
        boolean extended = false;
        boolean upright = false;
        SURF detector = SURF.create(hessianThreshold, nOctaves, nOctaveLayers, extended, upright);
        MatOfKeyPoint keypointsObject = new MatOfKeyPoint();
        MatOfKeyPoint keypointsScene = new MatOfKeyPoint();
        Mat descriptorsObject = new Mat();
        Mat descriptorsScene = new Mat();
        detector.detectAndCompute(imgObject, new Mat(), keypointsObject, descriptorsObject);
        detector.detectAndCompute(imgScene, new Mat(), keypointsScene, descriptorsScene);
        //-- Step 2: Match the descriptor vectors with a FLANN-based matcher
        // Since SURF is a floating-point descriptor, NORM_L2 is used
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
        List<MatOfDMatch> knnMatches = new ArrayList<>();
        matcher.knnMatch(descriptorsObject, descriptorsScene, knnMatches, 2);
        //-- Filter matches using Lowe's ratio test
        float ratioThresh = 0.75f;
        List<DMatch> listOfGoodMatches = new ArrayList<>();
        for (int i = 0; i < knnMatches.size(); i++) {
            if (knnMatches.get(i).rows() > 1) {
                DMatch[] matches = knnMatches.get(i).toArray();
                if (matches[0].distance < ratioThresh * matches[1].distance) {
                    listOfGoodMatches.add(matches[0]);
                }
            }
        }
        MatOfDMatch goodMatches = new MatOfDMatch();
        goodMatches.fromList(listOfGoodMatches);
        //-- Draw matches
        Mat imgMatches = new Mat();
        Features2d.drawMatches(imgObject, keypointsObject, imgScene, keypointsScene, goodMatches, imgMatches, Scalar.all(-1),
                Scalar.all(-1), new MatOfByte(), Features2d.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS);
        //-- Localize the object
        List<Point> obj = new ArrayList<>();
        List<Point> scene = new ArrayList<>();
        List<KeyPoint> listOfKeypointsObject = keypointsObject.toList();
        List<KeyPoint> listOfKeypointsScene = keypointsScene.toList();
        for (int i = 0; i < listOfGoodMatches.size(); i++) {
            //-- Get the keypoints from the good matches
            obj.add(listOfKeypointsObject.get(listOfGoodMatches.get(i).queryIdx).pt);
            scene.add(listOfKeypointsScene.get(listOfGoodMatches.get(i).trainIdx).pt);
        }
        MatOfPoint2f objMat = new MatOfPoint2f();
        MatOfPoint2f sceneMat = new MatOfPoint2f();
        objMat.fromList(obj);
        sceneMat.fromList(scene);
        double ransacReprojThreshold = 3.0;
        Mat H = Calib3d.findHomography( objMat, sceneMat, Calib3d.RANSAC, ransacReprojThreshold );
        //-- Get the corners of image_1 (the object to be "detected")
        Mat objCorners = new Mat(4, 1, CvType.CV_32FC2), sceneCorners = new Mat();
        float[] objCornersData = new float[(int) (objCorners.total() * objCorners.channels())];
        objCorners.get(0, 0, objCornersData);
        objCornersData[0] = 0;
        objCornersData[1] = 0;
        objCornersData[2] = imgObject.cols();
        objCornersData[3] = 0;
        objCornersData[4] = imgObject.cols();
        objCornersData[5] = imgObject.rows();
        objCornersData[6] = 0;
        objCornersData[7] = imgObject.rows();
        objCorners.put(0, 0, objCornersData);
        Core.perspectiveTransform(objCorners, sceneCorners, H);
        float[] sceneCornersData = new float[(int) (sceneCorners.total() * sceneCorners.channels())];
        sceneCorners.get(0, 0, sceneCornersData);
        //-- Draw lines between the corners (the mapped object in the scene - image_2)
        Imgproc.line(imgMatches, new Point(sceneCornersData[0] + imgObject.cols(), sceneCornersData[1]),
                new Point(sceneCornersData[2] + imgObject.cols(), sceneCornersData[3]), new Scalar(0, 255, 0), 4);
        Imgproc.line(imgMatches, new Point(sceneCornersData[2] + imgObject.cols(), sceneCornersData[3]),
                new Point(sceneCornersData[4] + imgObject.cols(), sceneCornersData[5]), new Scalar(0, 255, 0), 4);
        Imgproc.line(imgMatches, new Point(sceneCornersData[4] + imgObject.cols(), sceneCornersData[5]),
                new Point(sceneCornersData[6] + imgObject.cols(), sceneCornersData[7]), new Scalar(0, 255, 0), 4);
        Imgproc.line(imgMatches, new Point(sceneCornersData[6] + imgObject.cols(), sceneCornersData[7]),
                new Point(sceneCornersData[0] + imgObject.cols(), sceneCornersData[1]), new Scalar(0, 255, 0), 4);
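
To actually look at `imgMatches` (the drawn matches plus the green outline of the located object), something like the following works (a minimal sketch using `HighGui` and `Imgcodecs`; the window title and output path are placeholders):

        // Show the result on screen and also save it to disk.
        HighGui.imshow("Good Matches & Object detection", imgMatches);
        HighGui.waitKey(0);
        Imgcodecs.imwrite("result.jpg", imgMatches);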
        


Using OpenCV for feature extraction in Java follows these steps:

1. Install the OpenCV (and optionally JavaCV) libraries. The latest releases are available here:
   - OpenCV: https://opencv.org/releases/
   - JavaCV: https://github.com/bytedeco/javacv/releases

2. Load the image. Images are read with the `imread` method of the `Imgcodecs` class, for example:

```
Mat image = Imgcodecs.imread("path/to/image.jpg");
```

3. Extract features. OpenCV offers several feature extraction algorithms, such as SIFT, SURF and ORB, each with a Java interface. For example, detecting keypoints and computing descriptors with SIFT looks like this:

```
SIFT sift = SIFT.create();
MatOfKeyPoint keypoints = new MatOfKeyPoint();
Mat descriptors = new Mat();
sift.detectAndCompute(image, new Mat(), keypoints, descriptors);
```

4. Apply a feature matching algorithm. Once the descriptors are computed, OpenCV's matchers can pair them up; for example, matching the descriptors of two images with a FLANN-based matcher:

```
FlannBasedMatcher matcher = FlannBasedMatcher.create();
List<MatOfDMatch> matches = new ArrayList<>();
matcher.knnMatch(descriptors1, descriptors2, matches, 2);
```

5. Display the matching result. After filtering the knn matches (for example with the ratio test shown earlier) into a `MatOfDMatch` called `goodMatches`, the matched points can be drawn and shown:

```
Mat output = new Mat();
Features2d.drawMatches(image1, keypoints1, image2, keypoints2, goodMatches, output);
HighGui.imshow("Matching Result", output);
HighGui.waitKey();
```

These are the main steps for feature extraction with OpenCV in Java. Keep in mind that different feature extraction and matching algorithms are used slightly differently, so the code has to be adapted to the specific case.
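Since SURF is only available in a contrib-enabled build (and SIFT was too in older releases), here is a minimal sketch of the same pipeline with ORB, which ships in the standard Java bindings; because ORB produces binary descriptors, it is matched with Hamming distance via a brute-force matcher instead of FLANN. `img1` and `img2` are assumed to be already-loaded `Mat`s:

```
// ORB works with the stock OpenCV Java bindings (no contrib build needed).
ORB orb = ORB.create();
MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
Mat desc1 = new Mat(), desc2 = new Mat();
orb.detectAndCompute(img1, new Mat(), kp1, desc1);
orb.detectAndCompute(img2, new Mat(), kp2, desc2);

// ORB descriptors are binary, so match with Hamming distance instead of L2/FLANN.
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfDMatch matches = new MatOfDMatch();
matcher.match(desc1, desc2, matches);

Mat output = new Mat();
Features2d.drawMatches(img1, kp1, img2, kp2, matches, output);
```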
