cv2.FeatureDetector_create Examples

The following are three code examples showing how to use cv2.FeatureDetector_create, extracted from open-source Python projects.

You may also check out all available functions and classes of the cv2 module, or try the search function.


Example 1

From project opencv, under directory doc, in source file check_docs2.py.

Score: 10
import cv2

def get_cv2_object(name):
    if name.startswith("cv2."):
        name = name[4:]
    if name.startswith("cv."):
        name = name[3:]
    if name == "Algorithm":
        return cv2.Algorithm__create("Feature2D.ORB"), name
    elif name == "FeatureDetector":
        return cv2.FeatureDetector_create("ORB"), name
    elif name == "DescriptorExtractor":
        return cv2.DescriptorExtractor_create("ORB"), name
    elif name == "BackgroundSubtractor":
        return cv2.createBackgroundSubtractorMOG(), name
    elif name == "StatModel":
        return cv2.KNearest(), name
    else:
        try:
            obj = getattr(cv2, name)()
        except AttributeError:
            obj = getattr(cv2, "create" + name)()
        return obj, name
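The final else branch encodes a useful fallback pattern: try the bare class name as a constructor, and if that attribute is missing, try a "create"-prefixed factory instead. The same pattern in isolation, with a SimpleNamespace standing in for the cv2 module (the names Foo and createBar are made up for illustration):

```python
from types import SimpleNamespace

def get_object(namespace, name):
    # Try `name` as a constructor; fall back to a "create"-prefixed factory,
    # as get_cv2_object above does with getattr on the cv2 module.
    try:
        return getattr(namespace, name)()
    except AttributeError:
        return getattr(namespace, "create" + name)()

# Hypothetical stand-in for the cv2 module:
ns = SimpleNamespace(Foo=lambda: "foo", createBar=lambda: "bar")
print(get_object(ns, "Foo"))  # → foo
print(get_object(ns, "Bar"))  # → bar ("Bar" is missing, "createBar" is found)
```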


Example 2

From project SimpleCV, under directory SimpleCV, in source file ImageClass.py.

Score: 8
import warnings

def _getRawKeypoints(self,thresh=500.00,flavor="SURF", highQuality=1, forceReset=False):
        """
        .. _getRawKeypoints:
        This method finds keypoints in an image and returns them as the raw keypoints
        and keypoint descriptors. When this method is called it caches the features
        and keypoints locally for quick and easy access.

        Parameters:
        thresh - The minimum quality metric for SURF descriptors. Good values
                 range between about 300.00 and 600.00

        flavor - a string indicating the method to use to extract features.
                 A good primer on how feature/keypoint extractors work can be found here:

                 http://en.wikipedia.org/wiki/Feature_detection_(computer_vision)
                 http://www.cg.tu-berlin.de/fileadmin/fg144/Courses/07WS/compPhoto/Feature_Detection.pdf


                 "SURF" - extract the SURF features and descriptors. If you don't know
                 what to use, use this.
                 See: http://en.wikipedia.org/wiki/SURF

                 "STAR" - The STAR feature extraction algorithm
                 See: http://pr.willowgarage.com/wiki/Star_Detector

                 "FAST" - The FAST keypoint extraction algorithm
                 See: http://en.wikipedia.org/wiki/Corner_detection#AST_based_feature_detectors

                 All the flavors specified below are for OpenCV versions >= 2.4.0:

                 "MSER" - Maximally Stable Extremal Regions algorithm

                 See: http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions

                 "Dense" - Dense Scale Invariant Feature Transform.

                 See: http://www.vlfeat.org/api/dsift.html

                 "ORB" - The Oriented FAST and Rotated BRIEF

                 See: http://www.willowgarage.com/sites/default/files/orb_final.pdf

                 "SIFT" - Scale-invariant feature transform

                 See: http://en.wikipedia.org/wiki/Scale-invariant_feature_transform

                 "BRISK" - Binary Robust Invariant Scalable Keypoints

                  See: http://www.asl.ethz.ch/people/lestefan/personal/BRISK

                 "FREAK" - Fast Retina Keypoints

                  See: http://www.ivpe.com/freak.htm
                  Note: FREAK is a keypoint descriptor, not a keypoint detector. SIFT
                  keypoints are detected and FREAK is used to extract the keypoint descriptors.

        highQuality - The SURF descriptor comes in two forms, a vector of 64 descriptor
                      values and a vector of 128 descriptor values. The latter are "high"
                      quality descriptors.

        forceReset - If keypoints have already been calculated for this image, those
                     keypoints are returned rather than being recalculated. If
                     forceReset is True we always recalculate the values; otherwise
                     we use the cached copies.

        Returns:
        A tuple of keypoint objects and optionally a numpy array of the descriptors.

        Example:
        >>> img = Image("aerospace.jpg")
        >>> kp,d = img._getRawKeypoints()

        Notes:
        If you would prefer to work with the raw keypoints and descriptors, each image
        keeps a local cache of the raw values. These are named:

        self._mKeyPoints # A tuple of keypoint objects
        See: http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_feature_detectors.html#keypoint-keypoint
        self._mKPDescriptors # The descriptor as a floating point numpy array
        self._mKPFlavor = "NONE" # The flavor of the keypoints as a string.

        See Also:
         ImageClass._getRawKeypoints(self,thresh=500.00,forceReset=False,flavor="SURF",highQuality=1)
         ImageClass._getFLANNMatches(self,sd,td)
         ImageClass.findKeypointMatch(self,template,quality=500.00,minDist=0.2,minMatch=0.4)
         ImageClass.drawKeypointMatches(self,template,thresh=500.00,minDist=0.15,width=1)

        """
        try:
            import cv2
            ver = cv2.__version__
            new_version = 0
            # For OpenCV versions up to 2.4.0, cv2.__version__ is of the form "$Rev: 4557 $"
            if not ver.startswith('$Rev:'):
                if int(ver.replace('.','0'))>=20400:
                    new_version = 1
        except:
            warnings.warn("Can't run Keypoints without OpenCV >= 2.3.0")
            return (None, None)

        if( forceReset ):
            self._mKeyPoints = None
            self._mKPDescriptors = None

        _detectors = ["SIFT", "SURF", "FAST", "STAR", "FREAK", "ORB", "BRISK", "MSER", "Dense"]
        _descriptors = ["SIFT", "SURF", "ORB", "FREAK", "BRISK"]
        if flavor not in _detectors:
            warnings.warn("Invalid choice of keypoint detector.")
            return (None, None)

        if self._mKeyPoints is not None and self._mKPFlavor == flavor:
            return (self._mKeyPoints, self._mKPDescriptors)

        if hasattr(cv2, flavor):

            if flavor == "SURF":
                # cv2.SURF(hessianThreshold, nOctaves, nOctaveLayers, extended, upright)
                detector = cv2.SURF(thresh, 4, 2, highQuality, 1)
                if new_version == 0:
                    self._mKeyPoints, self._mKPDescriptors = detector.detect(self.getGrayNumpy(), None, False)
                else:
                    self._mKeyPoints, self._mKPDescriptors = detector.detectAndCompute(self.getGrayNumpy(), None, False)
                if len(self._mKeyPoints) == 0:
                    return (None, None)
                if highQuality == 1:
                    self._mKPDescriptors = self._mKPDescriptors.reshape((-1, 128))
                else:
                    self._mKPDescriptors = self._mKPDescriptors.reshape((-1, 64))

            elif flavor in _descriptors:
                detector = getattr(cv2,  flavor)()
                self._mKeyPoints, self._mKPDescriptors = detector.detectAndCompute(self.getGrayNumpy(), None, False)
            elif flavor == "MSER":
                if hasattr(cv2, "FeatureDetector_create"):
                    detector = cv2.FeatureDetector_create("MSER")
                    self._mKeyPoints = detector.detect(self.getGrayNumpy())
        elif flavor == "STAR":
            detector = cv2.StarDetector()
            self._mKeyPoints = detector.detect(self.getGrayNumpy())
        elif flavor == "FAST":
            if not hasattr(cv2, "FastFeatureDetector"):
                warnings.warn("You need OpenCV >= 2.4.0 to support FAST")
                return None, None
            detector = cv2.FastFeatureDetector(int(thresh), True)
            self._mKeyPoints = detector.detect(self.getGrayNumpy(), None)
        elif hasattr(cv2, "FeatureDetector_create"):
            if flavor in _descriptors:
                extractor = cv2.DescriptorExtractor_create(flavor)
                if flavor == "FREAK":
                    if new_version == 0:
                        warnings.warn("You need OpenCV >= 2.4.3 to support FREAK")
                    flavor = "SIFT"
                detector = cv2.FeatureDetector_create(flavor)
                self._mKeyPoints = detector.detect(self.getGrayNumpy())
                self._mKeyPoints, self._mKPDescriptors = extractor.compute(self.getGrayNumpy(), self._mKeyPoints)
            else:
                detector = cv2.FeatureDetector_create(flavor)
                self._mKeyPoints = detector.detect(self.getGrayNumpy())
        else:
            warnings.warn("SimpleCV can't seem to find appropriate function with your OpenCV version.")
            return (None, None)
        return (self._mKeyPoints, self._mKPDescriptors)


Example 3

From project SimpleCV, under directory SimpleCV/Tracking, in source file SURFTracker.py.

Score: 5
import itertools
import cv2
import numpy as np

def surfTracker(img, bb, ts, **kwargs):
    """
    **DESCRIPTION**
    
    (Dev Zone)

    Tracking the object surrounded by the bounding box in the given
    image using SURF keypoints.

    Warning: Use this only if you know what you are doing. It is better to
    have a look at Image.track()

    **PARAMETERS**

    * *img* - Image - Image to be tracked.
    * *bb*  - tuple - Bounding Box tuple (x, y, w, h)
    * *ts*  - TrackSet - SimpleCV.Features.TrackSet.

    Optional PARAMETERS:

    eps_val     - eps for DBSCAN
                  The maximum distance between two samples for them 
                  to be considered as in the same neighborhood. 
                
    min_samples - min number of samples in DBSCAN
                  The number of samples in a neighborhood for a point 
                  to be considered as a core point. 
                  
    distance    - thresholding KNN distance of each feature
                  if KNN distance > distance, point is discarded.

    **RETURNS**

    SimpleCV.Features.Tracking.SURFTracker

    **HOW TO USE**

    >>> cam = Camera()
    >>> ts = []
    >>> img = cam.getImage()
    >>> bb = (100, 100, 300, 300) # get BB from somewhere
    >>> ts = surfTracker(img, bb, ts, eps_val=0.7, distance=150)
    >>> while (some_condition_here):
        ... img = cam.getImage()
        ... bb = ts[-1].bb
        ... ts = surfTracker(img, bb, ts, eps_val=0.7, distance=150)
        ... ts[-1].drawBB()
        ... img.show()

    This is rather confusing; it is better to use the
    Image.track() method.

    READ MORE:

    SURF based Tracker:
    Matches keypoints from the template image and the current frame.
    A FLANN based matcher is used to match the keypoints.
    Density-based clustering is used to classify points as in-region (of the
    bounding box) and out-region points. Using the in-region points, a new
    bounding box is predicted using k-means.
    """
    eps_val = 0.69
    min_samples = 5
    distance = 100

    for key in kwargs:
        if key == 'eps_val':
            eps_val = kwargs[key]
        elif key == 'min_samples':
            min_samples = kwargs[key]
        elif key == 'dist' or key == 'distance':
            distance = kwargs[key]

    from scipy.spatial import distance as Dis
    from sklearn.cluster import DBSCAN

    if len(ts) == 0:
        # Get template keypoints
        bb = (int(bb[0]), int(bb[1]), int(bb[2]), int(bb[3]))
        templateImg = img
        detector = cv2.FeatureDetector_create("SURF")
        descriptor = cv2.DescriptorExtractor_create("SURF")

        templateImg_cv2 = templateImg.getNumpyCv2()[bb[1]:bb[1]+bb[3], bb[0]:bb[0]+bb[2]]
        tkp = detector.detect(templateImg_cv2)
        tkp, td = descriptor.compute(templateImg_cv2, tkp)

    else:
        templateImg = ts[-1].templateImg
        tkp = ts[-1].tkp
        td = ts[-1].td
        detector = ts[-1].detector
        descriptor = ts[-1].descriptor

    newimg = img.getNumpyCv2()

    # Get image keypoints
    skp = detector.detect(newimg)
    skp, sd = descriptor.compute(newimg, skp)

    if td is None:
        print "Descriptors are Empty"
        return None

    if sd is None:
        track = SURFTrack(img, skp, detector, descriptor, templateImg, skp, sd, tkp, td)
        return track

    # flann based matcher
    flann_params = dict(algorithm=1, trees=4)
    flann = cv2.flann_Index(sd, flann_params)
    idx, dist = flann.knnSearch(td, 1, params={})
    del flann

    # filter points using distance criteria
    dist = (dist[:,0]/2500.0).reshape(-1,).tolist()
    idx = idx.reshape(-1).tolist()
    indices = sorted(range(len(dist)), key=lambda i: dist[i])

    dist = [dist[i] for i in indices]
    idx = [idx[i] for i in indices]
    skp_final = []
    skp_final_labelled = []
    data_cluster = []

    for i, dis in itertools.izip(idx, dist):
        if dis < distance:
            skp_final.append(skp[i])
            data_cluster.append((skp[i].pt[0], skp[i].pt[1]))

    # Use density-based clustering to further filter out keypoints
    n_data = np.asarray(data_cluster)
    D = Dis.squareform(Dis.pdist(n_data))
    S = 1 - (D/np.max(D))
    
    db = DBSCAN(eps=eps_val, min_samples=min_samples).fit(S)
    core_samples = db.core_sample_indices_
    labels = db.labels_
    for label, i in zip(labels, range(len(labels))):
        if label==0:
            skp_final_labelled.append(skp_final[i])

    track = SURFTrack(img, skp_final_labelled, detector, descriptor, templateImg, skp, sd, tkp, td)

    return track
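The post-knnSearch filtering step (sort matches by distance, keep only those under the threshold) can be sketched without OpenCV at all; the `filter_matches` helper and the sample arrays below are our own illustration, not SimpleCV API:

```python
import numpy as np

def filter_matches(idx, dist, distance=100):
    # Sort matches best-first, then keep match indices whose nearest-neighbour
    # distance falls under the threshold, mirroring the dist/idx filtering
    # loop in surfTracker above.
    order = np.argsort(dist)
    return [int(idx[i]) for i in order if dist[i] < distance]

# Hypothetical match data: three matches, one beyond the threshold.
print(filter_matches(np.array([2, 0, 1]), np.array([50.0, 150.0, 10.0])))
# → [1, 2]
```

The surviving keypoints are then passed to DBSCAN, which discards outliers that matched well individually but do not cluster with the tracked object.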
