QRcode Decode & Detect Improving: Weekly Report on July 27th

Weekly Report on July 27th

by ZhengQiushi

This week I have done two tasks:
1. Complete the decode process for numeric and byte modes.
2. Improve the ability to sample from the QR code image.

[Flowchart: call structure of detectAndDecode: detect and decode produce the payload; detect_ uses checkQRInputImage, localization and computeTransformationPoints; decode_ uses unmask_data, read_format, read_data, rearrange_blocks and decode_payload, which calls get_bits, decode_numeric and decode_byte]

1.Complete the Decode Process

1.1 The Process of Decoding the Mode

The data is encoded in the following format:

    Data = Mode + Character Count Indicator + Binary Data + Terminator

The mode indicator is always 4 bits, and the supported decoding modes are defined as follows:

    typedef enum {
        QR_MODE_NUL        = 0b0000,   // Terminator
        QR_MODE_ECI        = 0b0111,   // ECI mode
        QR_MODE_NUM        = 0b0001,   // Numeric mode
        QR_MODE_ALPHA      = 0b0010,   // Alphanumeric mode
        QR_MODE_BYTE       = 0b0100,   // 8-bit data mode
        QR_MODE_KANJI      = 0b1000,   // Kanji (shift-jis) mode
        QR_MODE_STRUCTURE  = 0b0011,   // Internal use only
        QR_MODE_FNC1FIRST  = 0b0101,   // FNC1, first position
        QR_MODE_FNC1SECOND = 0b1001,   // FNC1, second position
    } QRencodeMode;

The width of the character count indicator differs according to the mode and the version.
The binary data is divided into groups and converted to the target character encoding.
The terminator consists of zeros, so it is convenient to define a QR_MODE_NUL entry in the mode enumeration for it.
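As a concrete illustration, here is the classic example from the QR code specification (shown for reference, not taken from the patch): the digits 01234567 encoded in numeric mode for a version 1 symbol produce the following bit stream.

    Mode indicator       : 0001         (numeric mode)
    Character count (8)  : 0000001000   (10 bits for versions 1-9)
    "012" = 12           : 0000001100   (10-bit group)
    "345" = 345          : 0101011001   (10-bit group)
    "67"  = 67           : 1000011      (7-bit group for the 2 remaining digits)
    Terminator           : 0000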

I improved this part by using a Mat to store the bits instead of uint8_t, which simplifies bit fetching: we can read the bits directly and avoid the extra step of unpacking each uint8_t into 8 separate bits.
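For illustration, here is a minimal sketch of a bit reader under this representation. This is my reading of the idea, not the exact patch code; it assumes the bits live in a single-row CV_8UC1 Mat where every element stores one bit, and get_bits_from_bit_row is a hypothetical name.

    /*Sketch only (hypothetical helper): reads `len` bits MSB-first from a buffer in which
      each byte stores a single bit (0/1), as when the bits are kept in a CV_8UC1 Mat row.*/
    static int get_bits_from_bit_row(int len, const uint8_t *&ptr){
        int value = 0;
        for (int i = 0; i < len; i++){
            value = (value << 1) | (*ptr++ & 1);  /*append the next bit and advance*/
        }
        return value;
    }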

This flowchart depicts the whole decoding process:

[Flowchart: Start → while remaining bits >= 4: GettingModeBits → Decoding; otherwise Over]
    decode_error QRDecode::decode_payload(){
        decode_error err = SUCCESS;
        uint8_t *ptr = &final_data.ptr(0)[0];
        while (bits_remaining(ptr) >= 4){
            int mode = get_bits(4, ptr);
            switch (mode){
                case QR_MODE_NUL:
                    /*terminator: jump to the end of the bit stream*/
                    ptr = &final_data.ptr(0)[final_data.cols - 1];
                    break;
                case QR_MODE_NUM:
                    err = decode_numeric(ptr);
                    break;
                case QR_MODE_BYTE:
                    err = decode_byte(ptr);
                    break;
            }
            if (err != SUCCESS)
                return err;
        }
        return SUCCESS;
    }

1.2 Numeric Mode

[Flowchart: Start → UpdateCharacterCounter → GettingDataBits → while remaining count >= 3: decode a 10-bit group into 3 digits; then decode the final 2/1-digit group → Over]
    decode_error QRDecode::decode_numeric(uint8_t *&ptr){
        int count = 0;
        /*update the character count indicator width*/
        int bits = 10;
        if (version >= 27)
            bits = 14;
        else if (version >= 10)
            bits = 12;
        /*get the character count*/
        count = get_bits(bits, ptr);
        if (payload_len + count + 1 > MAX_PAYLOAD){
            cout << "ERROR_DATA_OVERFLOW" << endl;
            return ERROR_DATA_OVERFLOW;
        }
        /*every 3 digits are packed into a 10-bit group*/
        while (count >= 3){
            int num = get_bits(10, ptr);
            payload[payload_len++] = num / 100 + '0';
            payload[payload_len++] = (num % 100) / 10 + '0';
            payload[payload_len++] = num % 10 + '0';
            count -= 3;
        }
        /*the final group*/
        if (count == 2){
            /*2 remaining digits: 7-bit group*/
            int num = get_bits(7, ptr);
            payload[payload_len++] = (num % 100) / 10 + '0';
            payload[payload_len++] = num % 10 + '0';
        }
        else if (count == 1){
            /*1 remaining digit: 4-bit group*/
            int num = get_bits(4, ptr);
            payload[payload_len++] = num % 10 + '0';
        }
        return SUCCESS;
    }

1.3 Byte Mode

[Flowchart: Start → UpdateCharacterCounter → GettingDataBits → while i < count: get 8 bits and load them into the payload → Over]
    decode_error QRDecode::decode_byte(uint8_t *&ptr){
        /*update the character count indicator width*/
        int bits = 8;
        int count = 0;
        if (version > 9)
            bits = 16;
        /*get the character count*/
        count = get_bits(bits, ptr);
        if (payload_len + count + 1 > MAX_PAYLOAD){
            return ERROR_DATA_OVERFLOW;
        }
        if (bits_remaining(ptr) < count * 8){
            return ERROR_DATA_UNDERFLOW;
        }
        for (int i = 0; i < count; i++){
            int tmp = get_bits(8, ptr);
            payload[payload_len++] = tmp;
        }
        return SUCCESS;
    }

2.Improve the Sampling Ability

The original code is not stable when decoding some QR codes. The problem is its poor sampling when transforming the image into the straightened bit matrix, which is caused by inaccurate corner-point positions. So I am going to improve it.
The whole process is as follows:

[Flowchart: Start → Find Locator → Get Corner Points → Transform → Over]

2.1 Find Locator

The original method uses the fixed 1:1:3:1:1 ratio to detect the locator, which is not reliable when the picture is distorted.
For example, the original detector cannot detect the image below, because the visual distortion changes the fixed ratio.
[figure: a distorted QR code that the original detector fails on]
And here is my method:

[Flowchart: Start → Canny → getPossibleContours → getPatternCenters → getTruePattern → Over]
2.1.1 Canny

Apply Canny edge detection to prepare for the contour detection that follows.
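For reference, a minimal sketch of this preprocessing step (getEdgeContours is a hypothetical name; it assumes the usual OpenCV includes and the cv/std namespaces used by the other snippets, and the Canny thresholds are illustrative):

    /*Sketch only: Canny edges, then contours extracted with the full hierarchy
      (RETR_TREE) so that the parent/child relations used below are available.*/
    void getEdgeContours(const Mat &gray,
                         vector<vector<Point>> &contours,
                         vector<Vec4i> &hierarchy){
        Mat edges;
        Canny(gray, edges, 100, 200);
        findContours(edges, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE);
    }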

2.1.2 getPossibleContours

This step gets all the possible contours in the first round.

[Flowchart: Start → isPossibleCorner → checkRatioOfContours → Over]

isPossibleCorner excludes contours that do not have a child.
checkPattern excludes contours whose inside ratio of light area to dark area is obviously unexpected. The standard ratio is 5^2/6^2.

    void getPossibleContours(const vector<vector<Point>> &contours,
                             const vector<Vec4i> &hierarchy,
                             const int &levelsNum,
                             vector<pair<vector<Point>,int>> &patterns){
        int len = (int)hierarchy.size();
        for (int i = 0; i < len; i++)
            if (is_Possible_Corner(contours, hierarchy, i, levelsNum))
                patterns.push_back(pair<vector<Point>,int>(contours[i], i));
        return;
    }
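is_Possible_Corner itself is not listed in this report. Here is a minimal sketch of what it might look like, based only on the description above (same OpenCV types and namespaces as the snippets around it); I additionally assume levelsNum gives the required nesting depth and that hierarchy entries follow OpenCV's {next, previous, first_child, parent} layout.

    /*Sketch only: a candidate finder pattern must contain at least `levelsNum`
      nested child contours. hierarchy[i] = {next, previous, first_child, parent}.*/
    bool is_Possible_Corner(const vector<vector<Point>> &contours,
                            const vector<Vec4i> &hierarchy,
                            int index, int levelsNum){
        (void)contours;                      /*the contour points are not needed here*/
        int depth = 0;
        int child = hierarchy[index][2];     /*first child, -1 if none*/
        while (child != -1 && depth < levelsNum){
            depth++;
            child = hierarchy[child][2];     /*descend one nesting level*/
        }
        return depth >= levelsNum;
    }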

The green contours are the possible ones, and we will do a further screening.
[figure: candidate contours marked in green]

2.1.3 getPatternCenters

This step excludes some unexpected contours in the second round by comparing detailed image features.

[Flowchart: Start → childInList → checkRatioOfContours → compareWithStandard → computeCenter → Over]
    void getPatternCenters(const Mat &bin_barcode,
                           const vector<pair<vector<Point>,int>> &first_patterns,
                           const vector<Vec4i> &hierarchy,
                           vector<Point> &centers, vector<vector<Point>> &location_patterns,
                           double &locator_size){
        classifer::my_classifer locator("../sample.png");
        locator_size = 0;
        int len = (int)first_patterns.size();
        for (int i = 0; i < len; i++){
            int is_pattern;
            for (int j = 0; j < len; j++){
                is_pattern = 1;
                /*check whether the current contour's direct parent is in the list*/
                if (hierarchy[first_patterns[i].second][3] - 1 == first_patterns[j].second){
                    /*if so, it cannot be a pattern of interest*/
                    is_pattern = 0;
                    break;
                }
            }
            if (is_pattern == 1){
                /*get the possible locator*/
                RotatedRect tmp = minAreaRect(first_patterns[i].first);
                float ratio = tmp.size.height / tmp.size.width;
                /*exclude by shape (must be roughly square)*/
                if (ratio < 2.5 && ratio > 0.5){
                    Mat results = getSubMat(bin_barcode, tmp);
                    if (!locator.compare(results))
                        continue;
                    locator_size += (tmp.size.height + tmp.size.width) / 2;
                    vector<Point> hull;
                    convexHull(first_patterns[i].first, hull, true);
                    Point cur_center = getCenterOfMass(first_patterns[i].first);
                    centers.push_back(cur_center);
                    location_patterns.push_back(hull);
                }
            }
        }
        return;
    }

first_patterns[i] is a pair<vector<Point>, int>: the vector<Point> is the point set of the contour and the int is the index of the contour.

childInList excludes contours whose direct parent is already in the candidate list.

checkRatioOfContours excludes contours with an obviously unexpected size.

compareWithStandard compares the target image with a standard 64×64 locator image Mat and evaluates its score by scanning every pixel. If the total score is lower than a fixed threshold, the contour is removed from the candidate list.

    bool my_classifer::compare(const cv::Mat &t){
        int score = 0;
        for (uint32_t i = 0; i < size; ++i){
            /*compare pixel by pixel*/
            const uchar* lhs = sample.ptr<uchar>(i);
            const uchar* rhs = t.ptr<uchar>(i);
            for (uint32_t j = 0; j < size; ++j){
                bool l = *lhs++, r = *rhs++;
                if (!l && !r)
                    continue;
                /*same pixel: +3, different pixel: -2*/
                score += (l == r) ? 3 : -2;
            }
        }
        return score > thresh;
    }
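The patch passed to compare is produced by getSubMat, which is not listed here. A minimal sketch, assuming it crops the candidate's rotated bounding box from the binarized image and resizes it to the 64×64 size the classifier expects:

    /*Sketch only: crop the rotated bounding box from the binarized image and resize it
      to the classifier's sample size (assumed to be 64x64 here).*/
    Mat getSubMat(const Mat &bin_barcode, const RotatedRect &box){
        Mat rotation = getRotationMatrix2D(box.center, box.angle, 1.0);
        Mat rotated;
        warpAffine(bin_barcode, rotated, rotation, bin_barcode.size());
        Mat patch;
        Size patch_size(cvRound(box.size.width), cvRound(box.size.height));
        getRectSubPix(rotated, patch_size, box.center, patch);
        resize(patch, patch, Size(64, 64));
        return patch;
    }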

computeCenter calculates the center of mass of the candidate contour.

    Point getCenterOfMass(const vector<Point>& contour){
        Moments moment = moments(contour);
        return Point(int(moment.m10/moment.m00), int(moment.m01/moment.m00));
    }
2.1.4 getTruePattern

This step excludes some unexpected contours in the third round by calculating geometric features.
We select three points as a group (p1, p2, p3) and draw two lines, p1-p2 and p1-p3. Then we compare the lengths of the two lines, the area ratio of the corresponding patterns, and the angle between the lines. When p1 is the up-left locator, the standard angle is 90 degrees, the area ratio is 1, and the length difference between the two lines is 0.
By doing this, we get the three locators, and the up-left one is always placed at the beginning of pattern_points.

    void getTruePattern(const vector<Point>& centers, const vector<vector<Point>>& location_patterns,
                        vector<Point2f> &pattern_points, vector<vector<Point>> &pattern_hull){
        int len_2 = (int)centers.size();
        double dist_diff_min = DBL_MAX;
        /*line up two pairs of centers and compare the lengths of the two lines*/
        for (int i = 0; i < len_2; i++){
            for (int j = 0; j < len_2; j++){
                if (i == j)
                    continue;
                /*line1 : i -> j*/
                Point p1 = centers[i];
                Point p2 = centers[j];
                double dist_1 = norm(p1 - p2);
                for (int p = 0; p < len_2; p++){
                    if (i == p || j == p)
                        continue;
                    /*line2 : i -> p*/
                    Point p3 = centers[p];
                    double dist_2 = norm(p1 - p3);
                    double line1_k = (p2.y - p1.y) / (p2.x - p1.x + 0.0001);
                    double line2_k = (p3.y - p1.y) / (p3.x - p1.x + 0.0001);

                    /*the angle between the two lines*/
                    double angle = getLinesArctan(line1_k, line2_k, true);
                    double area_i = contourArea(location_patterns[i]);
                    double area_j = ...

                    double ratio_i_j = area_i / area_j;
                    double ratio_i_p = area_i / area_p;
                    if (angle < 0)
                        continue;
                    /*the angle should be near 90 degrees*/
                    if (angle < 120 && angle > 60 &&
                        ratio_i_j < 1.4 && ratio_i_j > 0.6 &&
                        ratio_i_p < 1.4 && ratio_i_p > 0.6){
                        double dist_diff = fabs(dist_1 - dist_2);
                        /*keep the group with the least length difference*/
                        if (dist_diff_min > dist_diff){
                            dist_diff_min = dist_diff;
                            pattern_points.clear();
                            pattern_hull.clear();
                            /* i is always the up-left location pattern */
                            pattern_points.push_back(p1);
                            ...
                            pattern_hull.push_back(location_patterns[i]);
                            ...
                        }
                    }
                }
            }
        }
        return;
    }
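getLinesArctan is also not listed. A minimal sketch, assuming it simply returns the absolute angle between two lines given their slopes (in degrees when the last flag is true); the real version may additionally signal invalid cases with a negative value:

    /*Sketch only: angle between two lines given their slopes, in degrees when
      in_degrees is true.*/
    double getLinesArctan(double line1_k, double line2_k, bool in_degrees){
        double a1 = atan(line1_k);            /*angle of line 1 w.r.t. the x-axis*/
        double a2 = atan(line2_k);            /*angle of line 2 w.r.t. the x-axis*/
        double angle = fabs(a1 - a2);         /*radians, in [0, pi)*/
        return in_degrees ? angle * 180.0 / CV_PI : angle;
    }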

The detection result:
[figure: detection result]

2.2 Get Corner Points

After the above process, we have found the three locators. Then we need to find the last corner point for the later transformation.

[Flowchart: Start → getCorners → rotationCorrection → adjustBorder → recoverToOriginalImage → Over]
2.2.1 Get Corners

We need to find the last corner (the bottom-right point), and we already have three contours (the first contour in the vector is the up-left one).

Firstly, we can calculate the center of mass of the three contours (which is the red point).

Secondly, we find the corner points, which are the points in each contour set farthest from the mass center (the dark blue points).

    /* @name  : getFarPoints
     * @param : hull1 (a convex hull or a contour), point2 (the reference point), point1 (the result point)
     * @func  : get the point of the hull that is farthest from point2
     * */
    void getFarPoints(const vector<Point>& hull1,
                      const Point &point2,
                      Point &point1){
        double dist_max = 0;
        int len_1 = (int)hull1.size();
        for (int i = 0; i < len_1; i++){
            Point p1 = hull1[i];
            double dist = norm(p1 - point2);
            if (dist > dist_max){
                dist_max = dist;
                point1 = p1;
            }
        }
        return;
    }

Thirdly, we find another two points (I call them right_up_second and left_bottom_second, the light blue points). They are the points in the other two contour point sets that are farthest from the up-left corner point.

    /* @name  : getFarPoints (overload)
     * @param : hull1 (a convex hull or a contour), point2 (the reference point),
     *          limit (a corner point to stay away from), locator_size, point1 (the result point)
     * @func  : get the point of the hull that is farthest from point2
     * */
    void getFarPoints(const vector<Point>& hull1,
                      const Point &point2, const Point &limit, const double &locator_size,
                      Point &point1){
        double dist_max = 0;
        int len_1 = (int)hull1.size();
        for (int i = 0; i < len_1; i++){
            Point p1 = hull1[i];
            double dist = norm(p1 - point2);
            double dist_to_parent = norm(p1 - limit);
            /*the point cannot be around its parent corner point*/
            if (dist_to_parent < locator_size / 2)
                continue;
            if (dist > dist_max){
                dist_max = dist;
                point1 = p1;
            }
        }
        return;
    }

Lastly, we draw a line through each dark blue and light blue point pair, and the intersection of the two lines is the last corner point.
[figure: the two lines and the intersection giving the bottom-right corner]
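Finding that intersection is a standard two-line intersection; here is a minimal sketch with a hypothetical helper (not taken from the patch):

    /*Sketch only: intersection of the line through (a1, a2) with the line through (b1, b2),
      used here to recover the fourth (bottom-right) corner. Returns false for parallel lines.*/
    bool intersectLines(const Point2f &a1, const Point2f &a2,
                        const Point2f &b1, const Point2f &b2, Point2f &out){
        Point2f r = a2 - a1, s = b2 - b1;
        double denom = (double)r.x * s.y - (double)r.y * s.x;   /*cross product of directions*/
        if (fabs(denom) < 1e-9)
            return false;
        double t = ((b1.x - a1.x) * s.y - (b1.y - a1.y) * s.x) / denom;
        out = a1 + Point2f((float)(t * r.x), (float)(t * r.y));
        return true;
    }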

2.2.2 Rotation Correction

I found it performs better when I do a rotation correction before finding the last point. By doing so, I can get more accurate locator corners and sampling borders.
First, do a rotation correction.
[figure: the image after rotation correction]
Then repeat step 2.2.1.
[figure: corner points found on the corrected image]
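A minimal sketch of such a rotation correction, assuming the angle is taken from the line between the up-left and up-right locator centers; the function name and the exact angle source are my assumptions:

    /*Sketch only: rotate the binarized image so that the top edge of the code
      (up-left locator -> up-right locator) becomes horizontal. The returned 2x3
      matrix is kept so the rotation can be undone later (step 2.2.4).*/
    Mat rotateToUpright(const Mat &bin_barcode, const Point2f &up_left, const Point2f &up_right,
                        Mat &rotation){
        double angle = atan2(up_right.y - up_left.y, up_right.x - up_left.x) * 180.0 / CV_PI;
        rotation = getRotationMatrix2D(up_left, angle, 1.0);
        Mat corrected;
        warpAffine(bin_barcode, corrected, rotation, bin_barcode.size());
        return corrected;
    }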

2.2.3 Adjust Sampling Borders

To make the sampling borders more accurate, we need to adjust the corners' positions.
The method works on two points at a time. First we move the points outward (away from the mass center) to make sure all the data modules are included by the border. Then we move the points inward to minimize the blank margin.

    void adjustPoints(const Mat &bin_barcode, const Point &shift, const double &locator_size, Point &p1, Point &p2){
        int size = (int)locator_size;
        subAdjust(bin_barcode, shift, size, 1, p1, p2);   /*move p1 out, then p2 out*/
        subAdjust(bin_barcode, shift, size, 1, p2, p1);
        subAdjust(bin_barcode, shift, size, 0, p1, p2);   /*move p1 in, then p2 in*/
        subAdjust(bin_barcode, shift, size, 0, p2, p1);
        return;
    }
    void subAdjust(const Mat &bin_barcode, const Point &shift, const int &size, bool out_in, Point &p1, Point &p2){
        LineIterator iter = LineIterator(p1, p2);
        int i;
        while (1){
            /*scan up to `size` pixels along the border line, looking for a dark pixel*/
            for (i = 0; i < size; i++, iter++){
                Point cur = iter.pos();
                int pixel = (int)bin_barcode.at<uint8_t>(cur);
                if (pixel < 200){
                    break;
                }
            }
            if (out_in){ /*move out until the line no longer crosses dark modules*/
                if (i == size){
                    break;
                }
                p1 += shift;
                iter = LineIterator(p1, p2);
            }
            else { /*move in until the line touches a dark module again*/
                if (i == size){
                    p1 -= shift;
                    iter = LineIterator(p1, p2);
                }
                else
                    break;
            }
        }
    }
2.2.4 Recover To Original Image

Because of the rotation correction, we need to undo the rotation to get the corner points in the original image.
Since we only did a warpAffine, it is easy to recover them by rotating back by the same angle.
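A minimal sketch of that recovery, assuming the corners are mapped back with the inverse of the same 2×3 matrix that was given to warpAffine (the helper name is hypothetical):

    /*Sketch only: map corner points found in the rotation-corrected image back to the
      original image using the inverse of the affine matrix used by warpAffine.*/
    void recoverPoints(const Mat &rotation, vector<Point2f> &corners){
        Mat inverse;
        invertAffineTransform(rotation, inverse);   /*invert the 2x3 affine matrix*/
        transform(corners, corners, inverse);       /*apply it to every corner point*/
    }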
The final result is shown below:
[figures: final corner detection results]

3. Next Week's Tasks

1.Improve the sampling ability

I find the sampling result is often unreliable, especially when the version is very large, which is caused by the accumulation of small deviations in the sampling tile.
I am going to make use of the alignment patterns. The alignment patterns divide the whole data area into several regions. When sampling enters a new region, the sampling tile should be adjusted by rescaling the sampling step.

2.Finish the other decode modes

Date: 2020.07.27
