Porting anisotropic image segmentation on G-API

I. Prerequisites

1. Gradient structure tensor

The structure tensor is mainly used to distinguish the flat, edge, and corner regions of an image.

The tensor here is a structure matrix built from the image, with the following form:

    T = | Rx·Rx  Rx·Ry |
        | Rx·Ry  Ry·Ry |

Rx and Ry are the horizontal and vertical image gradients, respectively; from the matrix T we then compute its determinant K and its trace H.

The relation between K and H separates the flat, edge, and corner regions of the image:

- Flat region: H = 0;
- Edge region: H > 0 && K = 0;
- Corner region: H > 0 && K > 0;
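The classification above can be sketched in plain C++. This is an illustrative sketch, not from the original post: `classifyRegion` is a name invented here, and the `eps` tolerance stands in for the exact "= 0" tests, which never hold exactly on real images.

```cpp
#include <cassert>
#include <string>

// Minimal sketch: classify a pixel neighbourhood from its structure
// tensor T = | txx  txy |
//            | txy  tyy |
// where txx = Rx*Rx, txy = Rx*Ry, tyy = Ry*Ry (window-averaged).
// H = trace(T), K = det(T); eps stands in for the exact "== 0" tests.
std::string classifyRegion(double txx, double txy, double tyy,
                           double eps = 1e-6) {
    double H = txx + tyy;              // trace
    double K = txx * tyy - txy * txy;  // determinant
    if (H < eps) return "flat";        // H == 0
    if (K < eps) return "edge";        // H > 0, K == 0
    return "corner";                   // H > 0, K > 0
}
```

For example, a purely horizontal gradient (txx > 0, txy = tyy = 0) yields H > 0 but K = 0, i.e. an edge; two independent gradient directions make both H and K positive, i.e. a corner.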

2. Coherency and orientation

From the eigenvalues λ1 ≥ λ2 of T, two per-pixel measures are derived (J11, J22, J12 denote the window-averaged gradient products Rx·Rx, Ry·Ry, and Rx·Ry):

- Coherency: C = (λ1 − λ2) / (λ1 + λ2). C is close to 1 for strongly oriented (anisotropic) structures and close to 0 for isotropic regions.
- Orientation: α = 0.5 · atan2(2·J12, J22 − J11), expressed in degrees in [0, 180).
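For intuition, the per-pixel arithmetic behind these two measures can be written out directly. This is a hedged sketch: `GSTResult` and `gstAtPixel` are names invented here, mirroring the expressions used in `calcGST` in the code below.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the per-pixel math behind coherency and orientation.
// J11, J22, J12 are the window-averaged gradient products; the result
// holds coherency in [0, 1] and orientation in degrees in [0, 180).
struct GSTResult { double coherency; double orientation_deg; };

GSTResult gstAtPixel(double J11, double J22, double J12) {
    const double PI = std::acos(-1.0);
    double trace = J11 + J22;
    double root  = std::sqrt((J11 - J22) * (J11 - J22) + 4.0 * J12 * J12);
    double lambda1 = trace + root;   // proportional to the larger eigenvalue
    double lambda2 = trace - root;   // proportional to the smaller eigenvalue
    double coherency = (lambda1 + lambda2 > 0.0)
                           ? (lambda1 - lambda2) / (lambda1 + lambda2)
                           : 0.0;
    double angle = 0.5 * std::atan2(2.0 * J12, J22 - J11) * 180.0 / PI;
    if (angle < 0.0) angle += 180.0;  // map into [0, 180), as cv::phase does
    return {coherency, angle};
}
```

A purely horizontal gradient (J11 > 0, J22 = J12 = 0) gives coherency 1 and orientation 90°, while an isotropic patch (J11 = J22, J12 = 0) gives coherency 0.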

II. Code implementation

```cpp
#include <iostream>
#include <utility>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/gapi.hpp"
#include "opencv2/gapi/core.hpp"
#include "opencv2/gapi/imgproc.hpp"
void calcGST(const cv::GMat& inputImg, cv::GMat& imgCoherencyOut, cv::GMat& imgOrientationOut, int w);
int main()
{
    int W = 52;             // window size is WxW
    double C_Thr = 0.43;    // threshold for coherency
    int LowThr = 35;        // threshold1 for orientation, it ranges from 0 to 180
    int HighThr = 57;       // threshold2 for orientation, it ranges from 0 to 180
    cv::Mat imgIn = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (imgIn.empty()) //check whether the image is loaded or not
    {
        std::cout << "ERROR : Image cannot be loaded..!!" << std::endl;
        return -1;
    }
    // Calculate Gradient Structure Tensor and post-process it for output with G-API
    cv::GMat in;
    cv::GMat imgCoherency, imgOrientation;
    calcGST(in, imgCoherency, imgOrientation, W);
    cv::GMat imgCoherencyBin = imgCoherency > C_Thr;
    cv::GMat imgOrientationBin = cv::gapi::inRange(imgOrientation, LowThr, HighThr);
    cv::GMat imgBin = imgCoherencyBin & imgOrientationBin;
    cv::GMat out = cv::gapi::addWeighted(in, 0.5, imgBin, 0.5, 0.0);
    // Normalize extra outputs
    cv::GMat imgCoherencyNorm = cv::gapi::normalize(imgCoherency, 0, 255, cv::NORM_MINMAX);
    cv::GMat imgOrientationNorm = cv::gapi::normalize(imgOrientation, 0, 255, cv::NORM_MINMAX);
    // Capture the graph into object segm
    cv::GComputation segm(cv::GIn(in), cv::GOut(out, imgCoherencyNorm, imgOrientationNorm));
    // Define cv::Mats for output data
    cv::Mat imgOut, imgOutCoherency, imgOutOrientation;
    // Run the graph
    segm.apply(cv::gin(imgIn), cv::gout(imgOut, imgOutCoherency, imgOutOrientation));
    cv::imwrite("result.jpg", imgOut);
    cv::imwrite("Coherency.jpg", imgOutCoherency);
    cv::imwrite("Orientation.jpg", imgOutOrientation);
    return 0;
}
void calcGST(const cv::GMat& inputImg, cv::GMat& imgCoherencyOut, cv::GMat& imgOrientationOut, int w)
{
    auto img = cv::gapi::convertTo(inputImg, CV_32F);
    auto imgDiffX = cv::gapi::Sobel(img, CV_32F, 1, 0, 3);
    auto imgDiffY = cv::gapi::Sobel(img, CV_32F, 0, 1, 3);
    auto imgDiffXY = cv::gapi::mul(imgDiffX, imgDiffY);
    auto imgDiffXX = cv::gapi::mul(imgDiffX, imgDiffX);
    auto imgDiffYY = cv::gapi::mul(imgDiffY, imgDiffY);
    auto J11 = cv::gapi::boxFilter(imgDiffXX, CV_32F, cv::Size(w, w));
    auto J22 = cv::gapi::boxFilter(imgDiffYY, CV_32F, cv::Size(w, w));
    auto J12 = cv::gapi::boxFilter(imgDiffXY, CV_32F, cv::Size(w, w));
    auto tmp1 = J11 + J22;
    auto tmp2 = J11 - J22;
    auto tmp22 = cv::gapi::mul(tmp2, tmp2);
    auto tmp3 = cv::gapi::mul(J12, J12);
    auto tmp4 = cv::gapi::sqrt(tmp22 + 4.0*tmp3);
    auto lambda1 = tmp1 + tmp4;
    auto lambda2 = tmp1 - tmp4;
    imgCoherencyOut = (lambda1 - lambda2) / (lambda1 + lambda2);
    imgOrientationOut = 0.5*cv::gapi::phase(J22 - J11, 2.0*J12, true);
}
```
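As a sanity check on what the graph's post-processing stage does (coherency threshold, orientation range test, bitwise AND, 50/50 blend), the same logic can be mimicked per pixel in plain C++. This is an illustrative sketch only: `segmentPixel` is a name invented here, and the final cast truncates rather than rounding the way OpenCV's saturate_cast does.

```cpp
#include <cassert>
#include <cstdint>

// Per-pixel sketch of the post-processing expressed with G-API above:
// keep a pixel when coherency > C_Thr AND orientation lies in
// [LowThr, HighThr], then blend the 0/255 mask 50/50 with the input.
uint8_t segmentPixel(uint8_t input, double coherency, double orientation,
                     double C_Thr, int LowThr, int HighThr) {
    uint8_t cohBin = (coherency > C_Thr) ? 255 : 0;            // imgCoherency > C_Thr
    uint8_t oriBin = (orientation >= LowThr && orientation <= HighThr)
                         ? 255 : 0;                            // gapi::inRange
    uint8_t bin = cohBin & oriBin;                             // bitwise AND
    return static_cast<uint8_t>(0.5 * input + 0.5 * bin);      // 50/50 blend
}
```

A pixel passes the mask only when both tests hold; otherwise the blend simply halves the input intensity, which is why rejected regions appear darkened in result.jpg.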
