Building OpenCV 4.4 on Ubuntu 20.04, adding Chinese annotations to images, notes on TensorFlow issues, and character recognition with cv::text::OCRTesseract on Ubuntu

Throughout this post I believed I had downloaded and built the OpenCV 4.1.0 sources, but it was actually 4.4.0. Incidentally, the OpenCV 4.1.0 I built on Ubuntu 16.04 works directly on Ubuntu 20.04 without problems, and the OpenCV 4.4.0 I built below on Ubuntu 20.04 (which I kept thinking was 4.1.0) also works fine there.

If you have read my earlier posts you know I was using Ubuntu 16.04 + OpenCV 3.4.1 + TensorFlow, which had been stable for over a year. This time, however, I had to upgrade to Ubuntu 20.04, and after porting the whole setup over I ran into many errors, such as:

A few libraries also appeared to be missing. They were not actually missing: on Ubuntu 16.04 the library was, say, libxxx5.so, while the linker now complained that libxxx6.so was absent, so I simply created symbolic links to resolve it. The screenshot above also shows "libtensorflow_cc.so: .dynsym local symbol at index 1552 (>= sh_info of 2)":
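The symlink workaround can be sketched as follows. `libfoo` here is a placeholder for whichever library the linker complains about, and the version numbers should match your actual error message:

```shell
# Placeholder demo in /tmp; the real libraries live under /usr/lib or similar.
mkdir -p /tmp/soname-demo
cd /tmp/soname-demo
touch libfoo.so.5                  # the library version actually installed
ln -sf libfoo.so.5 libfoo.so.6     # satisfy the request for the newer soname
ls -l libfoo.so.6
```

This only works when the two versions are ABI-compatible enough for your program; it is a pragmatic workaround, not a guaranteed fix.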

/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/libtensorflow_cc.so: .dynsym local symbol at index 1552 (>= sh_info of 2)
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/libtensorflow_cc.so: .dynsym local symbol at index 2428 (>= sh_info of 2)
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/libtensorflow_cc.so: .dynsym local symbol at index 2429 (>= sh_info of 2)

Errors like these three are easy to fix: just add -fuse-ld=gold to the linker flags.

As shown above; in any other IDE, likewise append -fuse-ld=gold to the linker options.

As you can see, the classes are now output correctly.

**********************************************************************************************************

However, I did not manage to resolve the OpenCV errors shown in figure 1, so I decided to build OpenCV 4.1.0 from source on Ubuntu 20.04 with CMake 3.18.1. There are plenty of tutorials online, but during the cmake step the log reported errors like the following:

#use_cache "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache"
#do_unpack "ippicv_2020_lnx_intel64_20191018_general.tgz" "7421de0095c7a39162ae13a6098782f9" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/a56b6ac6f030c312b2dce17430eef13aed9af274/ippicv/ippicv_2020_lnx_intel64_20191018_general.tgz" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/3rdparty/ippicv"
#check_md5 "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/ippicv/7421de0095c7a39162ae13a6098782f9-ippicv_2020_lnx_intel64_20191018_general.tgz"
#mismatch_md5 "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/ippicv/7421de0095c7a39162ae13a6098782f9-ippicv_2020_lnx_intel64_20191018_general.tgz" "d41d8cd98f00b204e9800998ecf8427e"
#delete "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/ippicv/7421de0095c7a39162ae13a6098782f9-ippicv_2020_lnx_intel64_20191018_general.tgz"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/ippicv/7421de0095c7a39162ae13a6098782f9-ippicv_2020_lnx_intel64_20191018_general.tgz" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/a56b6ac6f030c312b2dce17430eef13aed9af274/ippicv/ippicv_2020_lnx_intel64_20191018_general.tgz"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#match_hash_in_cmake_cache "OCV_DOWNLOAD_ADE_HASH_3rdparty_ade_v0_1_1f_zip"
#do_copy "boostdesc_bgm.i" "0ea90e7a8f3f7876d450e4149c97c74f" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/boostdesc_bgm.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/boostdesc/0ea90e7a8f3f7876d450e4149c97c74f-boostdesc_bgm.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "boostdesc_bgm_bi.i" "232c966b13651bd0e46a1497b0852191" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_bi.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/boostdesc_bgm_bi.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/boostdesc/232c966b13651bd0e46a1497b0852191-boostdesc_bgm_bi.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_bi.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "boostdesc_bgm_hd.i" "324426a24fa56ad9c5b8e3e0b3e5303e" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_hd.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/boostdesc_bgm_hd.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/boostdesc/324426a24fa56ad9c5b8e3e0b3e5303e-boostdesc_bgm_hd.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_hd.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "boostdesc_binboost_064.i" "202e1b3e9fec871b04da31f7f016679f" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_064.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/boostdesc_binboost_064.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/boostdesc/202e1b3e9fec871b04da31f7f016679f-boostdesc_binboost_064.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_064.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "boostdesc_binboost_128.i" "98ea99d399965c03d555cef3ea502a0b" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_128.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/boostdesc_binboost_128.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/boostdesc/98ea99d399965c03d555cef3ea502a0b-boostdesc_binboost_128.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_128.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "boostdesc_binboost_256.i" "e6dcfa9f647779eb1ce446a8d759b6ea" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_256.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/boostdesc_binboost_256.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/boostdesc/e6dcfa9f647779eb1ce446a8d759b6ea-boostdesc_binboost_256.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_256.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "boostdesc_lbgm.i" "0ae0675534aa318d9668f2a179c2a052" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_lbgm.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/boostdesc_lbgm.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/boostdesc/0ae0675534aa318d9668f2a179c2a052-boostdesc_lbgm.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_lbgm.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "vgg_generated_48.i" "e8d0dcd54d1bcfdc29203d011a797179" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_48.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/vgg_generated_48.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/vgg/e8d0dcd54d1bcfdc29203d011a797179-vgg_generated_48.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_48.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "vgg_generated_64.i" "7126a5d9a8884ebca5aea5d63d677225" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_64.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/vgg_generated_64.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/vgg/7126a5d9a8884ebca5aea5d63d677225-vgg_generated_64.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_64.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "vgg_generated_80.i" "7cd47228edec52b6d82f46511af325c5" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_80.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/vgg_generated_80.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/vgg/7cd47228edec52b6d82f46511af325c5-vgg_generated_80.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_80.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "vgg_generated_120.i" "151805e03568c9f490a5e3a872777b75" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_120.i" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/downloads/xfeatures2d/vgg_generated_120.i"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/xfeatures2d/vgg/151805e03568c9f490a5e3a872777b75-vgg_generated_120.i" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_120.i"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

#do_copy "face_landmark_model.dat" "7505c44ca4eb54b4ab1e4777cb96ac05" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/8afa57abc8229d611c4937165d20e2a2d9fc5a12/face_landmark_model.dat" "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/share/opencv4/testdata/cv/face/"
#missing "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/build/share/opencv4/testdata/cv/face//face_landmark_model.dat"
#cmake_download "/home/jumper/thirdparty/opencv-4.1.0/opencv-master/.cache/data/7505c44ca4eb54b4ab1e4777cb96ac05-face_landmark_model.dat" "https://raw.githubusercontent.com/opencv/opencv_3rdparty/8afa57abc8229d611c4937165d20e2a2d9fc5a12/face_landmark_model.dat"
#try 1
#   Trying 0.0.0.0:443...
# connect to 0.0.0.0 port 443 failed: Connection refused
#   Trying :::443...
# connect to :: port 443 failed: Connection refused
# Failed to connect to raw.githubusercontent.com port 443: Connection refused
# Closing connection 0
# 

From the log you can see that 13 downloads failed: 7 boostdesc_XXX.i files, 4 vgg_*.i files, the ippicv archive, and face_landmark_model.dat. The log also lists the URL of each of the 13 files, so I downloaded them one by one. Some were hard to download directly, so I copied them from elsewhere (ideally find mirror links, e.g. shared network-drive links; I eventually got the 7 boostdesc files from someone else's shared link).

After a long struggle I finally solved it. Here is a record of the pitfalls:

1. On Ubuntu 20.04 the .cache folder is hidden! For example, my OpenCV 4.1.0 sources live in two folders, opencv_master and opencv_contrib_master, and .cache is hidden inside opencv_master. You must locate it from a terminal. It contains at least three folders: data (for face_landmark_model.dat), ippicv (for the ippicv archive), and xfeatures2d, which in turn contains two subfolders, vgg and boostdesc. First, copy the 13 downloaded files into the corresponding subfolders of .cache from the command line.
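A quick way to confirm the hidden folder is there (demonstrated here on a dummy directory; in practice run `ls -a` inside your opencv_master folder):

```shell
mkdir -p /tmp/opencv-demo/.cache/ippicv
cd /tmp/opencv-demo
ls          # .cache is not listed
ls -a       # hidden entries such as .cache appear
```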

2. As the log shows, these 13 files must also be renamed exactly as the log expects,

After renaming, copy the 13 files again into the corresponding subfolders of .cache.
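The naming convention is `<md5-of-file>-<original-name>`. A self-contained sketch with a dummy file (the real files go under `.cache/xfeatures2d/boostdesc` etc. in your opencv source tree):

```shell
mkdir -p /tmp/cache-demo/xfeatures2d/boostdesc
cd /tmp/cache-demo
echo "dummy descriptor data" > boostdesc_lbgm.i     # stand-in for the real download
md5=$(md5sum boostdesc_lbgm.i | cut -d' ' -f1)      # hash of the file contents
mv boostdesc_lbgm.i "xfeatures2d/boostdesc/${md5}-boostdesc_lbgm.i"
ls xfeatures2d/boostdesc
```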

3. Run cmake again. This time only boostdesc and face_landmark_model.dat still fail; VGG and ippicv no longer report mismatch_md5 but something like match_hash_in_cmake_cache instead, which you can ignore: as long as there is no mismatch_md5, it is fine. Meanwhile a downloads folder has been generated under build, containing an xfeatures2d folder with the 4 vgg files automatically placed in it. That means VGG succeeded too.

4. How to fix the boostdesc and face_landmark_model.dat errors? I tried for a long time, including every method I found online, without success. After cmake configure, these 8 files always get replaced by zero-byte files of the same name. For example, 0ae0675534aa318d9668f2a179c2a052-boostdesc_lbgm.i (I call the leading string the MD5 hash) is clearly not zero bytes when copied in, but after cmake/configure it becomes zero bytes, causing the error.

In the end I had no choice but to follow the hint in the error log, which says the MD5 hashes do not match. For 0ae0675534aa318d9668f2a179c2a052-boostdesc_lbgm.i, for instance, I checked what the file's actual MD5 hash is:

This command shows that the file tagged 0ae0675534aa318d9668f2a179c2a052 (the wrong MD5) actually hashes to bf7e3c0acd53bf4cccfb9c02a0a46b69 (the correct MD5). So for each of the 7 boostdesc files and face_landmark_model.dat, compute the correct MD5 and rename the file accordingly; in my example the file should be renamed bf7e3c0acd53bf4cccfb9c02a0a46b69-boostdesc_lbgm.i. Then copy these 8 files into the corresponding .cache locations. At the same time, find the CMake configuration file for each of them. For the 7 boostdesc .i files it is somewhere like opencv_contrib_master/modules/xfeatures2d/.../download_boostdesc (the file name is close to that); open it (it looks like my screenshot above), comment out the 7 original MD5 hashes, and replace them with the correct ones. That fixes all 7 boostdesc problems.

The face_landmark_model.dat problem is handled the same way as boostdesc; only the path of its CMake configuration file differs (look for it yourself; I believe it is in the CMakeLists under modules/face). Replace the wrong MD5 with the correct one there.
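Editing the expected hash can be done with sed. This sketch uses a made-up one-line cmake file, since the exact file layout differs between OpenCV versions, and reuses the two hashes from my boostdesc_lbgm.i example:

```shell
mkdir -p /tmp/md5fix && cd /tmp/md5fix
# Hypothetical stand-in for the real downloader cmake file.
cat > download_boostdesc_demo.cmake <<'EOF'
set(BOOSTDESC_LBGM_MD5 0ae0675534aa318d9668f2a179c2a052)
EOF
# Swap the wrong hash for the one md5sum actually reports.
sed -i 's/0ae0675534aa318d9668f2a179c2a052/bf7e3c0acd53bf4cccfb9c02a0a46b69/' download_boostdesc_demo.cmake
cat download_boostdesc_demo.cmake
```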

Re-run cmake and everything passes without errors! You can also see that all the boostdesc files are now generated automatically under the downloads folder!

Then build in the opencv_master/build folder; it succeeded.

The build folder now contains include and lib. Create a new folder elsewhere, e.g. opencv4.1.0_thirdparty (it can then serve as a redistributable dynamic-library package), and copy include and lib into it. Also go through every modules folder under both opencv_master and opencv_contrib_master: wherever a module has files under include/opencv2/, copy them into opencv4.1.0_thirdparty/include/opencv2/. Only then can the whole opencv4.1.0_thirdparty folder be copied to another machine and used there.
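The packaging step amounts to copying directories; a sketch with dummy paths (replace them with your real build and contrib trees):

```shell
src=/tmp/pkg-demo/build                       # stand-in for opencv_master/build
dst=/tmp/pkg-demo/opencv4.1.0_thirdparty
mkdir -p "$src/include/opencv2" "$src/lib" "$dst"
touch "$src/include/opencv2/core.hpp" "$src/lib/libopencv_core.so"
cp -r "$src/include" "$src/lib" "$dst/"
# Module headers from opencv_contrib would be merged the same way, e.g.:
#   cp -r modules/<mod>/include/opencv2/* "$dst/include/opencv2/"
ls "$dst/include/opencv2" "$dst/lib"
```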

******************************************************************************************************

I also ran into a strange problem on Ubuntu 20.04:

After building, the generated file was not an executable but a "shared library", so the IDE kept complaining that it could not find a binary and refused to run or debug. Yet running this shared library from a terminal produced correct results. A colleague saw the same thing with Qt, even though our IDE configuration was copied verbatim from 16.04.

Then I tried configuring run/debug like this, and it worked in the IDE:

I did not find the cause at the time. (A likely explanation: GCC on Ubuntu 20.04 produces position-independent executables, PIE, by default, and some tools classify those as shared libraries.)

****************************************************************************************************************************

I also found the project reporting another problem under Ubuntu 20.04 + OpenCV 4.x:

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.4.0-dev) /home/jumper/thirdparty/opencv-4.1.0/opencv-master/modules/imgproc/src/convhull.cpp:359: error: (-5:Bad argument) The convex hull indices are not monotonous, which can be in the case when the input contour contains self-intersections in function 'convexityDefects'

If a contour is not a "simple" contour, it triggers the error above. The message says the hull indices are not monotonous, that there may be self-intersections, that the contour is not simple; but I printed the offending contour and its points:

Nothing looks abnormal, but when computing the defects at one particular concavity (the part I painted white in the image below), that error is thrown:

Printing those white points gives the list below. They merely look discontinuous; there is no actual self-intersection, so I still do not fully understand what exact condition must hold to avoid the error.

[2006, 598]
[2005, 599]
[2005, 600]
[2003, 602]
[2003, 603]
[2002, 604]
[2001, 604]
[2000, 605]
[1999, 605]
[1998, 606]
[1997, 606]
[1996, 605]
[1995, 606]
[1989, 606]
[1988, 605]
[1986, 605]
[1985, 604]
[1982, 604]
[1983, 605]
[1982, 606]
[1981, 606]
[1980, 607]
[1979, 607]
[1978, 608]
[1977, 608]
[1976, 609]
[1976, 611]
[1977, 610]
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.4.0-dev) /home/jumper/thirdparty/opencv-4.1.0/opencv-master/modules/imgproc/src/convhull.cpp:359: error: (-5:Bad argument) The convex hull indices are not monotonous, which can be in the case when the input contour contains self-intersections in function 'convexityDefects'

So I worked around it as follows: everywhere the code had called OpenCV's convexityDefects directly, I replaced the call with my own wrapper function. If it returns a negative value (i.e. it likely hit the situation above), the defect points for that contour are simply discarded, since they do not meet the conditions for defect detection.

//convexityDefects(contours[i],hullsI[i],defects[i]);
int convexflag=debugconvexdetect(contours[i],hullsI[i],defects[i]);
if(convexflag<0)
	continue;

I uploaded the C++ debugconvexdetect function, as well as the Python workaround, to https://download.csdn.net/download/wd1603926823/88913950

With this change everything works. It has been stable in testing, and it solved the same problem for others too.

**************************************************************************************************************************************

I found that the old code for drawing Chinese labels on images no longer worked under OpenCV 4.x, so I modified it slightly and it works now.

#ifndef OPENCV_CVX_TEXT_2007_08_31_H
#define OPENCV_CVX_TEXT_2007_08_31_H

#include "ft2build.h"
#include FT_FREETYPE_H

//#include "debug.h"
//#include <cv.h>
//#include <highgui.h>
#include <opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <locale.h>
#include <string>
/**
* \class cvxText
* \brief Draw Chinese text in OpenCV
*
* Renders Chinese characters onto OpenCV images. Glyphs are extracted with
* the open-source FreeType library. Since FreeType is released under the GPL,
* whose license differs from OpenCV's, it has not been merged into the OpenCV
* extension library.
*
* Rendering Chinese text requires a Chinese font file; operating systems
* usually ship with one. The one used here is the open-source font
* "WenQuanYi Zen Hei".
*
* For details on the "OpenCV extension library" see
* http://code.google.com/p/opencv-extension-library/
*
* For details on FreeType see
* http://www.freetype.org/
* */

class cvxText
{
	// non-copyable
   cvxText& operator=(const cvxText&);

public:
   /**
    * Load the font file
    */
   cvxText(const char *freeType);
   virtual ~cvxText();


   /**
    * Get font settings. Some parameters are not yet supported.
    *
    * \param type        font type, currently unsupported
    * \param size        font size / blank ratio / spacing ratio / rotation angle
    * \param underline   underline
    * \param diaphaneity transparency
    *
    * \sa setFont, restoreFont
    */
   void getFont(int *type,cv::Scalar *size=NULL, bool *underline=NULL, float *diaphaneity=NULL);

   /**
    * Set font settings. Some parameters are not yet supported.
    *
    * \param type        font type, currently unsupported
    * \param size        font size / blank ratio / spacing ratio / rotation angle
    * \param underline   underline
    * \param diaphaneity transparency
    *
    * \sa getFont, restoreFont
    */
   void setFont(int *type,cv::Scalar *size=NULL, bool *underline=NULL, float *diaphaneity=NULL);

   /**
    * Restore the original font settings.
    *
    * \sa getFont, setFont
    */
   void restoreFont();

   /**
   * Draw text with the default color. Stops at the first character it cannot render.
   *
   * \param img  target image
   * \param text text content
   * \param pos  text position
   *
   * \return number of characters drawn successfully, or -1 on failure.
   */
   int putText(cv::Mat &img, const char *text, cv::Point pos);


   /**
   * Draw text. Stops at the first character it cannot render.
   *
   * \param img   target image
   * \param text  text content
   * \param pos   text position
   * \param color text color
   *
   * \return number of characters drawn successfully, or -1 on failure.
   */
   int putText(cv::Mat &src, const char *text, cv::Point pos, cv::Scalar color);

 
   //================================================================
   //================================================================

private:

   // Draw the current character and advance the m_pos position
   void putWChar(cv::Mat &img, wchar_t wc, cv::Point &pos, cv::Scalar color);

   size_t get_wchar_size(const char *str);

   wchar_t *c2w(const char *pc);

private:

   FT_Library   m_library;   // font library
   FT_Face      m_face;      // font face

   //===============================================================
   // default font rendering parameters
   int         m_fontType;
   cv::Scalar   m_fontSize;
   bool      m_fontUnderline;
   float      m_fontDiaphaneity;
};

#endif // OPENCV_CVX_TEXT_2007_08_31_H
#include <wchar.h>
#include <assert.h>
#include <locale.h>
#include <ctype.h>

#include "cvxText.h"

//====================================================================
cvxText::cvxText(const char *freeType)
{
   assert(freeType != NULL);

   // open the font file and create a face
   if(FT_Init_FreeType(&m_library)) throw;
   if(FT_New_Face(m_library, freeType, 0, &m_face)) throw;

   // set the default font rendering parameters
   restoreFont();

   // set the C locale
   setlocale(LC_ALL, "");
}


// release FreeType resources
cvxText::~cvxText()
{
   FT_Done_Face    (m_face);
   FT_Done_FreeType(m_library);
}


size_t cvxText::get_wchar_size(const char *str)
{
   size_t len = strlen(str)+2;
   size_t size=0;
   int i;
   for(i=0; i < (int)len; i++)
   {
      if( str[size] >= 0 && str[size] <= 127 ) // ASCII (half-width) character
      size+=sizeof(wchar_t);
      else // full-width character, i.e. Chinese
      {
        size+=sizeof(wchar_t);
        i+=2;
      }
   }
  return size;
}



wchar_t *cvxText::c2w(const char *pc)
{
   if(!pc)
   return NULL;

   size_t size_of_ch = (strlen(pc)+2)*sizeof(char);
   size_t size_of_wc = get_wchar_size(pc);
   //std::cout<<size_of_ch<<" "<<size_of_wc<<std::endl;
   wchar_t *pw;
   if(!(pw = (wchar_t*)malloc(size_of_wc)))
   {
      printf("malloc fail");
      return NULL;
   }
   mbstowcs(pw,pc,size_of_wc);
   return pw;
}


// Font parameters:
//
// type         - font type, currently unsupported
// size         - font size / blank ratio / spacing ratio / rotation angle
// underline    - underline
// diaphaneity  - transparency
void cvxText::getFont(int *type, cv::Scalar *size, bool *underline, float *diaphaneity)
{
   if(type) *type = m_fontType;
   if(size) *size = m_fontSize;
   if(underline) *underline = m_fontUnderline;
   if(diaphaneity) *diaphaneity = m_fontDiaphaneity;
}


void cvxText::setFont(int *type, cv::Scalar *size, bool *underline, float *diaphaneity)
{
	// validate parameters
   if(type)
   {
      if(*type >= 0)
    	  m_fontType = *type;
   }
   if(size)
   {
      m_fontSize.val[0] = fabs(size->val[0]);
      m_fontSize.val[1] = fabs(size->val[1]);
      m_fontSize.val[2] = fabs(size->val[2]);
      m_fontSize.val[3] = fabs(size->val[3]);
   }
   if(underline)
   {
      m_fontUnderline   = *underline;
   }
   if(diaphaneity)
   {
      m_fontDiaphaneity = *diaphaneity;
   }
}

// restore the original font settings

void cvxText::restoreFont()
{
   m_fontType = 0;            // font type (unsupported)

   m_fontSize.val[0] = 45;//60;    // font size // 2017.10.19 upgrade
   m_fontSize.val[1] = 0.5;   // blank character size ratio
   m_fontSize.val[2] = 0.1;   // spacing ratio
   m_fontSize.val[3] = 0;     // rotation angle (unsupported)

   m_fontUnderline   = false; // underline (unsupported)

   m_fontDiaphaneity = 1.0;   // color blend ratio (enables transparency)

   // set the character size
   FT_Set_Pixel_Sizes(m_face, (int)m_fontSize.val[0], 0);
}

// draw with the default color (red here)
int cvxText::putText(cv::Mat &img, const char *text, cv::Point pos)
{
   return putText(img, text, pos, CV_RGB(255,0,0));
}


int cvxText::putText(cv::Mat &img, const char *text, cv::Point pos, cv::Scalar color)
{
   if(!img.data) return -1;
   if(text == NULL) return -1;

   int i;
   wchar_t *cw1 = c2w(text);

   for(i = 0; i<(int)wcslen(cw1); i++)
   {
	   putWChar(img, cw1[i], pos, color);
  }
   if(cw1!=NULL)
   {
	   free(cw1);
   }

   return i;
}



// draw the current character and advance pos
void cvxText::putWChar(cv::Mat &img,wchar_t wc, cv::Point &pos, cv::Scalar color)
{
	// render a binary glyph bitmap from the unicode code point
   FT_UInt glyph_index = FT_Get_Char_Index(m_face, wc);
   FT_Load_Glyph(m_face, glyph_index, FT_LOAD_DEFAULT);
   FT_Render_Glyph(m_face->glyph, FT_RENDER_MODE_MONO);

   FT_GlyphSlot slot = m_face->glyph;

   // bitmap rows and columns
   int rows = slot->bitmap.rows;
   int cols = slot->bitmap.width;

   for(int i = 0; i < rows; ++i)
   {
      for(int j = 0; j < cols; ++j)
      {
    	  int tmpvalue=i;
         int off  = tmpvalue* (slot->bitmap.pitch )+ j/8;

         if(slot->bitmap.buffer[off] & (0xC0 >> (j%8)))
         {
        	int r =pos.y -(rows-1-i);
            int c = pos.x + j;
         
            if(r >= 0 && r < img.rows && c >= 0 && c < img.cols)
            {
               cv::Vec3b pixel = img.at<cv::Vec3b>(cv::Point(c, r));
               cv::Scalar scalar = cv::Scalar(pixel.val[0], pixel.val[1], pixel.val[2]);
               // blend the colors
               float p = m_fontDiaphaneity;
               for (int k = 0; k < 4; ++k) {
                   scalar.val[k] = scalar.val[k]*(1-p) + color.val[k]*p;
               }

               img.at<cv::Vec3b>(cv::Point(c, r))[0] = (unsigned char)(scalar.val[0]);
               img.at<cv::Vec3b>(cv::Point(c, r))[1] = (unsigned char)(scalar.val[1]);
               img.at<cv::Vec3b>(cv::Point(c, r))[2] = (unsigned char)(scalar.val[2]);
            }
         }
      } // end for
   } // end for

   // advance the output position for the next character
   double space = m_fontSize.val[0]*m_fontSize.val[1];
   double sep   = m_fontSize.val[0]*m_fontSize.val[2];

   pos.x += (int)((cols? cols: space) + sep);
}
float m_diaphaneity=1.0f;
cvxText m_cvxText("simhei.ttf");

m_cvxText.setFont(NULL, NULL, NULL, &m_diaphaneity);

...
m_cvxText.putText(colorImage, algae[i].name.c_str(), cv::Point(c, r), MY_COLOR);
...	

It has been stable in testing and the text renders correctly:

/*******************************************************************************************************/

2020.9.21 

A colleague wanted to use the cv::text::OCRTesseract module following the official OpenCV sample. On Windows he could not get it working no matter what he tried. Since I had recently built OpenCV 4.4.0, I decided to try the module on Ubuntu. The sample fails at this line:

Ptr<OCRTesseract> ocr = OCRTesseract::create();

It reports the error:

OCRTesseract (33): OCRTesseract not found.

It turns out that although my earlier OpenCV 4.4.0 build included the text module, that module depends on additional third-party libraries, so it cannot be used out of the box; those dependencies must be compiled in. I therefore had to rebuild OpenCV 4.4.0 with Tesseract support. I must complain that many online tutorials are copy-pasted from each other, nearly all of them for Windows, and none explain the concrete Ubuntu + CMake + Tesseract + OpenCV steps; once again the flow of knowledge is frustratingly slow. My concrete steps were as follows:

1. Install all of the dependencies with the commands below.

2. Download a release of Leptonica from its official site; I used 1.76, so I did not install leptonica with the command below.

3. Download tesseract from GitHub - tesseract-ocr/tesseract: Tesseract Open Source OCR Engine (main repository). Make sure the tree contains a cmake folder and a CMakeLists file; the 3.04 version I downloaded earlier did not have these, which cost me dearly. Install Tesseract and Leptonica with the commands below.

Verify the installation in a terminal as shown; the include and lib directories of both libraries now exist under the paths used in the commands above.

4. Then I started the CMake build for OpenCV and found that the first configure pass showed nothing related to Tesseract, so I panicked and started trying things at random:

First, in each of the Tesseract and Leptonica source folders I ran:

mkdir build
cd build
cmake ../
make
sudo make install

Then, in the Tesseract source folder, I additionally ran:

$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
$ sudo ldconfig
$ make training
$ sudo make training-install

Then I copied all the .so files of both libraries from $HOME/usr/local to /usr/local/lib and again to /usr/lib, added both paths to ld.so.conf, and ran ldconfig.

5. I then re-ran cmake, and the first configure still showed no Tesseract (after much more fiddling I was going mad). In desperation I clicked configure one more time, and the Tesseract entries finally appeared. (It takes two configure passes for them to show up, and when they do, the paths are all filled in, not NOTFOUND! In my case everything had valid paths on the second configure, so I changed nothing.)

I was overjoyed. The rest of the cmake run completed without any errors. Notably, this OpenCV 4.4.0 build reported no OpenCV-related errors at all, thanks to all the work done during the first plain OpenCV 4.4.0 build at the beginning of this article.

6. Enter the buildwithTesseract folder and run OpenCV's make and make install.

7. Copy the freshly built OpenCV .so files to /usr/lib and to /usr/local/lib, then run ldconfig.

8. Here are my built tesseract and leptonica folders:

9. As with the earlier build of OpenCV 4.4.0 without Tesseract, copy the include and lib files into a folder such as opencv4.4.0_tesseract, so that the folder can later be handed to others without them having to build everything themselves.

10. Then configure the project:

11. Download some models into tessdata, and remember to add tessdata to your environment variables and restart; otherwise the test below will find no models.
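One way to make the models discoverable is the TESSDATA_PREFIX environment variable, which Tesseract consults when locating tessdata; the path below is a placeholder, and depending on the Tesseract version the variable may need to point at the tessdata folder itself or at its parent:

```shell
mkdir -p /tmp/tess-demo/tessdata                # put eng.traineddata etc. in here
export TESSDATA_PREFIX=/tmp/tess-demo/tessdata  # placeholder path
# Persist it for future shells by appending the export line to ~/.bashrc.
echo "TESSDATA_PREFIX=$TESSDATA_PREFIX"
```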

12. Project test code:

#include <opencv2/text.hpp>
#include <iostream>

#define HAVE_TESSERACT

#ifdef HAVE_TESSERACT
#include <tesseract/baseapi.h>
#include <tesseract/resultiterator.h>
#endif

using namespace cv::text;
using OCRTesseract =  cv::text::OCRTesseract;

int main()
{
	cv::Mat mat = cv::imread("/...(my path)/1.png");
	if ( mat.empty() )
		return 0;

	std::string output_text;
	const char *dataPath = "/...(my path)/tessdata";
	cv::Ptr<OCRTesseract> ptrOcr = OCRTesseract::create(dataPath);
	ptrOcr->run(mat, output_text);
	std::cout << output_text << std::endl;

	return 0;
}

Result: ignore the red warnings; they do not affect my use, and I will optimize later when I have time.

The test image is the one below:

The result is:

You can see output was produced; punctuation is not distinguished, but the quality is acceptable. The red warnings can be ignored for now:

Error in pixReadMemTiff: function not present
Error in pixReadMem: tiff: no pix returned
Error in pixaGenerateFontFromString: pix not made
Error in bmfCreate: font pixa not made

They do not affect use. The warnings appear because each run performs some checks, and any image library you have not installed triggers these warnings. My screenshot at the start shows I only had png and zlib; ideally all of the libraries below are installed, and then no warnings are printed. If, like me, you only use PNG images, you do not need to install all of them; png is enough.

libgif 5.1.4 : libjpeg 8d (libjpeg-turbo 1.5.2) : libpng 1.6.34 : libtiff 4.0.9 : zlib 1.2.11 : libwebp 0.6.1 : libopenjp2 2.3.0

 Found AVX2
 Found AVX
 Found FMA
 Found SSE

I also ran it in a terminal, with the same result:

That concludes the initial test of the OCRTesseract module in OpenCV.

/*************************************************************************************************Character recognition directly with TensorFlow****************************************************************************/

After training the TensorFlow model, prediction in Python looks like this:

And in C++:

/*
 * charactersPredict.h
 *
 *  Created on: 2020-10-29
 *      Author: wd
 */

#ifndef SRC_CHARACTERSPREDICT_H_
#define SRC_CHARACTERSPREDICT_H_

//opencv------------------------------------//
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/ml.hpp>
#include <opencv2/ml/ml.hpp>
//tensorflow_cc----------------------------------------//
#include "tensorflow/core/framework/graph.pb.h"
#include <tensorflow/core/public/session_options.h>
#include <tensorflow/core/protobuf/meta_graph.pb.h>
#include <fstream>
#include <utility>
#include <vector>
#include <Eigen/Core>
#include <Eigen/Dense>

#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/cc/ops/image_ops.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/graph/default_device.h"
#include "tensorflow/core/graph/graph_def_builder.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/lib/core/threadpool.h"
#include "tensorflow/core/lib/io/path.h"
#include "tensorflow/core/lib/strings/stringprintf.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/init_main.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/types.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/util/command_line_flags.h"

//----------------------------------------------//
using namespace std;
using namespace cv;

#define MODELGRAPHWG_PATH  "/media/root/Ubuntu43/tmpwg/model/ckp-80.meta"
#define MODELWG_PATH "/media/root/Ubuntu43/tmpwg/model/ckp-80"

const char charall[]= "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-'.!?,\"&";

class charactersPredict {
public:
	charactersPredict();

	// main entry point
	int charactersPredictJpwg(Mat &inputgrayimg,string &outputchar);

	virtual ~charactersPredict();


private:
	tensorflow::Session* session;
	int sizao_rows;
	int sizao_cols;



	int cnnInit();
	void getchar(tensorflow::Tensor &resultpredict,string &result);
	void imgrotate(Mat &src,Mat &dst);
	int predict(Mat &src,string &charresult);
};

#endif /* SRC_CHARACTERSPREDICT_H_ */
/*
 * charactersPredict.cpp
 *
 *  Created on: 2020-10-29
 *      Author: wd
 */

#include "charactersPredict.h"

charactersPredict::charactersPredict() {
	// TODO Auto-generated constructor stub
	int initflag=cnnInit();
	sizao_rows=100;
	sizao_cols=32;
}

int charactersPredict::cnnInit()
{
	/// CNN initialization -- Wang Dan 20191030
	tensorflow::Status status = NewSession(tensorflow::SessionOptions(), &session);
	if (!status.ok())
	{
		std::cout << "ERROR: NewSession() failed..." << std::endl;
		return -1;
	}
	tensorflow::MetaGraphDef graphdef;
	tensorflow::Status status_load = ReadBinaryProto(tensorflow::Env::Default(), MODELGRAPHWG_PATH, &graphdef); // read the graph definition from the .meta file
	if (!status_load.ok()) {
			std::cout << "ERROR: Loading model failed..." << std::endl;
			std::cout << status_load.ToString() << "\n";
			return -1;
	}
	tensorflow::Status status_create = session->Create(graphdef.graph_def()); // import the graph into the session
	if (!status_create.ok()) {
			std::cout << "ERROR: Creating graph in session failed..." << status_create.ToString() << std::endl;
			return -1;
	}
	// load the pretrained model weights
	tensorflow::Tensor checkpointPathTensor(tensorflow::DT_STRING, tensorflow::TensorShape());
	checkpointPathTensor.scalar<std::string>()() = MODELWG_PATH;
	status = session->Run(
			  {{ graphdef.saver_def().filename_tensor_name(), checkpointPathTensor },},
			  {},{graphdef.saver_def().restore_op_name()},nullptr);
	if (!status.ok())
	{
		  throw runtime_error("Error loading checkpoint...");
	}

	return 0;
}

void charactersPredict::getchar(tensorflow::Tensor &resultpredict,string &result)
{
	int datanum=resultpredict.shape().dim_size(1);
	auto thedata = resultpredict.tensor<long long int,2>();
	for(int ind=0;ind!=datanum;ind++)
	{
		long long int dataind=thedata(0,ind);
		char tmp=charall[dataind];
		result +=tmp;
	}
}

// Transpose src into dst. Note: this assumes dst was allocated as
// (src.cols x src.rows), i.e. a 100x32 buffer for a 32x100 input.
void charactersPredict::imgrotate(Mat &src,Mat &dst)
{
	for(int r=0;r!=src.rows;r++)
	{
		for(int c=0;c!=src.cols;c++)
		{
			dst.ptr<uchar>(c)[r]=src.ptr<uchar>(r)[c];
		}
	}
}

int charactersPredict::charactersPredictJpwg(Mat &inputgrayimg,string &outputchar)
{
	if(!(inputgrayimg.data))
	{
		std::cout << "ERROR: input img is empty..."  << std::endl;
		return -1;
	}
	if(inputgrayimg.channels()!=1)
	{
		std::cout << "ERROR: input img is invalid (not gray image)..."  << std::endl;
		return -2;
	}

	outputchar="";
	int flag=predict(inputgrayimg,outputchar);
	if(flag!=0)
	{
		std::cout << "ERROR: predict() is wrong..."  << std::endl;
		return -3;
	}

	return 0;
}

int charactersPredict::predict(Mat &src,string &charresult)
{
	//CNN start...20190710 wd
	tensorflow::Tensor resized_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1,sizao_rows,sizao_cols,1}));
	float *imgdata = resized_tensor.flat<float>().data();
	cv::Mat cnninputImg(sizao_rows, sizao_cols, CV_32FC1, imgdata);
	cv::Mat srccnn(sizao_rows, sizao_cols, CV_8UC1);
	imgrotate(src,srccnn);
	srccnn.convertTo(cnninputImg, CV_32FC1);
	// preprocess: scale pixel values to [0, 1]
	cnninputImg=cnninputImg/255;
	//CNN input
	vector<std::pair<string, tensorflow::Tensor> > inputs;
	std::string Input1Name = "input";
	inputs.push_back(std::make_pair(Input1Name, resized_tensor));

	std::string Input2Name="seq_len";
	tensorflow::Tensor seq(tensorflow::DT_INT32, tensorflow::TensorShape({1}));
	seq.vec<int>()(0)=24;
	inputs.push_back(std::make_pair(Input2Name, seq));

	//CNN predict
	vector<tensorflow::Tensor> decode;
	string output="dense_decoded";
	tensorflow::Status status_run = session->Run(inputs, {output}, {}, &decode);
	if (!status_run.ok()) {
	   std::cout << "ERROR: RUN failed in session->Run()..."  << std::endl;
	   std::cout << status_run.ToString() << "\n";
	   return -1;
	}


	tensorflow::Tensor resultpredict=decode[0];
	getchar(resultpredict,charresult);

	return 0;
}

charactersPredict::~charactersPredict() {
	tensorflow::Status freestatus = session->Close(); // release the session
	if (!freestatus.ok())
	{
		std::cout << "ERROR: release session  failed..." << std::endl;
	}
}

#include "charactersPredict.h"

int main()
{
	charactersPredict characterpredictobj;

	double alltime = 0.0;
	int processed = 0;
	char srcfile[100];
	for (int group_index = 0; group_index <= 1345; group_index++)
	{
		sprintf(srcfile, "/media/root/Ubuntu43/tmpwg/3/%d.jpg", group_index);
		Mat srcimg = imread(srcfile, 0);
		if (!srcimg.data)
		{
			continue;
		}

		cv::TickMeter timer;
		timer.start();

		string imgresult = "";
		characterpredictobj.charactersPredictJpwg(srcimg, imgresult);

		timer.stop();
		cout << "image is " << group_index << " result: " << imgresult << "  time is: " << timer.getTimeMilli() << " ms!" << endl;
		alltime += timer.getTimeMilli();
		processed++;
		timer.reset();
	}

	// Average only over images that were actually found and processed.
	double average = (processed > 0) ? (alltime / processed) : 0.0;
	cout << "time average: " << average << " ms." << endl;
	printf("Test finished!\n");

	return 0;
}

Result:

There is also code available online for cropping the small text images out of the full image:

#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/dnn.hpp>
#include <vector>
#include <string>
#include<iostream>

using namespace std;
using namespace cv;
using namespace cv::dnn;

//https://github.com/hpc203/ocr-opencv-dnn/blob/master/text_detect_recognition.cpp
struct sStatisticsChar
{
	Rect box;
	string chars;
	bool flag;
};


const char* keys =
    "{ help  h     | | Print help message. }"
    "{ input i     | | Path to input image or video file. Skip this argument to capture frames from a camera.}"
    "{ model m     | | Path to a binary .pb file contains trained detector network.}"
    "{ ocr         | | Path to a binary .pb or .onnx file contains trained recognition network.}"
    "{ width       | 320 | Preprocess input image by resizing to a specific width. It should be multiple by 32. }"
    "{ height      | 320 | Preprocess input image by resizing to a specific height. It should be multiple by 32. }"
    "{ thr         | 0.5 | Confidence threshold. }"
    "{ nms         | 0.4 | Non-maximum suppression threshold. }";


void decodeBoundingBoxes(const Mat& scores, const Mat& geometry, float scoreThresh,
                         std::vector<RotatedRect>& detections, std::vector<float>& confidences);

void fourPointsTransform(const Mat& frame, Point2f vertices[4], Mat& result);

void decodeText(const Mat& scores, std::string& text);
double calcLineDegree(const Point2f& firstPt, const Point2f& secondPt);
double getRcDegree(const RotatedRect box);

inline string num2str(int i)
{
	stringstream ss;
	ss << i;
	return ss.str();
}


int main(int argc, char** argv)
{
	string vPath = "/media/root/Ubuntu43/tmpwg/wholeimgs/";
	float confThreshold = 0.05;
	float nmsThreshold = 0.05;
	int inpWidth = 480;
	int inpHeight = 320;
	String modelDecoder = "/media/root/Ubuntu43/tmpwg/vs_segment_model/netmodel/frozen_east_text_detection.pb";
	String modelRecognition = "/media/root/Ubuntu43/tmpwg/vs_segment_model/netmodel/CRNN_VGG_BiLSTM_CTC.onnx";
    CV_Assert(!modelDecoder.empty());

    // Load networks.
    Net detector = readNet(modelDecoder);
    Net recognizer;

    if (!modelRecognition.empty())
        recognizer = readNet(modelRecognition);


    std::vector<Mat> outs;
    std::vector<String> outNames(2);
    outNames[0] = "feature_fusion/Conv_7/Sigmoid";
    outNames[1] = "feature_fusion/concat_3";


	string name;
	int ii = 0;
	
	for (int i = 226; i < 227; i++)  // test a single image (226.jpg) here
    {
		name = vPath + num2str(i) + ".jpg";
		Mat frame0, blob;
		frame0 = imread(name);
		if (frame0.empty())
		{
			continue;
		}

		Mat frame = Mat::zeros(frame0.size(), CV_8UC3);
		Rect rect = Rect(0, 108, 1920, 640);
		frame0(rect).copyTo(frame(rect));
		

		blobFromImage(frame, blob, 1.0, Size(inpWidth, inpHeight), Scalar(123.68, 116.78, 103.94), true, false);
		detector.setInput(blob);
		detector.forward(outs, outNames);


		Mat scores = outs[0];
		Mat geometry = outs[1];

		// Decode predicted bounding boxes.
		std::vector<RotatedRect> boxes;
		std::vector<float> confidences;
		decodeBoundingBoxes(scores, geometry, confThreshold, boxes, confidences);

		// Apply non-maximum suppression procedure.
		std::vector<int> indices;
		NMSBoxes(boxes, confidences, confThreshold, nmsThreshold, indices);

		Point2f ratio((float)frame.cols / inpWidth, (float)frame.rows / inpHeight);

		vector<sStatisticsChar> StatisticsChar;
		for (size_t i = 0; i < indices.size(); ++i)
		{
			RotatedRect& box = boxes[indices[i]];

			float rate = (float)box.size.width / box.size.height;

			RotatedRect box1 = box;
			box1.size.width = box.size.width*1.1;

			Point2f vertices[4];
			box1.points(vertices);

			for (int j = 0; j < 4; ++j)
			{
				vertices[j].x *= ratio.x;
				vertices[j].y *= ratio.y;
			}


			Mat cropped;
			fourPointsTransform(frame, vertices, cropped);
			string pathchar = "/media/root/Ubuntu43/tmpwg/segmentresults/" + num2str(ii) + ".jpg";
			ii++;
			cvtColor(cropped, cropped, cv::COLOR_BGR2GRAY);
			imwrite(pathchar, cropped); // save the cropped patch

			// I did not use the code below; instead the saved crops are fed
			// directly to the TensorFlow C++ predictor above.
			/*
			Mat blobCrop = blobFromImage(cropped, 1.0 / 127.5, Size(), Scalar::all(127.5));
			recognizer.setInput(blobCrop);

			Mat result = recognizer.forward();

			sStatisticsChar sc;
			std::string wordRecognized = "";
			decodeText(result, wordRecognized);
			sc.box = box.boundingRect();
			sc.chars = wordRecognized;
			sc.flag = false;
			StatisticsChar.push_back(sc);
			*/
		}
    }
    return 0;
}

The overall result is as follows:

As you can see, six small images are cropped out of the large image on the right, and the recognition results for all six are printed!

/******************************************************************************************************/

Happy: the little pudding warming itself by the fire.
