Overview and Resources of Scene Text Detection

Scene Text Detection Resources

Reposted from the SCUT-DLVC lab at South China University of Technology; original link: https://github.com/HCIILAB/Scene-Text-Detection

Author: Chongyu Liu


1. Datasets

1.1 Horizontal-Text Datasets

  • ICDAR 2003(IC03):

    • Introduction: It contains 509 images in total, 258 for training and 251 for testing. Specifically, it contains 1110 text instances in the training set and 1156 in the testing set. It has word-level annotations. IC03 only considers English text.
    • Link: IC03-download
  • ICDAR 2011(IC11):

    • Introduction: IC11 is an English dataset for text detection. It contains 484 images, 229 for training and 255 for testing. There are 1564 text instances in this dataset. It provides both word-level and character-level annotations.
    • Link: IC11-download
  • ICDAR 2013(IC13):

    • Introduction: IC13 is almost the same as IC11. It contains 462 images in total, 229 for training and 233 for testing. Specifically, it contains 849 text instances in the training set and 1095 in the testing set.
    • Link: IC13-download

1.2 Arbitrary-Quadrilateral-Text Datasets

  • USTB-SV1K:

    • Introduction: USTB-SV1K is an English dataset. It contains 1000 street images from Google Street View with 2955 text instances in total. It only provides word-level annotations.
    • Link: USTB-SV1K-download
  • SVT:

    • Introduction: It contains 350 images with 725 English text instances in total. SVT has both character-level and word-level annotations. The images were harvested from Google Street View and have low resolution.
    • Link: SVT-download
  • SVT-P:

    • Introduction: It contains 639 cropped word images for testing. Images were selected from side-view snapshots in Google Street View, so most are heavily distorted by the non-frontal view angle. It is an improved version of SVT.
    • Link: SVT-P-download (Password : vnis)
  • ICDAR 2015(IC15):

    • Introduction: It contains 1500 images in total, 1000 for training and 500 for testing. Specifically, it contains 17548 text instances. It provides word-level annotations. IC15 is the first incidental scene text dataset and only considers English words.
    • Link: IC15-download
  • COCO-Text:

    • Introduction: It contains 63686 images in total: 43686 for training, 10000 for validation and 10000 for testing. Specifically, it contains 145859 cropped word images for testing, including handwritten and printed, clear and blurred, English and non-English text.
    • Link: COCO-Text-download
  • MSRA-TD500:

    • Introduction: It contains 500 images in total. It provides text-line-level annotations rather than word-level ones, and polygon boxes rather than axis-aligned rectangles for text regions. It contains both English and Chinese text instances.
    • Link: MSRA-TD500-download
  • MLT 2017:

    • Introduction: It contains 10000 natural images in total with word-level annotations. MLT covers 9 languages. It is a more realistic and complex dataset for scene text detection and recognition.
    • Link: MLT-download
  • MLT 2019:

    • Introduction: It contains 18000 images in total with word-level annotations. Compared to MLT 2017, this dataset covers 10 languages. It is a more realistic and complex dataset for scene text detection and recognition.
    • Link: MLT-2019-download
  • CTW:

    • Introduction: It contains 32285 high-resolution street-view images of Chinese text, with 1018402 character instances in total. All images are annotated at the character level, including the underlying character class, the bounding box, and 6 other attributes. These attributes indicate whether the background is complex and whether the character is raised, hand-written or printed, occluded, distorted, or rendered as word art.
    • Link: CTW-download
  • RCTW-17:

    • Introduction: It contains 12514 images in total, 11514 for training and 1000 for testing. Images in RCTW-17 were mostly collected by camera or mobile phone, and the rest are generated images. Text instances are annotated with parallelograms. It is the first large-scale Chinese dataset, and was also the largest published one at the time.
    • Link: RCTW-17-download
  • ReCTS:

    • Introduction: ReCTS is a large-scale Chinese street-view trademark dataset. It is annotated at the Chinese word and text-line level, using arbitrary quadrilaterals. It contains 20000 images in total.
    • Link: ReCTS-download
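Several of the quadrilateral datasets above follow the ICDAR 2015 ground-truth convention: one region per line, eight clockwise vertex coordinates followed by the transcription, with `###` marking unreadable ("don't care") regions. A minimal parsing sketch (the `parse_ic15_gt` helper is illustrative, not part of any official toolkit; the sample lines below merely mimic the published format):

```python
def parse_ic15_gt(lines):
    """Parse ICDAR2015-style ground truth: one region per line, formatted as
    'x1,y1,x2,y2,x3,y3,x4,y4,transcription' (clockwise vertices), where a
    transcription of '###' marks an unreadable, "don't care" region."""
    regions = []
    for line in lines:
        line = line.strip().lstrip("\ufeff")  # the gt files often start with a BOM
        if not line:
            continue
        parts = line.split(",", 8)  # the transcription itself may contain commas
        if len(parts) < 9:
            continue
        quad = [int(v) for v in parts[:8]]
        text = parts[8]
        regions.append({"quad": quad, "text": text, "ignore": text == "###"})
    return regions


sample = [
    "377,117,463,117,465,130,378,130,Genaxis Theatre",
    "374,155,409,155,409,170,374,170,###",
]
regions = parse_ic15_gt(sample)
```

In practice the same loop would be fed `open("gt_img_1.txt", encoding="utf-8")` for each image; the `ignore` flag is what evaluation scripts use to exclude don't-care regions from precision and recall.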

1.3 Irregular-Text Datasets

  • CUTE80:

    • Introduction: It contains 80 high-resolution images taken in natural scenes. Specifically, it contains 288 cropped word images for testing. The dataset focuses on curved text. No lexicon is provided.
    • Link: CUTE80-download
  • Total-Text:

    • Introduction: It contains 1555 images in total. Specifically, it contains 11459 cropped word images covering three different text orientations: horizontal, multi-oriented and curved.
    • Link: Total-Text-download
  • SCUT-CTW1500:

    • Introduction: It contains 1500 images in total, 1000 for training and 500 for testing. Specifically, it contains 10751 cropped word images for testing. Annotations in CTW-1500 are polygons with 14 vertices. The dataset mainly consists of Chinese and English text.
    • Link: CTW-1500-download
  • LSVT:

    • Introduction: LSVT consists of 20000 testing images, 30000 training images with full annotations and 400000 training images with weak annotations, which are referred to as partial labels. The labeled text regions demonstrate the diversity of text: horizontal, multi-oriented and curved.
    • Link: LSVT-download
  • ArT:

    • Introduction: ArT consists of 10166 images, 5603 for training and 4563 for testing. It was collected with text-shape diversity in mind, and all text shapes are well represented in ArT.
    • Link: ArT-download
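The irregular-text datasets above annotate regions as closed polygons (14 vertices in SCUT-CTW1500, variable counts in Total-Text and ArT), so their evaluation protocols need polygon rather than rectangle geometry. Polygon area, the basic ingredient of polygon intersection-over-union, follows from the shoelace formula; a small self-contained sketch (the helper name is illustrative):

```python
def polygon_area(pts):
    """Area of a simple polygon given as ordered (x, y) vertices,
    computed with the shoelace formula."""
    n = len(pts)
    acc = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]  # wrap around to close the polygon
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0
```

For example, `polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)])` returns `1.0`; libraries such as Shapely wrap the same computation together with polygon intersection.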

1.4 Synthetic Datasets

  • Synth80k :

    • Introduction: It contains 800 thousand images with approximately 8 million synthetic word instances. Each text instance is annotated with its text string and with word-level and character-level bounding boxes.
    • Link: Synth80k-download
  • SynthText :

    • Introduction: It contains 6 million cropped word images. The generation process is similar to that of Synth90k. It is also annotated in horizontal-style.
    • Link: SynthText-download

1.5 Comparison of Datasets

| Dataset | Language | Images (Total / Train / Test) | Text instances (Total / Train / Test) |
| --- | --- | --- | --- |
| IC03 | English | 509 / 258 / 251 | 2266 / 1110 / 1156 |
| IC11 | English | 484 / 229 / 255 | 1564 / ~ / ~ |
| IC13 | English | 462 / 229 / 233 | 1944 / 849 / 1095 |
| USTB-SV1K | English | 1000 / 500 / 500 | 2955 / ~ / ~ |
| SVT | English | 350 / 100 / 250 | 725 / 211 / 514 |
| SVT-P | English | 238 / ~ / ~ | 639 / ~ / ~ |
| IC15 | English | 1500 / 1000 / 500 | 17548 / 12318 / 5230 |
| COCO-Text | English | 63686 / 43686 / 20000 | 145859 / 118309 / 27550 |
| MSRA-TD500 | English/Chinese | 500 / 300 / 200 | ~ |
| MLT 2017 | Multi-lingual | 18000 / 7200 / 10800 | ~ |
| MLT 2019 | Multi-lingual | 20000 / 10000 / 10000 | ~ |
| CTW | Chinese | 32285 / 25887 / 6398 | 1018402 / 812872 / 205530 |
| RCTW-17 | English/Chinese | 12514 / 11514 / 1000 | ~ |
| ReCTS | Chinese | 20000 / ~ / ~ | ~ |
| CUTE80 | English | 80 / ~ / ~ | ~ |
| Total-Text | English | 1525 / 1225 / 300 | 9330 / ~ / ~ |
| CTW-1500 | English/Chinese | 1500 / 1000 / 500 | 10751 / ~ / ~ |
| LSVT | English/Chinese | 450000 / 430000 / 20000 | ~ |
| ArT | English/Chinese | 10166 / 5603 / 4563 | ~ |
| Synth80k | English | 80k | 8m |
| SynthText | English | 800k | 6m |

2. Summary of Scene Text Detection Resources

2.1 Comparison of Methods

Scene text detection methods can be divided into four categories:

(a) Traditional methods;

(b) Segmentation-based methods;

(c) Regression-based methods;

(d) Hybrid methods.

It is important to note that: (1) "Hori" stands for horizontal scene text datasets. (2) "Quad" stands for arbitrary-quadrilateral-text datasets. (3) "Irreg" stands for irregular scene text datasets. (4) "Traditional method" stands for methods that do not rely on deep learning.

2.1.1 Traditional Methods
| Method | Model | Source & Time | Highlight |
| --- | --- | --- | --- |
| Yao et al. [1] | TD-Mixture | CVPR 2012 | 1) A new dataset, MSRA-TD500, and an evaluation protocol. 2) A two-level classification scheme and two sets of feature extractors. |
| Yin et al. [2] | ~ | TPAMI 2013 | Extracts Maximally Stable Extremal Regions (MSERs) as character candidates and groups them together. |
| Le et al. [5] | HOCC | CVPR 2014 | HOCC + MSERs. |
| Yin et al. [7] | ~ | TPAMI 2015 | Presents a unified distance metric learning framework for adaptive hierarchical clustering. |
| Wu et al. [9] | ~ | TMM 2015 | Explores gradient directional symmetry at the component level to smooth edge components before text detection. |
| Tian et al. [17] | ~ | IJCAI 2016 | Scene text is first detected locally in individual frames and then linked by an optimal tracking trajectory. |
| Yang et al. [33] | ~ | TIP 2017 | A text detector locates character candidates and extracts text regions, which are then linked by an optimal tracking trajectory. |
| Liang et al. [8] | ~ | TIP 2015 | Explores maximally stable extremal regions together with the stroke width transform to detect candidate text regions. |
| Michal et al. [12] | FASText | ICCV 2015 | Stroke keypoints are efficiently detected and then exploited to obtain stroke segmentations. |

2.1.2 Segmentation-based Methods
| Method | Model | Source & Time | Highlight |
| --- | --- | --- | --- |
| Li et al. [3] | ~ | TIP 2014 | (1) Develops three novel cues tailored for character detection and a Bayesian method for their integration; (2) designs a Markov random field model to exploit the inherent dependencies between characters. |
| Zhang et al. [14] | ~ | CVPR 2016 | Utilizes an FCN for salient-map detection and prediction of each character's centroid. |
| Zhu et al. [16] | ~ | CVPR 2016 | Performs a graph-based segmentation of connected components into words (Word-Graph). |
| He et al. [18] | Text-CNN | TIP 2016 | Develops a new learning mechanism to train the Text-CNN with multi-level and rich supervised information. |
| Yao et al. [21] | ~ | arXiv 2016 | Proposes to localize text in a holistic manner, casting scene text detection as a semantic segmentation problem. |
| Hu et al. [27] | WordSup | ICCV 2017 | Proposes a weakly supervised framework that can utilize word annotations; the detected characters are fed to a text structure analysis module. |
| Wu et al. [28] | ~ | ICCV 2017 | Introduces the border class to the text detection problem for the first time, and validates that the decoding process is largely simplified with the help of text borders. |
| Tang et al. [32] | ~ | TIP 2017 | A text-aware candidate text region (CTR) extraction model + a CTR refinement model. |
| Dai et al. [35] | FTSN | arXiv 2017 | Detects and segments text instances jointly and simultaneously, leveraging merits from both the semantic segmentation task and the region-proposal-based object detection task. |
| Wang et al. [38] | ~ | ICDAR 2017 | Proposes a novel character candidate extraction method based on super-pixel segmentation and hierarchical clustering. |
| Deng et al. [40] | PixelLink | AAAI 2018 | Text instances are first segmented out by linking pixels within the same instance together. |
| Liu et al. [42] | MCN | CVPR 2018 | Stochastic Flow Graph (SFG) + Markov Clustering. |
| Lyu et al. [43] | ~ | CVPR 2018 | Detects scene text by localizing corner points of text bounding boxes and segmenting text regions in relative positions. |
| Chu et al. [45] | Border | ECCV 2018 | Presents a novel scene text detection technique that makes use of semantics-aware text borders and bootstrapping-based text segment augmentation. |
| Long et al. [46] | TextSnake | ECCV 2018 | Proposes TextSnake, which effectively represents text instances in horizontal, oriented and curved forms based on a symmetry axis. |
| Yang et al. [47] | IncepText | IJCAI 2018 | Designs a novel Inception-Text module and introduces deformable PSROI pooling to deal with multi-oriented text detection. |
| Yue et al. [48] | ~ | BMVC 2018 | Proposes a general framework for text detection, called Guided CNN, to achieve the two goals simultaneously. |
| Zhong et al. [53] | AF-RPN | arXiv 2018 | Presents AF-RPN, an anchor-free and scale-friendly region proposal network for the Faster R-CNN framework. |
| Wang et al. [54] | PSENet | CVPR 2019 | Proposes a novel Progressive Scale Expansion Network (PSENet), designed as a segmentation-based detector with multiple predictions for each text instance. |
| Xu et al. [57] | TextField | arXiv 2018 | Presents a novel direction field that can represent scene text of arbitrary shapes. |
| Tian et al. [58] | FTDN | ICIP 2018 | FTDN is able to segment text regions and simultaneously regress text boxes at the pixel level. |
| Tian et al. [83] | ~ | CVPR 2019 | Constrains embedding features of pixels inside the same text region to share similar properties. |
| Huang et al. [4] | MSERs-CNN | ECCV 2014 | Combines MSERs with a CNN. |
| Sun et al. [6] | ~ | PR 2015 | Presents a robust text detection approach based on color-enhanced CERs and neural networks. |
| Baek et al. [62] | CRAFT | CVPR 2019 | Proposes CRAFT, which effectively detects text areas by exploring each character and the affinity between characters. |

2.1.3 Regression-based Methods
| Method | Model | Source & Time | Highlight |
| --- | --- | --- | --- |
| Gupta et al. [15] | FCRN | CVPR 2016 | (a) Proposes a fast and scalable engine to generate synthetic images of text in clutter; (b) FCRN. |
| Zhong et al. [20] | DeepText | arXiv 2016 | (a) Inception-RPN; (b) utilizes ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP). |
| Liao et al. [22] | TextBoxes | AAAI 2017 | Mainly based on the SSD object detection framework. |
| Liu et al. [25] | DMPNet | CVPR 2017 | Quadrilateral sliding windows + a shared Monte Carlo method for fast and accurate computation of polygonal areas + a sequential protocol for relative regression. |
| He et al. [26] | DDR | ICCV 2017 | Proposes an FCN with bi-task outputs: pixel-wise classification between text and non-text, and direct regression of the vertex coordinates of quadrilateral text boundaries. |
| Jiang et al. [36] | R2CNN | arXiv 2017 | Uses the Region Proposal Network (RPN) to generate axis-aligned bounding boxes that enclose texts of different orientations. |
| Xing et al. [37] | ArbiText | arXiv 2017 | Adopts circle anchors and incorporates a pyramid pooling module into the Single Shot MultiBox Detector framework. |
| Zhang et al. [39] | FEN | AAAI 2018 | Proposes a refined scene text detector with a novel Feature Enhancement Network (FEN) for region proposal and text detection refinement. |
| Wang et al. [41] | ITN | CVPR 2018 | ITN learns a geometry-aware representation encoding the unique geometric configurations of scene text instances with in-network transformation embedding. |
| Liao et al. [44] | RRD | CVPR 2018 | The regression branch extracts rotation-sensitive features, while the classification branch extracts rotation-invariant features by pooling the rotation-sensitive features. |
| Liao et al. [49] | TextBoxes++ | TIP 2018 | Mainly based on the SSD object detection framework; replaces the rectangular box representation of conventional object detectors with a quadrilateral or oriented-rectangle representation. |
| He et al. [50] | ~ | TIP 2018 | Proposes a scene text detection framework based on a fully convolutional network with a bi-task prediction module. |
| Ma et al. [51] | RRPN | TMM 2018 | RRPN + RRoI pooling. |
| Zhu et al. [55] | SLPR | arXiv 2018 | SLPR regresses multiple points on the edge of a text line and then uses these points to sketch the outline of the text. |
| Deng et al. [56] | ~ | arXiv 2018 | CRPN employs corners to estimate the possible locations of text instances, and designs an embedded data augmentation module inside the region-wise subnetwork. |
| Cai et al. [59] | FFN | ICIP 2018 | Proposes a Feature Fusion Network to deal with text regions of enormously different sizes. |
| Sabyasachi et al. [60] | RGC | ICIP 2018 | Proposes a novel recurrent architecture to improve the learning of a feature map at a given time step. |
| Liu et al. [63] | CTD | PR 2019 | CTD + TLOC + PNMS. |
| Xie et al. [79] | DeRPN | AAAI 2019 | DeRPN utilizes an anchor-string mechanism instead of anchor boxes in the RPN. |
| Wang et al. [82] | ~ | CVPR 2019 | Text-RPN + RNN. |
| Liu et al. [84] | ~ | CVPR 2019 | CSE mechanism. |
| He et al. [29] | SSTD | ICCV 2017 | Proposes an attention mechanism, and develops a hierarchical inception module that efficiently aggregates multi-scale inception features. |
| Tian et al. [11] | ~ | ICCV 2015 | Cascade boosting detects character candidates, and a min-cost flow network model obtains the final result. |
| Tian et al. [13] | CTPN | ECCV 2016 | 1) RPN + LSTM. 2) The RPN incorporates a new vertical anchor mechanism, and the LSTM connects the regions to obtain the final result. |
| He et al. [19] | ~ | ACCV 2016 | An ER detector produces a coarse prediction of text regions; local context is then aggregated to classify the remaining regions and obtain the final prediction. |
| Shi et al. [23] | SegLink | CVPR 2017 | Decomposes text into segments and links; a link connects two adjacent segments. |
| Tian et al. [30] | WeText | ICCV 2017 | Proposes a weakly supervised scene text detection method (WeText). |
| Zhu et al. [31] | RTN | ICDAR 2017 | Mainly based on the CTPN vertical proposal mechanism. |
| Ren et al. [34] | ~ | TMM 2017 | Proposes a CNN-based detector containing a text structure component detector layer, a spatial pyramid layer, and a multi-input-layer deep belief network (DBN). |
| Zhang et al. [10] | ~ | CVPR 2015 | Exploits the symmetry property of character groups and allows direct extraction of text lines from natural images. |

2.1.4 Hybrid Methods
| Method | Model | Source & Time | Highlight |
| --- | --- | --- | --- |
| Tang et al. [52] | SSFT | TMM 2018 | Proposes a novel scene text detection method that combines a superpixel-based stroke feature transform (SSFT) with deep-learning-based region classification (DLRC). |
| Xie et al. [61] | SPCNet | AAAI 2019 | Text Context module + Re-Score mechanism. |
| Liu et al. [64] | PMTD | arXiv 2019 | Performs "soft" semantic segmentation: assigns a soft pyramid label (a real value between 0 and 1) to each pixel within a text instance. |
| Liu et al. [80] | BDN | IJCAI 2019 | Discretizes bounding boxes into key edges to address label confusion in text detection. |
| Zhang et al. [81] | LOMO | CVPR 2019 | DR + IRM + SEM. |
| Zhou et al. [24] | EAST | CVPR 2017 | The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, with instance segmentation. |
| Yue et al. [48] | ~ | BMVC 2018 | Proposes a general framework for text detection, called Guided CNN, to achieve the two goals simultaneously. |
| Zhong et al. [53] | AF-RPN | arXiv 2018 | Presents AF-RPN, an anchor-free and scale-friendly region proposal network for the Faster R-CNN framework. |

2.2 Detection Results

2.2.1 Detection Results on Horizontal-Text Datasets
| Method | Model | Source & Time | Category | IC11 [68] (P / R / F) | IC13 [69] (P / R / F) | IC05 [67] (P / R / F) |
| --- | --- | --- | --- | --- | --- | --- |
| Yao et al. [1] | TD-Mixture | CVPR 2012 | Traditional | ~ | 0.69 / 0.66 / 0.67 | ~ |
| Yin et al. [2] | ~ | TPAMI 2013 | Traditional | 0.86 / 0.68 / 0.76 | ~ | ~ |
| Yin et al. [7] | ~ | TPAMI 2015 | Traditional | 0.838 / 0.66 / 0.738 | ~ | ~ |
| Wu et al. [9] | ~ | TMM 2015 | Traditional | ~ | 0.76 / 0.70 / 0.73 | ~ |
| Liang et al. [8] | ~ | TIP 2015 | Traditional | 0.77 / 0.68 / 0.71 | 0.76 / 0.68 / 0.72 | ~ |
| Michal et al. [12] | FASText | ICCV 2015 | Traditional | ~ | 0.84 / 0.69 / 0.77 | ~ |
| Li et al. [3] | ~ | TIP 2014 | Segmentation | 0.80 / 0.62 / 0.70 | ~ | ~ |
| Zhang et al. [14] | ~ | CVPR 2016 | Segmentation | ~ | 0.88 / 0.78 / 0.83 | ~ |
| He et al. [18] | Text-CNN | TIP 2016 | Segmentation | 0.91 / 0.74 / 0.82 | 0.93 / 0.73 / 0.82 | 0.87 / 0.73 / 0.79 |
| Yao et al. [21] | ~ | arXiv 2016 | Segmentation | ~ | 0.889 / 0.802 / 0.843 | ~ |
| Hu et al. [27] | WordSup | ICCV 2017 | Segmentation | ~ | 0.933 / 0.875 / 0.903 | ~ |
| Tang et al. [32] | ~ | TIP 2017 | Segmentation | 0.90 / 0.86 / 0.88 | 0.92 / 0.87 / 0.89 | ~ |
| Wang et al. [38] | ~ | ICDAR 2017 | Segmentation | 0.87 / 0.78 / 0.82 | 0.87 / 0.82 / 0.84 | ~ |
| Deng et al. [40] | PixelLink | AAAI 2018 | Segmentation | ~ | 0.886 / 0.875 / 0.881 | ~ |
| Liu et al. [42] | MCN | CVPR 2018 | Segmentation | ~ | 0.88 / 0.87 / 0.88 | ~ |
| Lyu et al. [43] | ~ | CVPR 2018 | Segmentation | ~ | 0.92 / 0.844 / 0.880 | ~ |
| Chu et al. [45] | Border | ECCV 2018 | Segmentation | ~ | 0.915 / 0.871 / 0.892 | ~ |
| Wang et al. [54] | PSENet | CVPR 2019 | Segmentation | ~ | 0.94 / 0.90 / 0.92 | ~ |
| Huang et al. [4] | MSERs-CNN | ECCV 2014 | Segmentation | 0.88 / 0.71 / 0.78 | ~ | 0.84 / 0.67 / 0.75 |
| Sun et al. [6] | ~ | PR 2015 | Segmentation | 0.92 / 0.91 / 0.91 | 0.94 / 0.92 / 0.93 | ~ |
| Gupta et al. [15] | FCRN | CVPR 2016 | Regression | 0.94 / 0.77 / 0.85 | 0.938 / 0.764 / 0.842 | ~ |
| Zhong et al. [20] | DeepText | arXiv 2016 | Regression | 0.87 / 0.83 / 0.85 | 0.85 / 0.81 / 0.83 | ~ |
| Liao et al. [22] | TextBoxes | AAAI 2017 | Regression | 0.89 / 0.82 / 0.86 | 0.89 / 0.83 / 0.86 | ~ |
| Liu et al. [25] | DMPNet | CVPR 2017 | Regression | ~ | 0.93 / 0.83 / 0.870 | ~ |
| Jiang et al. [36] | R2CNN | arXiv 2017 | Regression | ~ | 0.92 / 0.81 / 0.86 | ~ |
| Xing et al. [37] | ArbiText | arXiv 2017 | Regression | ~ | 0.826 / 0.936 / 0.877 | ~ |
| Wang et al. [41] | ITN | CVPR 2018 | Regression | 0.896 / 0.889 / 0.892 | 0.941 / 0.893 / 0.916 | ~ |
| Liao et al. [49] | TextBoxes++ | TIP 2018 | Regression | ~ | 0.92 / 0.86 / 0.89 | ~ |
| He et al. [50] | ~ | TIP 2018 | Regression | ~ | 0.91 / 0.84 / 0.88 | ~ |
| Ma et al. [51] | RRPN | TMM 2018 | Regression | ~ | 0.95 / 0.89 / 0.91 | ~ |
| Zhu et al. [55] | SLPR | arXiv 2018 | Regression | ~ | 0.90 / 0.72 / 0.80 | ~ |
| Cai et al. [59] | FFN | ICIP 2018 | Regression | ~ | 0.92 / 0.84 / 0.876 | ~ |
| Sabyasachi et al. [60] | RGC | ICIP 2018 | Regression | ~ | 0.89 / 0.77 / 0.83 | ~ |
| Wang et al. [82] | ~ | CVPR 2019 | Regression | ~ | 0.937 / 0.878 / 0.907 | ~ |
| Liu et al. [84] | ~ | CVPR 2019 | Regression | ~ | 0.937 / 0.897 / 0.917 | ~ |
| He et al. [29] | SSTD | ICCV 2017 | Regression | ~ | 0.89 / 0.86 / 0.88 | ~ |
| Tian et al. [11] | ~ | ICCV 2015 | Regression | 0.86 / 0.76 / 0.81 | 0.852 / 0.759 / 0.802 | ~ |
| Tian et al. [13] | CTPN | ECCV 2016 | Regression | ~ | 0.93 / 0.83 / 0.88 | ~ |
| He et al. [19] | ~ | ACCV 2016 | Regression | ~ | 0.90 / 0.75 / 0.81 | ~ |
| Shi et al. [23] | SegLink | CVPR 2017 | Regression | ~ | 0.877 / 0.83 / 0.853 | ~ |
| Tian et al. [30] | WeText | ICCV 2017 | Regression | ~ | 0.911 / 0.831 / 0.869 | ~ |
| Zhu et al. [31] | RTN | ICDAR 2017 | Regression | ~ | 0.94 / 0.89 / 0.91 | ~ |
| Ren et al. [34] | ~ | TMM 2017 | Regression | 0.78 / 0.67 / 0.72 | 0.81 / 0.67 / 0.73 | ~ |
| Zhang et al. [10] | ~ | CVPR 2015 | Regression | 0.84 / 0.76 / 0.80 | 0.88 / 0.74 / 0.80 | ~ |
| Tang et al. [52] | SSFT | TMM 2018 | Hybrid | 0.906 / 0.847 / 0.876 | 0.911 / 0.861 / 0.885 | ~ |
| Xie et al. [61] | SPCNet | AAAI 2019 | Hybrid | ~ | 0.94 / 0.91 / 0.92 | ~ |
| Liu et al. [80] | BDN | IJCAI 2019 | Hybrid | ~ | 0.887 / 0.894 / 0.89 | ~ |
| Zhou et al. [24] | EAST | CVPR 2017 | Hybrid | ~ | 0.93 / 0.83 / 0.870 | ~ |
| Yue et al. [48] | ~ | BMVC 2018 | Hybrid | ~ | 0.885 / 0.846 / 0.870 | ~ |
| Zhong et al. [53] | AF-RPN | arXiv 2018 | Hybrid | ~ | 0.94 / 0.90 / 0.92 | ~ |

2.2.2 Detection Results on Arbitrary-Quadrilateral-Text Datasets
| Method | Model | Source & Time | Category | IC15 [70] (P / R / F) | MSRA-TD500 [71] (P / R / F) | USTB-SV1K [65] (P / R / F) | SVT [66] (P / R / F) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Le et al. [5] | HOCC | CVPR 2014 | Traditional | ~ | 0.71 / 0.62 / 0.66 | ~ | ~ |
| Yin et al. [7] | ~ | TPAMI 2015 | Traditional | ~ | 0.81 / 0.63 / 0.71 | 0.499 / 0.454 / 0.475 | ~ |
| Wu et al. [9] | ~ | TMM 2015 | Traditional | ~ | 0.63 / 0.70 / 0.66 | ~ | ~ |
| Tian et al. [17] | ~ | IJCAI 2016 | Traditional | ~ | 0.95 / 0.58 / 0.721 | 0.537 / 0.488 / 0.51 | ~ |
| Yang et al. [33] | ~ | TIP 2017 | Traditional | ~ | 0.95 / 0.58 / 0.72 | 0.54 / 0.49 / 0.51 | ~ |
| Liang et al. [8] | ~ | TIP 2015 | Traditional | ~ | 0.74 / 0.66 / 0.70 | ~ | ~ |
| Zhang et al. [14] | ~ | CVPR 2016 | Segmentation | 0.71 / 0.43 / 0.54 | 0.83 / 0.67 / 0.74 | ~ | ~ |
| Zhu et al. [16] | ~ | CVPR 2016 | Segmentation | 0.81 / 0.91 / 0.85 | ~ | ~ | ~ |
| He et al. [18] | Text-CNN | TIP 2016 | Segmentation | ~ | 0.76 / 0.61 / 0.69 | ~ | ~ |
| Yao et al. [21] | ~ | arXiv 2016 | Segmentation | 0.723 / 0.587 / 0.648 | 0.765 / 0.753 / 0.759 | ~ | ~ |
| Hu et al. [27] | WordSup | ICCV 2017 | Segmentation | 0.793 / 0.77 / 0.782 | ~ | ~ | ~ |
| Wu et al. [28] | ~ | ICCV 2017 | Segmentation | 0.91 / 0.78 / 0.84 | 0.77 / 0.78 / 0.77 | ~ | ~ |
| Dai et al. [35] | FTSN | arXiv 2017 | Segmentation | 0.886 / 0.80 / 0.841 | 0.876 / 0.771 / 0.82 | ~ | ~ |
| Deng et al. [40] | PixelLink | AAAI 2018 | Segmentation | 0.855 / 0.820 / 0.837 | 0.830 / 0.732 / 0.778 | ~ | ~ |
| Liu et al. [42] | MCN | CVPR 2018 | Segmentation | 0.72 / 0.80 / 0.76 | 0.88 / 0.79 / 0.83 | ~ | ~ |
| Lyu et al. [43] | ~ | CVPR 2018 | Segmentation | 0.895 / 0.797 / 0.843 | 0.876 / 0.762 / 0.815 | ~ | ~ |
| Chu et al. [45] | Border | ECCV 2018 | Segmentation | ~ | 0.830 / 0.774 / 0.801 | ~ | ~ |
| Long et al. [46] | TextSnake | ECCV 2018 | Segmentation | 0.849 / 0.804 / 0.826 | 0.832 / 0.739 / 0.783 | ~ | ~ |
| Yang et al. [47] | IncepText | IJCAI 2018 | Segmentation | 0.938 / 0.873 / 0.905 | 0.875 / 0.790 / 0.830 | ~ | ~ |
| Wang et al. [54] | PSENet | CVPR 2019 | Segmentation | 0.8692 / 0.845 / 0.8569 | ~ | ~ | ~ |
| Xu et al. [57] | TextField | arXiv 2018 | Segmentation | 0.843 / 0.805 / 0.824 | 0.874 / 0.759 / 0.813 | ~ | ~ |
| Tian et al. [58] | FTDN | ICIP 2018 | Segmentation | 0.847 / 0.773 / 0.809 | ~ | ~ | ~ |
| Tian et al. [83] | ~ | CVPR 2019 | Segmentation | 0.883 / 0.850 / 0.866 | 0.842 / 0.817 / 0.829 | ~ | ~ |
| Baek et al. [62] | CRAFT | CVPR 2019 | Segmentation | 0.898 / 0.843 / 0.869 | 0.882 / 0.782 / 0.829 | ~ | ~ |
| Gupta et al. [15] | FCRN | CVPR 2016 | Regression | ~ | ~ | ~ | 0.651 / 0.599 / 0.624 |
| Liu et al. [25] | DMPNet | CVPR 2017 | Regression | 0.732 / 0.682 / 0.706 | ~ | ~ | ~ |
| He et al. [26] | DDR | ICCV 2017 | Regression | 0.82 / 0.80 / 0.81 | 0.77 / 0.70 / 0.74 | ~ | ~ |
| Jiang et al. [36] | R2CNN | arXiv 2017 | Regression | 0.856 / 0.797 / 0.825 | ~ | ~ | ~ |
| Xing et al. [37] | ArbiText | arXiv 2017 | Regression | 0.792 / 0.735 / 0.759 | 0.78 / 0.72 / 0.75 | ~ | ~ |
| Wang et al. [41] | ITN | CVPR 2018 | Regression | 0.857 / 0.741 / 0.795 | 0.903 / 0.723 / 0.803 | ~ | ~ |
| Liao et al. [44] | RRD | CVPR 2018 | Regression | 0.88 / 0.8 / 0.838 | 0.876 / 0.73 / 0.79 | ~ | ~ |
| Liao et al. [49] | TextBoxes++ | TIP 2018 | Regression | 0.878 / 0.785 / 0.829 | ~ | ~ | ~ |
| He et al. [50] | ~ | TIP 2018 | Regression | 0.85 / 0.80 / 0.82 | 0.91 / 0.81 / 0.86 | ~ | ~ |
| Ma et al. [51] | RRPN | TMM 2018 | Regression | 0.822 / 0.732 / 0.774 | 0.821 / 0.677 / 0.742 | ~ | ~ |
| Zhu et al. [55] | SLPR | arXiv 2018 | Regression | 0.855 / 0.836 / 0.845 | ~ | ~ | ~ |
| Deng et al. [56] | ~ | arXiv 2018 | Regression | 0.89 / 0.81 / 0.845 | ~ | ~ | ~ |
| Sabyasachi et al. [60] | RGC | ICIP 2018 | Regression | 0.83 / 0.81 / 0.82 | 0.85 / 0.76 / 0.80 | ~ | ~ |
| Wang et al. [82] | ~ | CVPR 2019 | Regression | 0.892 / 0.86 / 0.876 | 0.852 / 0.821 / 0.836 | ~ | ~ |
| He et al. [29] | SSTD | ICCV 2017 | Regression | 0.80 / 0.73 / 0.77 | ~ | ~ | ~ |
| Tian et al. [13] | CTPN | ECCV 2016 | Regression | 0.74 / 0.52 / 0.61 | ~ | ~ | ~ |
| He et al. [19] | ~ | ACCV 2016 | Regression | ~ | ~ | ~ | 0.87 / 0.73 / 0.79 |
| Shi et al. [23] | SegLink | CVPR 2017 | Regression | 0.731 / 0.768 / 0.75 | 0.86 / 0.70 / 0.77 | ~ | ~ |
| Tang et al. [52] | SSFT | TMM 2018 | Hybrid | ~ | ~ | ~ | 0.541 / 0.758 / 0.631 |
| Xie et al. [61] | SPCNet | AAAI 2019 | Hybrid | 0.89 / 0.86 / 0.87 | ~ | ~ | ~ |
| Liu et al. [64] | PMTD | arXiv 2019 | Hybrid | 0.913 / 0.874 / 0.893 | ~ | ~ | ~ |
| Liu et al. [80] | BDN | IJCAI 2019 | Hybrid | 0.881 / 0.846 / 0.863 | 0.87 / 0.815 / 0.842 | ~ | ~ |
| Zhang et al. [81] | LOMO | CVPR 2019 | Hybrid | 0.878 / 0.876 / 0.877 | ~ | ~ | ~ |
| Zhou et al. [24] | EAST | CVPR 2017 | Hybrid | 0.833 / 0.783 / 0.807 | 0.873 / 0.674 / 0.761 | ~ | ~ |
| Yue et al. [48] | ~ | BMVC 2018 | Hybrid | 0.866 / 0.789 / 0.823 | ~ | ~ | 0.691 / 0.660 / 0.675 |
| Zhong et al. [53] | AF-RPN | arXiv 2018 | Hybrid | 0.89 / 0.83 / 0.86 | ~ | ~ | ~ |
| Method | Model | Source & Time | Category | COCO-Text [72] (P / R / F) | RCTW-17 [73] (P / R / F) | MLT [76] (P / R / F) | OSTD [77] (P / R / F) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Le et al. [5] | HOCC | CVPR 2014 | Traditional | ~ | ~ | ~ | 0.80 / 0.73 / 0.76 |
| Yao et al. [21] | ~ | arXiv 2016 | Segmentation | 0.432 / 0.27 / 0.333 | ~ | ~ | ~ |
| Hu et al. [27] | WordSup | ICCV 2017 | Segmentation | 0.452 / 0.309 / 0.368 | ~ | ~ | ~ |
| Lyu et al. [43] | ~ | CVPR 2018 | Segmentation | 0.351 / 0.348 / 0.349 | ~ | 0.743 / 0.706 / 0.724 | ~ |
| Chu et al. [45] | Border | ECCV 2018 | Segmentation | ~ | 0.782 / 0.588 / 0.671 | 0.777 / 0.621 / 0.690 | ~ |
| Yang et al. [47] | IncepText | IJCAI 2018 | Segmentation | ~ | 0.785 / 0.569 / 0.660 | ~ | ~ |
| Wang et al. [54] | PSENet | CVPR 2019 | Segmentation | ~ | ~ | 0.7535 / 0.6918 / 0.7213 | ~ |
| Baek et al. [62] | CRAFT | CVPR 2019 | Segmentation | ~ | ~ | 0.806 / 0.682 / 0.739 | ~ |
| He et al. [29] | SSTD | ICCV 2017 | Regression | 0.46 / 0.31 / 0.37 | ~ | ~ | ~ |
| Gupta et al. [15] | FCRN | CVPR 2016 | Regression | ~ | ~ | 0.844 / 0.763 / 0.801 | ~ |
| Liao et al. [49] | TextBoxes++ | TIP 2018 | Regression | 0.61 / 0.57 / 0.59 | ~ | ~ | ~ |
| Ma et al. [51] | RRPN | TMM 2018 | Regression | ~ | ~ | 0.7669 / 0.5794 / 0.6601 | ~ |
| Deng et al. [56] | ~ | arXiv 2018 | Regression | 0.555 / 0.633 / 0.591 | ~ | ~ | ~ |
| Cai et al. [59] | FFN | ICIP 2018 | Regression | 0.43 / 0.35 / 0.39 | ~ | ~ | ~ |
| Xie et al. [79] | DeRPN | AAAI 2019 | Regression | 0.586 / 0.557 / 0.571 | ~ | ~ | ~ |
| Xie et al. [61] | SPCNet | AAAI 2019 | Hybrid | ~ | ~ | 0.806 / 0.686 / 0.741 | ~ |
| Liu et al. [64] | PMTD | arXiv 2019 | Hybrid | ~ | ~ | 0.844 / 0.763 / 0.801 | ~ |
| Liu et al. [80] | BDN | IJCAI 2019 | Hybrid | ~ | ~ | 0.791 / 0.698 / 0.742 | ~ |
| Zhang et al. [81] | LOMO | CVPR 2019 | Hybrid | ~ | 0.791 / 0.602 / 0.684 | 0.802 / 0.672 / 0.731 | ~ |
| Zhou et al. [24] | EAST | CVPR 2017 | Hybrid | 0.504 / 0.324 / 0.395 | ~ | ~ | ~ |
| Zhong et al. [53] | AF-RPN | arXiv 2018 | Hybrid | ~ | ~ | 0.75 / 0.66 / 0.70 | ~ |

2.2.3 Detection Results on Irregular-Text Datasets

In this section, we only select those methods suitable for irregular text detection.

| Method | Model | Source & Time | Category | Total-Text [74] (P / R / F) | SCUT-CTW1500 [75] (P / R / F) |
| --- | --- | --- | --- | --- | --- |
| Baek et al. [62] | CRAFT | CVPR 2019 | Segmentation | 0.876 / 0.799 / 0.836 | 0.860 / 0.811 / 0.835 |
| Long et al. [46] | TextSnake | ECCV 2018 | Segmentation | 0.827 / 0.745 / 0.784 | 0.679 / 0.853 / 0.756 |
| Tian et al. [83] | ~ | CVPR 2019 | Segmentation | ~ | 0.817 / 0.842 / 0.801 |
| Wang et al. [54] | PSENet | CVPR 2019 | Segmentation | 0.840 / 0.779 / 0.809 | 0.848 / 0.797 / 0.822 |
| Zhu et al. [55] | SLPR | arXiv 2018 | Regression | ~ | 0.801 / 0.701 / 0.748 |
| Liu et al. [63] | CTD+TLOC | PR 2019 | Regression | ~ | 0.774 / 0.698 / 0.734 |
| Wang et al. [82] | ~ | CVPR 2019 | Regression | ~ | 0.801 / 0.802 / 0.801 |
| Liu et al. [84] | ~ | CVPR 2019 | Regression | 0.814 / 0.791 / 0.802 | 0.787 / 0.761 / 0.774 |
| Zhang et al. [81] | LOMO | CVPR 2019 | Hybrid | 0.876 / 0.793 / 0.833 | 0.857 / 0.765 / 0.808 |
| Xie et al. [61] | SPCNet | AAAI 2019 | Hybrid | 0.83 / 0.83 / 0.83 | ~ |

3. Survey

[A] [TPAMI-2015] Ye Q, Doermann D. Text detection and recognition in imagery: A survey[J]. IEEE transactions on pattern analysis and machine intelligence, 2015, 37(7): 1480-1500. paper

[B] [Frontiers-Comput. Sci-2016] Zhu Y, Yao C, Bai X. Scene text detection and recognition: Recent advances and future trends[J]. Frontiers of Computer Science, 2016, 10(1): 19-36. paper

[C] [arXiv-2018] Long S, He X, Yao C. Scene Text Detection and Recognition: The Deep Learning Era[J]. arXiv preprint arXiv:1811.04256, 2018. paper

4. Evaluation

If you are interested in developing better scene text detection metrics, the references below might be useful.
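The P/R/F numbers reported in Section 2.2 come from protocols of this kind: each ground-truth region may be matched by at most one detection above an IoU threshold (0.5 in the IC15 protocol), precision divides matches by detections, recall divides them by ground truths, and F is their harmonic mean. A minimal axis-aligned sketch, with illustrative helper names; real protocols use polygon IoU and exclude don't-care regions:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)


def detection_prf(gts, dets, thresh=0.5):
    """Greedy one-to-one matching at an IoU threshold, then precision,
    recall, and F-measure (harmonic mean)."""
    used = set()
    matched = 0
    for g in gts:
        best_iou, best_j = 0.0, None
        for j, d in enumerate(dets):
            if j in used:
                continue
            o = iou(g, d)
            if o > best_iou:
                best_iou, best_j = o, j
        if best_j is not None and best_iou >= thresh:
            used.add(best_j)
            matched += 1
    p = matched / float(len(dets)) if dets else 0.0
    r = matched / float(len(gts)) if gts else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

Note that greedy matching is a simplification: the references below discuss, among other things, how one-to-many and many-to-one matches and tightness of the boxes should be scored.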

[A] Wolf, Christian, and Jean-Michel Jolion. “Object count/area graphs for the evaluation of object detection and segmentation algorithms.” International Journal of Document Analysis and Recognition (IJDAR) 8.4 (2006): 280-296. paper

[B] D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. K. Ghosh, A. D.Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, F. Shafait, S. Uchida, and E. Valveny. ICDAR 2015 competition on robust reading. In ICDAR, pages 1156–1160, 2015. paper

[C] Calarasanu, Stefania, Jonathan Fabrizio, and Severine Dubuisson. “What is a good evaluation protocol for text localization systems? Concerns, arguments, comparisons and solutions.” Image and Vision Computing 46 (2016): 1-17. paper

[D] Shi, Baoguang, et al. “ICDAR2017 competition on reading chinese text in the wild (RCTW-17).” 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). Vol. 1. IEEE, 2017. paper

[E] Nayef, N., Yin, F., Bizid, I., et al. ICDAR2017 robust reading challenge on multi-lingual scene text detection and script identification (RRC-MLT). In Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, volume 1, 1454–1459. IEEE, 2017. paper

[F] Dangla, Aliona, et al. “A first step toward a fair comparison of evaluation protocols for text detection algorithms.” 2018 13th IAPR International Workshop on Document Analysis Systems (DAS). IEEE, 2018. paper

[G] He,Mengchao and Liu, Yuliang, et al. ICPR2018 Contest on Robust Reading for Multi-Type Web images. ICPR 2018. paper

[H] Liu, Yuliang and Jin, Lianwen, et al. “Tightness-aware Evaluation Protocol for Scene Text Detection” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2019. paper code

5. OCR Service

  • Tesseract OCR Engine
  • Azure
  • ABBYY
  • OCR Space
  • SODA PDF OCR
  • Free Online OCR
  • Online OCR
  • Super Tools
  • Online Chinese Recognition
  • Calamari OCR
  • Tencent OCR

6. References and Code

[1] Yao C, Bai X, Liu W, et al. Detecting texts of arbitrary orientations in natural images. 2012 IEEE Conference on Computer Vision and Pattern Recognition(CVPR), 2012: 1083-1090. Paper
[2] Yin X C, Yin X, Huang K, et al. Robust text detection in natural scene images. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2013, 36(5): 970-83. Paper
[3] Li Y, Jia W, Shen C, et al. Characterness: An indicator of text in the wild. IEEE transactions on image processing, 2014, 23(4): 1666-1677. Paper
[4] Huang W, Qiao Y, Tang X. Robust scene text detection with convolution neural network induced mser trees. European Conference on Computer Vision(ECCV), 2014: 497-511. Paper
[5] Kang L, Li Y, Doermann D. Orientation robust text line detection in natural images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 4034-4041. Paper
[6] Sun L, Huo Q, Jia W, et al. A robust approach for text detection from natural scene images. Pattern Recognition, 2015, 48(9): 2906-2920. Paper
[7] Yin X C, Pei W Y, Zhang J, et al. Multi-orientation scene text detection with adaptive clustering. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2015 (9): 1930-1937. Paper
[8] Liang G, Shivakumara P, Lu T, et al. Multi-spectral fusion based approach for arbitrarily oriented scene text detection in video images. IEEE Transactions on Image Processing, 2015, 24(11): 4488-4501. Paper
[9] Wu L, Shivakumara P, Lu T, et al. A New Technique for Multi-Oriented Scene Text Line Detection and Tracking in Video. IEEE Trans. Multimedia, 2015, 17(8): 1137-1152. Paper
[10] Zhang Z, Shen W, Yao C, et al. Symmetry-based text line detection in natural scenes. IEEE Conference on Computer Vision & Pattern Recognition(CVPR), 2015. Paper
[11] Tian S, Pan Y, Huang C, et al. Text flow: A unified text detection system in natural scene images. Proceedings of the IEEE international conference on computer vision(ICCV). 2015: 4651-4659. Paper
[12] Buta M, et al. FASText: Efficient unconstrained scene text detector. 2015 IEEE International Conference on Computer Vision (ICCV). 2015: 1206-1214. Paper
[13] Tian Z, Huang W, He T, et al. Detecting text in natural image with connectionist text proposal network. European conference on computer vision(ECCV), 2016: 56-72. Paper Code
[14] Zhang Z, Zhang C, Shen W, et al. Multi-oriented text detection with fully convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(CVPR). 2016: 4159-4167. Paper
[15] Gupta A, Vedaldi A, Zisserman A. Synthetic data for text localisation in natural images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(CVPR). 2016: 2315-2324. Paper Code
[16] S. Zhu and R. Zanibbi, A Text Detection System for Natural Scenes with Convolutional Feature Learning and Cascaded Classification, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 625-632. Paper
[17] Tian S, Pei W Y, Zuo Z Y, et al. Scene Text Detection in Video by Learning Locally and Globally. IJCAI. 2016: 2647-2653. Paper
[18] He T, Huang W, Qiao Y, et al. Text-attentional convolutional neural network for scene text detection. IEEE transactions on image processing, 2016, 25(6): 2529-2541. Paper
[19] He, Dafang and Yang, Xiao and Huang, Wenyi and Zhou, Zihan and Kifer, Daniel and Giles, C Lee. Aggregating local context for accurate scene text detection. ACCV, 2016. Paper
[20] Zhong Z, Jin L, Zhang S, et al. Deeptext: A unified framework for text proposal generation and text detection in natural images. arXiv preprint arXiv:1605.07314, 2016. Paper
[21] Yao C, Bai X, Sang N, et al. Scene text detection via holistic, multi-channel prediction. arXiv preprint arXiv:1606.09002, 2016. Paper
[22] Liao M, Shi B, Bai X, et al. TextBoxes: A Fast Text Detector with a Single Deep Neural Network. AAAI. 2017: 4161-4167. Paper Code
[23] Shi B, Bai X, Belongie S. Detecting Oriented Text in Natural Images by Linking Segments. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017: 3482-3490. Paper Code
[24] Zhou X, Yao C, Wen H, et al. EAST: an efficient and accurate scene text detector. CVPR, 2017: 2642-2651. Paper Code
[25] Liu Y, Jin L. Deep matching prior network: Toward tighter multi-oriented text detection. CVPR, 2017: 3454-3461. Paper
[26] He W, Zhang X Y, Yin F, et al. Deep Direct Regression for Multi-Oriented Scene Text Detection. Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2017: 745-753. Paper
[27] Hu H, Zhang C, Luo Y, et al. Wordsup: Exploiting word annotations for character based text detection. ICCV, 2017. Paper
[28] Wu Y, Natarajan P. Self-organized text detection with minimal post-processing via border learning. ICCV, 2017. Paper
[29] He P, Huang W, He T, et al. Single shot text detector with regional attention. The IEEE International Conference on Computer Vision (ICCV). 2017, 6(7). Paper Code
[30] Tian S, Lu S, Li C. WeText: Scene Text Detection under Weak Supervision. ICCV, 2017. Paper
[31] Zhu, Xiangyu and Jiang, Yingying et al. Deep Residual Text Detection Network for Scene Text. ICDAR, 2017. Paper
[32] Tang Y, Wu X. Scene Text Detection and Segmentation Based on Cascaded Convolution Neural Networks. IEEE Transactions on Image Processing, 2017, 26(3): 1509-1520. Paper
[33] Yang C, Yin X C, Pei W Y, et al. Tracking Based Multi-Orientation Scene Text Detection: A Unified Framework with Dynamic Programming. IEEE Transactions on Image Processing, 2017. Paper
[34] X. Ren, Y. Zhou, J. He, K. Chen, X. Yang and J. Sun, A Convolutional Neural Network-Based Chinese Text Detection Algorithm via Text Structure Modeling. in IEEE Transactions on Multimedia, vol. 19, no. 3, pp. 506-518, March 2017. Paper
[35] Dai Y, Huang Z, Gao Y, et al. Fused text segmentation networks for multi-oriented scene text detection. arXiv preprint arXiv:1709.03272, 2017. Paper
[36] Jiang Y, Zhu X, Wang X, et al. R2CNN: rotational region CNN for orientation robust scene text detection. arXiv preprint arXiv:1706.09579, 2017. Paper
[37] Xing D, Li Z, Chen X, et al. ArbiText: Arbitrary-Oriented Text Detection in Unconstrained Scene. arXiv preprint arXiv:1711.11249, 2017. Paper
[38] C. Wang, F. Yin and C. Liu, Scene Text Detection with Novel Superpixel Based Character Candidate Extraction. in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2017, pp. 929-934. Paper
[39] Sheng Zhang, Yuliang Liu, Lianwen Jin et al. Feature Enhancement Network: A Refined Scene Text Detector. In AAAI 2018. Paper
[40] Dan Deng et al. PixelLink: Detecting Scene Text via Instance Segmentation. In AAAI 2018. Paper Code
[41] Fangfang Wang, Liming Zhao, Xi Li et al. Geometry-Aware Scene Text Detection with Instance Transformation Network. In CVPR 2018. Paper
[42] Zichuan Liu, Guosheng Lin, Sheng Yang et al. Learning Markov Clustering Networks for Scene Text Detection. In CVPR 2018. Paper
[43] Pengyuan Lyu, Cong Yao, Wenhao Wu et al. Multi-Oriented Scene Text Detection via Corner Localization and Region Segmentation. In CVPR 2018. Paper
[44] Minghui Liao, Zhen Zhu, Baoguang Shi. Rotation-Sensitive Regression for Oriented Scene Text Detection. In CVPR 2018. Paper
[45] Chuhui Xue et al. Accurate Scene Text Detection through Border Semantics Awareness and Bootstrapping. In ECCV 2018. Paper
[46] Long, Shangbang and Ruan, Jiaqiang, et al. TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes. In ECCV, 2018. Paper
[47] Qiangpeng Yang, Mengli Cheng et al. IncepText: A New Inception-Text Module with Deformable PSROI Pooling for Multi-Oriented Scene Text Detection. In IJCAI 2018. Paper
[48] Xiaoyu Yue et al. Boosting up Scene Text Detectors with Guided CNN. In BMVC 2018. Paper
[49] Liao M, Shi B, Bai X. TextBoxes++: A Single-Shot Oriented Scene Text Detector. IEEE Transactions on Image Processing, 2018, 27(8): 3676-3690. Paper Code
[50] W. He, X. Zhang, F. Yin and C. Liu, Multi-Oriented and Multi-Lingual Scene Text Detection With Direct Regression, in IEEE Transactions on Image Processing, vol. 27, no. 11, pp.5406-5419, 2018. Paper
[51] Ma J, Shao W, Ye H, et al. Arbitrary-oriented scene text detection via rotation proposals. In IEEE Transactions on Multimedia, 2018. Paper Code
[52] Youbao Tang and Xiangqian Wu. Scene Text Detection Using Superpixel-Based Stroke Feature Transform and Deep Learning Based Region Classification. In TMM, 2018. Paper
[53] Zhuoyao Zhong, Lei Sun and Qiang Huo. An Anchor-Free Region Proposal Network for Faster R-CNN based Text Detection Approaches. arXiv preprint arXiv:1804.09003. 2018. Paper
[54] Wenhai Wang, Enze Xie, et al. Shape Robust Text Detection with Progressive Scale Expansion Network. In CVPR 2019. Paper Code
[55] Zhu Y, Du J. Sliding Line Point Regression for Shape Robust Scene Text Detection. arXiv preprint arXiv:1801.09969, 2018. Paper
[56] Linjie D, Yanxiang Gong, et al. Detecting Multi-Oriented Text with Corner-based Region Proposals. arXiv preprint arXiv: 1804.02690, 2018. Paper Code
[57] Yongchao Xu, Yukang Wang, Wei Zhou, et al. TextField: Learning A Deep Direction Field for Irregular Scene Text Detection. arXiv preprint arXiv: 1812.01393, 2018. Paper
[58] Xiaowei Tian, Dao Wu, Rui Wang, Xiaochun Cao. Focal Text: an Accurate Text Detection with Focal Loss. In ICIP 2018. Paper
[59] Chenqin C, Pin L, Bing S. Feature Fusion Network for Scene Text Detection. In ICIP, 2018. Paper
[60] Sabyasachi Mohanty et al. Recurrent Global Convolutional Network for Scene Text Detection. In ICIP 2018. Paper
[61] Enze Xie, et al. Scene Text Detection with Supervised Pyramid Context Network. In AAAI 2019. Paper
[62] Youngmin Baek, Bado Lee, et al. Character Region Awareness for Text Detection. In CVPR 2019. Paper
[63] Yuliang Liu, Lianwen Jin, Shuaitao Zhang, et al. Curved Scene Text Detection via Transverse and Longitudinal Sequence Connection. Pattern Recognition, 2019. Paper Code
[64] Jingchao Liu, Xuebo Liu, et al, Pyramid Mask Text Detector. arXiv preprint arXiv:1903.11800, 2019. Paper Code
[79] Lele Xie, Yuliang Liu, Lianwen Jin, Zecheng Xie, DeRPN: Taking a further step toward more general object detection. In AAAI, 2019. Paper Code
[80] Yuliang Liu, Lianwen Jin, et al, Omnidirectional Scene Text Detection with Sequential-free Box Discretization. In IJ