
AI小作坊's Blog

The great way is simplicity; heaven and man are one.

  • Blog posts (17)
  • Resources (40)
  • Forum (1)

[Original] Object Detection Summary: R-CNN, SPP-Net, Fast R-CNN, Faster R-CNN

R-CNN, CVPR 2014, Rich feature hierarchies for accurate object detection and semantic segmentation. https://github.com/rbgirshick/rcnn Since AlexNet was proposed in 2012, deep learning has taken off, at first concentrated mainly on image classification. R-CNN brought convolutional networks into the field of object detection…

2016-12-29 15:12:10 3005

[Original] How Does SPP-Net Let a CNN Accept Input Images of Arbitrary Size?

ECCV 2014, Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. The problem addressed: "there is a technical issue in the training and testing of the CNNs: the prevalent CNNs require a fixed input…"
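The pooling trick is easy to state in code. As a rough numpy sketch (not the paper's Caffe implementation; the bin splitting here is a simplification of the paper's ceil/floor scheme), max-pooling a feature map of any spatial size into a fixed pyramid of bins gives a fixed-length vector:

```python
import numpy as np

def spp(fmap, levels=(4, 2, 1)):
    # Spatial pyramid pooling: max-pool an H x W x C feature map into
    # n x n bins for each pyramid level; the concatenated output length
    # depends only on `levels` and C, never on H or W.
    h, w, c = fmap.shape
    out = []
    for n in levels:
        ys = np.linspace(0, h, n + 1).astype(int)
        xs = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                # guard against empty bins when the map is smaller than the grid
                bin_ = fmap[ys[i]:max(ys[i + 1], ys[i] + 1),
                            xs[j]:max(xs[j + 1], xs[j] + 1), :]
                out.append(bin_.max(axis=(0, 1)))
    return np.concatenate(out)
```

With levels (4, 2, 1) and C channels the output always has C * (16 + 4 + 1) values, which is exactly what lets the fully connected layers that follow accept images of any size.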

2016-12-28 16:17:37 9908 2

[Original] Deep Residual Networks - Deep Residual Learning for Image Recognition

CVPR 2016. Code: https://github.com/KaimingHe/deep-residual-networks To address the network depth problem in CNNs, this paper proposes deep residual learning, which lets networks reach as many as 1000 layers with good results in both training speed and accuracy. 1 Introduction: after AlexNet in 2012, VGG and GoogLeNet followed in turn, from which we can see that the depth of network models…
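The core idea fits in a few lines. A minimal numpy sketch of an identity-shortcut block (the weights and shapes here are illustrative; the real blocks use convolutions, batch normalization, and a projection shortcut when dimensions change):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # The stacked layers learn the residual F(x); the block outputs F(x) + x,
    # so representing an identity mapping only requires driving F(x) to zero.
    return relu(x + w2 @ relu(w1 @ x))
```

This is why extreme depth stops hurting: the `+ x` shortcut gives gradients an unobstructed path through every block.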

2016-12-28 09:45:04 1668 1

[Original] Network Models: Inception V2/V3 - Rethinking the Inception Architecture for Computer Vision

https://github.com/Moodstocks/inception-v3.torch This paper rethinks and improves the Inception architecture of the GoogLeNet model, giving Inception V3; Going Deeper with Convolutions was Inception V1, and Batch Normalization was Inception V2. 1 Introduction…

2016-12-27 11:38:18 2135

[Original] Image Super-Resolution Using Deep Convolutional Networks

Image Super-Resolution Using Deep Convolutional Networks. Code: http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html This paper uses a three-layer convolutional network for image super-resolution, with good results. 1 Introduction: single-image super-resolution means reconstructing a high-resolution image from a low-resolution one; in computer vision this is a…

2016-12-23 16:53:22 3122

[Original] Installing CUDA 7.5 on a GTX 1080

On a GTX 1080 the CUDA toolkit and the driver must be installed separately: download the toolkit and driver files individually and install each with the .run method. (1) Pre-install actions: 1. Verify You Have a CUDA-Capable GPU. Run the check and confirm the hardware supports CUDA; as long as your model is listed at https://developer.nvidia.com/cuda-gpus, you are fine…

2016-12-20 13:55:34 1533

[Original] Caffe LeNet-5: Parsing lenet_solver.prototxt

For the LeNet-5 model, a number of parameters are involved in training and testing, and Caffe defines them in the parameter description file lenet_solver.prototxt: # The train/test net protocol buffer definition; net: "examples/mnist/lenet_train_test.prototxt" (the file location); # test…
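For reference, a solver file of this shape looks like the following. The values shown are the stock MNIST example defaults as I recall them, so double-check them against the copy in your own Caffe checkout:

```
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
test_iter: 100        # forward passes per test phase
test_interval: 500    # test every 500 training iterations
base_lr: 0.01         # base learning rate
momentum: 0.9
weight_decay: 0.0005
lr_policy: "inv"      # learning-rate decay policy
gamma: 0.0001
power: 0.75
display: 100          # log every 100 iterations
max_iter: 10000
snapshot: 5000        # save model state every 5000 iterations
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
```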

2016-12-13 16:38:18 2503

[Original] Caffe LeNet-5: Parsing lenet_train_test.prototxt

Earlier we analyzed the network description file lenet.prototxt in depth, but it describes the LeNet-5 network in its general form. In actual training and testing the network differs slightly, so how does Caffe define LeNet-5 for training and testing? The corresponding description file is lenet_train_test.prototxt; let's take a close look at it: name: "LeNet" …

2016-12-12 16:39:35 852 2

[Original] Parsing lenet.prototxt, Caffe's Definition File for the LeNet-5 Network

https://github.com/BVLC/caffe/blob/master/examples/mnist holds Caffe's files for LeNet-5; several of them have the .prototxt extension and are generated with the protocol buffer tool. Baidu Baike describes protocol buffers as follows: protocol buffer (PB for short) is a Google…

2016-12-12 15:16:58 2469 1

[Original] How to Train and Predict with LeNet-5 in Caffe

In "An In-Depth Look at LeNet-5" we described the LeNet-5 network structure in detail; next we dig into how Caffe uses this model for prediction. Caffe's LeNet-5 implementation files mainly live at https://github.com/BVLC/caffe/tree/master/examples/mnist The first step is installing Caffe; then we look at how Caffe…

2016-12-12 10:45:15 2125

[Original] An In-Depth Look at LeNet-5

Paper: Gradient-Based Learning Applied to Document Recognition. Reference: http://blog.csdn.net/strint/article/details/44163869 Although the LeNet-5 network is small, it contains the basic building blocks of deep learning: convolutional layers, pooling layers, and fully connected layers, and it is the foundation of other deep models. Here we analyze LeNet-5 in depth and, from there, see how Caffe…

2016-12-09 11:35:53 24447 8

[Original] Training a Three-Layer BP Neural Network for XOR: A Python Implementation

This post uses the network structure below to compute XOR. XOR: 0^0 = 0, 1^0 = 1, 0^1 = 1, 1^1 = 0. The derivation behind the figure is covered in the post "Forward and Backward Propagation in a Three-Layer Neural Network". import numpy as np; def nonlin(x, deriv = False): if(deriv == True): return x*(1-x)…
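Filling in the truncated snippet, a complete, runnable version of this kind of three-layer XOR trainer (the hidden-layer size, iteration count, and bias column are my choices for the sketch, not necessarily the post's exact values):

```python
import numpy as np

def nonlin(x, deriv=False):
    # Sigmoid; when deriv=True, x is assumed to already be the sigmoid output.
    if deriv:
        return x * (1 - x)
    return 1.0 / (1.0 + np.exp(-x))

np.random.seed(1)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)  # third column is a bias input
y = np.array([[0], [1], [1], [0]], dtype=float)                          # XOR of the first two columns

w0 = 2 * np.random.random((3, 4)) - 1  # input  -> hidden weights
w1 = 2 * np.random.random((4, 1)) - 1  # hidden -> output weights

for _ in range(60000):
    l1 = nonlin(X @ w0)                                # forward pass
    l2 = nonlin(l1 @ w1)
    l2_delta = (y - l2) * nonlin(l2, deriv=True)       # backward pass
    l1_delta = (l2_delta @ w1.T) * nonlin(l1, deriv=True)
    w1 += l1.T @ l2_delta                              # weight updates
    w0 += X.T @ l1_delta
```

After training, the outputs in l2 sit close to the XOR targets, so rounding them recovers the truth table.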

2016-12-07 14:10:17 2197

[Original] Forward and Backward Propagation in a Three-Layer Neural Network, Illustrated

Forward and backward signal propagation diagrams for a BP neural network, based mainly on the post "The BP Backpropagation Algorithm". This post traces signal propagation through a three-layer network with two inputs, two hidden units, and one output. Each purple block in the figure is a neuron: it sums its input signals and passes the sum through an activation function (generally nonlinear) to produce its output. Now for the training procedure: training needs data, and for this network the training data are groups of (x1, x2) with the corresponding expected output z. We first…

2016-12-07 09:32:29 10393

[Original] Implementing a Perceptron and a Two-Layer Neural Network in Python

Python 3.4 (because numpy is used). First we implement a perceptron for the mapping [[0,0,1] -> 0, [0,1,1] -> 1, [1,0,1] -> 0, [1,1,1] -> 1]. From the data we can see the input has three channels and the output one. For the activation function we use the sigmoid f(x) = 1/(1+e^(-x))…

2016-12-06 15:08:30 1602

[Original] Batch Normalization

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, ICML 2015. The paper normalizes each layer's input distribution (per training mini-batch) to speed up training, with something of a Dropout effect as a bonus. 1 Introduction: deep learning is progressing quickly in every field…
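The per-mini-batch normalization (the forward pass of the paper's Algorithm 1) is tiny; a numpy sketch, where gamma and beta are the learnable scale and shift (training-time statistics only; inference uses running averages, omitted here):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch (rows of x),
    # then apply the learnable scale gamma and shift beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

Whatever the scale and offset of the incoming activations, each feature leaves the layer with (near) zero mean and unit variance before gamma and beta restore whatever statistics training finds useful.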

2016-12-05 14:46:48 1100

[Original] Python Basics, Part 2

Uses Python 2.6. Based on: http://wiki.jikexueyuan.com/project/learn-python-hard-way/ Receiving arguments: from sys import argv; script, first, second, third = argv; print "The script is called:", script; print "Your first variab…

2016-12-02 16:10:19 277

[Original] Python Basics, Part 1

Uses Python 2.6. Printing: print "Hello World!" Comments: # A comment, this is so you can read your program later. Numbers and math: print 3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6 Variables and naming: cars = 100; carpool_capacity = 100.0; pr…

2016-12-02 10:58:08 323

Vehicle model recognition from frontal view image measurements

This paper deals with a novel vehicle manufacturer and model recognition scheme, which is enhanced by color recognition for more robust results. A probabilistic neural network is assessed as a classifier, and it is demonstrated that relatively simple image processing measurements can be used to obtain high performance vehicle authentication. The proposed system is assisted by previously developed license plate recognition, symmetry axis detection, and image phase congruency calculation modules. The reported results indicate a high recognition rate and a fast processing time, making the system suitable for real-time applications.

2011-10-15

Vehicle Detection and Tracking in Car Video Based on Motion Model

This work aims at real-time in-car video analysis to detect and track vehicles ahead for safety, auto-driving, and target tracing. This paper describes a comprehensive approach to localize target vehicles in video under various environmental conditions. The extracted geometry features from the video are projected onto a 1D profile continuously and are tracked constantly. We rely on temporal information of features and their motion behaviors for vehicle identification, which compensates for the complexity in recognizing vehicle shapes, colors, and types. We model the motion in the field of view probabilistically according to the scene characteristic and vehicle motion model. The Hidden Markov Model is used for separating target vehicles from background, and tracking them probabilistically. We have investigated videos of day and night on different types of roads, showing that our approach is robust and effective in dealing with changes in environment and illumination, and that real time processing becomes possible for vehicle borne cameras.

2011-10-15

Projection and Least Square Fitting

Projection and Least Square Fitting with Perpendicular Offsets based Vehicle License Plate Tilt Correction

2011-10-15

An Algorithm for License Plate Recognition Applied to ITS

An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination, particularly for low-resolution images. The license plates were properly located and segmented at rates of 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall success rate for the license plate reaches 93.54% when the system is used for LPR in various complex conditions.

2011-10-15

A Review of Computer Vision Techniques for the Analysis of Urban Traffic

Automatic video analysis from urban surveillance cameras is a fast-emerging field based on computer vision techniques. We present here a comprehensive review of the state-of-the-art computer vision for traffic video with a critical analysis and an outlook to future research directions. This field is of increasing relevance for intelligent transport systems (ITSs). The decreasing hardware cost and, therefore, the increasing deployment of cameras have opened a wide application field for video analytics. Several monitoring objectives such as congestion, traffic rule violation, and vehicle interaction can be targeted using cameras that were typically originally installed for human operators. Systems for the detection and classification of vehicles on highways have successfully been using classical visual surveillance techniques such as background estimation and motion tracking for some time. The urban domain is more challenging with respect to traffic density, lower camera angles that lead to a high degree of occlusion, and the variety of road users. Methods from object categorization and 3-D modeling have inspired more advanced techniques to tackle these challenges. There is no commonly used data set or benchmark challenge, which makes the direct comparison of the proposed algorithms difficult. In addition, evaluation under challenging weather conditions (e.g., rain, fog, and darkness) would be desirable but is rarely performed. Future work should be directed toward robust combined detectors and classifiers for all road users, with a focus on realistic conditions during evaluation.

2011-10-15

Accuracy of Laplacian Edge Detectors

The sources of error for the edge finding technique proposed by Marr and Hildreth (D. Marr and T. Poggio, Proc. R. Soc. London Ser. B204, 1979, 301–328; D. Marr and E. Hildreth, Proc. R. Soc. London Ser. B.207, 1980, 187–217) are identified, and the magnitudes of the errors are estimated, based on idealized models of the most common error producing situations. Errors are shown to be small for linear illuminations, as well as for nonlinear illuminations with a second derivative less than a critical value. Nonlinear illuminations are shown to lead to spurious contours under some conditions, and some fast techniques for discarding such contours are suggested.
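As a reminder of what the Marr-Hildreth detector actually computes, here is a small numpy sketch of its two ingredients, a sampled LoG kernel and a zero-crossing test over 4-neighbors (the smoothing convolution itself is omitted):

```python
import numpy as np

def log_kernel(size, sigma):
    # Sampled Laplacian-of-Gaussian kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # force zero sum so constant regions give zero response

def zero_crossings(resp):
    # Mark pixels where the LoG response changes sign horizontally or vertically.
    h = np.signbit(resp[:, :-1]) != np.signbit(resp[:, 1:])
    v = np.signbit(resp[:-1, :]) != np.signbit(resp[1:, :])
    out = np.zeros(resp.shape, dtype=bool)
    out[:, :-1] |= h
    out[:-1, :] |= v
    return out
```

The errors the paper analyzes enter through the response, not the zero-crossing test: a nonlinear illumination gradient can push the LoG response through zero where there is no edge, producing exactly the spurious contours discussed above.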

2011-10-12

On Improving the Efficiency of Tensor Voting

This paper proposes two alternative formulations to reduce the high computational complexity of tensor voting, a robust perceptual grouping technique used to extract salient information from noisy data. The first scheme consists of numerical approximations of the votes, which have been derived from an in-depth analysis of the plate and ball voting processes. The second scheme simplifies the formulation while keeping the same perceptual meaning of the original tensor voting: The stick tensor voting and the stick component of the plate tensor voting must reinforce surfaceness, the plate components of both the plate and ball tensor voting must boost curveness, whereas junctionness must be strengthened by the ball component of the ball tensor voting. Two new parameters have been proposed for the second formulation in order to control the potentially conflictive influence of the stick component of the plate vote and the ball component of the ball vote. Results show that the proposed formulations can be used in applications where efficiency is an issue since they have a complexity of order O(1). Moreover, the second proposed formulation has been shown to be more appropriate than the original tensor voting for estimating saliencies by appropriately setting the two new parameters.

2011-10-11

Selecting Critical Patterns Based on Local Geometrical

Pattern selection methods have been traditionally developed with a dependency on a specific classifier. In contrast, this paper presents a method that selects critical patterns deemed to carry essential information applicable to train those types of classifiers which require spatial information of the training data set. Critical patterns include those edge patterns that define the boundary and those border patterns that separate classes. The proposed method selects patterns from a new perspective, primarily based on their location in input space. It determines class edge patterns with the assistance of the approximated tangent hyperplane of a class surface. It also identifies border patterns between classes using local probability. The proposed method is evaluated on benchmark problems using popular classifiers, including multilayer perceptrons, radial basis functions, support vector machines, and nearest neighbors. The proposed approach is also compared with four state-of-the-art approaches and it is shown to provide similar but more consistent accuracy from a reduced data set. Experimental results demonstrate that it selects patterns sufficient to represent class boundary and to preserve the decision surface.

2011-10-11

Fast LOG Filtering Using Recursive Filters

Marr and Hildreth's theory of LoG filtering with multiple scales has been extensively elaborated. One problem with LoG filtering is that it is very time-consuming, especially with a large size of filters. This paper presents a recursive convolution scheme for LoG filtering and a fast algorithm to extract zero-crossings. It has a constant computational complexity per pixel and is independent of the size of the filter. A line buffer is used to determine the locations of zero-crossings along with filtering hence avoiding the need for an additional convolution and extra memory units. Various images have been tested

2011-10-11

A discrete expression of Canny's criteria for step

Optimal filters for edge detection are usually developed in the continuous domain and then transposed by sampling to the discrete domain. Simpler filters are directly defined in the discrete domain. We define criteria to compare filter performances in the discrete domain. Canny has defined (1983, 1986) three criteria to derive the equation of an optimal filter for step edge detection: good detection, good localization, and low-responses multiplicity. These criteria seem to be good candidates for filter comparison. Unfortunately, they have been developed in the continuous domain, and their analytical expressions cannot be used in the discrete domain. We establish three criteria with the same meaning as Canny's.

2011-10-11

The Canny Edge Detector Revisited

Canny (1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a Gaussian. However, Canny's work suffers from two problems. First, his derivation of the localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can however be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (1992). In addition, if we also consider detecting blurred (or non-sharp) Gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a Gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.

2011-08-11

OpenCV 2 Computer Vision Application Programming Cookbook

Overview of OpenCV 2 Computer Vision Application Programming Cookbook:

  • Teaches you how to program computer vision applications in C++ using the different features of the OpenCV library
  • Demonstrates the important structures and functions of OpenCV in detail with complete working examples
  • Describes fundamental concepts in computer vision and image processing
  • Gives you advice and tips to create more effective object-oriented computer vision programs
  • Contains examples with source code and shows results obtained on real images, with detailed explanations and the required screenshots

2011-06-24

Learning based Symmetric Features Selection for Vehicle Detection

This paper describes a symmetric features selection strategy based on a statistical learning method for detecting vehicles with a single moving camera for autonomous driving. Symmetry is a good class of feature for vehicle detection, but deciding which areas have high symmetry, and what threshold to use for segmentation, is hard. Usually, extra assumptions are added by hand, which decreases the robustness of the algorithms. In this paper, we focus on the problem of symmetric features selection using a learning method for the autonomous driving environment. Global symmetry and local symmetry are defined and used to construct a cascaded structure with a one-class classifier followed by a two-class classifier.

2011-04-11

Intensity and Edge-Based Symmetry Detection Applied to Car-Following

We present two methods for detecting symmetry in images, one based directly on the intensity values and another one based on a discrete representation of local orientation. A symmetry finder has been developed which uses the intensity-based method to search an image for compact regions which display some degree of mirror symmetry due to intensity similarities across a straight axis. In a different approach, we look at symmetry as a bilateral relationship between local orientations. A symmetry-enhancing edge detector is presented which indicates edges dependent on the orientations at two different image positions. SEED, as we call it, is a detector element implemented by a feedforward network that holds the symmetry conditions. We use SEED to find the contours of symmetric objects of which we know the axis of symmetry from the intensity-based symmetry finder. The methods presented have been applied to the problem of visually guided car-following. Real-time experiments with a system for automatic headway control on motorways have been successful.

2011-04-11

Accurate Robust Symmetry Estimation

By Stephen Smith and Mark Jenkinson. There are various applications, both in medical and non-medical image analysis, which require the automatic detection of the line (2D images) or plane (3D) of reflective symmetry of objects. There exist relatively simple methods of finding reflective symmetry when object images are complete (i.e., completely symmetric and perfectly segmented from image "background"). A much harder problem is finding the line or plane of symmetry when the object of interest contains asymmetries, and may not have well defined edges.

2011-04-11

Approach of vehicle segmentation based on texture character

2011-04-01

Method of removing moving shadow based on texture

2011-04-01

Environmentally Robust Motion Detection for Video Surveillance

Most video surveillance systems require a manually set motion detection sensitivity level to generate motion alarms. The performance of motion detection algorithms, embedded in closed circuit television (CCTV) cameras and digital video recorders (DVRs), usually depends upon the preselected motion sensitivity level, which is expected to work in all environmental conditions. Due to the preselected sensitivity level, false alarms and detection failures usually exist in video surveillance systems. The proposed motion detection model based upon variational energy provides a robust detection method at various illumination changes and noise levels of image sequences without tuning any parameter manually. We analyze the structure mathematically and demonstrate the effectiveness of the proposed model with numerous experiments in various environmental conditions. Due to the compact structure and efficiency of the proposed model, it could be implemented in a small embedded system.

2011-03-17

Optimal multi-level thresholding using a two-stage Otsu optimization approach

Otsu’s method of image segmentation selects an optimum threshold by maximizing the between-class variance in a gray image. However, this method becomes very time-consuming when extended to a multi-level threshold problem due to the fact that a large number of iterations are required for computing the cumulative probability and the mean of a class. To greatly improve the efficiency of Otsu’s method, a new fast algorithm called the TSMO method (Two-Stage Multithreshold Otsu method) is presented. The TSMO method outperforms Otsu’s method by greatly reducing the iterations required for computing the between-class variance in an image. The experimental results show that the computational time increases exponentially for the conventional Otsu method with an average ratio of about 76. For TSMO-32, the maximum computational time is only 0.463 s when the class number M increases from two to six with relative errors of less than 1% when compared to Otsu’s method. The ratio of computational time of Otsu’s method to TSMO-32 is rather high, up to 109,708, when six classes (M = 6) in an image are used. This result indicates that the proposed method is far more efficient with an accuracy equivalent to Otsu’s method. It also has the advantage of having a small variance in runtimes for different test images.
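For orientation, the single-threshold Otsu criterion that TSMO generalizes can be computed directly from the histogram; a numpy sketch (the cumulative sums here are the same quantities whose repeated recomputation makes the naive multi-level extension so slow):

```python
import numpy as np

def otsu_threshold(gray):
    # Classic single-threshold Otsu: pick t maximizing between-class variance
    # sigma_b^2(t) = (mu_T * omega(t) - mu(t))^2 / (omega(t) * (1 - omega(t))).
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))    # first moment up to t
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # empty classes contribute nothing
    return int(np.argmax(sigma_b))
```

For M classes the naive extension searches M-1 such thresholds jointly, which is the combinatorial blow-up TSMO attacks with its two-stage grouping.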

2011-03-17

A Background Reconstruction Method Based on Double-background

In this paper, we show a new method to reconstruct and update the background. This approach is based on a double background. We use the statistical information of the pixel intensity to construct a background that represents the status over a long time, and construct another background with feedback information from motion detection that represents the recent changes over a short time. This couple of background images is fused to construct and update the background image used for motion detection. The background reconstruction algorithm performs well on the tests that we have applied it to.

2011-03-17

Statistical Change Detection by the Pool Adjacent Violators Algorithm

In this paper we present a statistical change detection approach aimed at being robust with respect to the main disturbance factors acting in real-world applications, such as illumination changes, camera gain and exposure variations, and noise. We rely on modeling the effects of disturbance factors on images as locally order-preserving transformations of pixel intensities plus additive noise. This allows us to identify within the space of all the possible image change patterns the subspace corresponding to disturbance factors effects. Hence, scene changes can be detected by a-contrario testing the hypothesis that the measured pattern is due to disturbance factors, that is by computing a distance between the pattern and the subspace. By assuming additive Gaussian noise, the distance can be computed within a maximum likelihood non-parametric isotonic regression framework. In particular, the projection of the pattern onto the subspace is computed by an O(N) iterative procedure known as the Pool Adjacent Violators algorithm.

2011-03-17

Cooperative Fusion of Stereo and Motion

This paper presents a new matching algorithm based on cooperative fusion of stereo and motion cues. In this algorithm, stereo disparity and image flow values are recovered from two successive pairs of stereo images by solving the stereo and motion correspondence problems. Feature points are extracted from the images as matching objects. The entire matching process consists of a network of four subprocesses (two for stereo and two for motion). Each of the subprocesses can access information from connected nodes to perform the disambiguation. The "best" matches are obtained in a relaxation manner using the 3-D continuity constraint. Experimental results are presented to illustrate the performance of the proposed method.

2011-03-09

A Treatise on Mathematical Theory of Elasticity (1944)(ISBN 0486601749)

Love, A Treatise on Mathematical Theory of Elasticity (1944)(ISBN 0486601749).djvu — Part 3 of 3

2011-02-27

A Treatise on Mathematical Theory of Elasticity (1944)(ISBN 0486601749)

Love, A Treatise on Mathematical Theory of Elasticity (1944)(ISBN 0486601749).djvu — Part 2 of 3

2011-02-27

Love, A Treatise on Mathematical Theory of Elasticity (1944)(ISBN 0486601749)

Love, A Treatise on Mathematical Theory of Elasticity (1944)(ISBN 0486601749) — Part 1 of 3

2011-02-27

Computation of Real-Time Optical Flow Based on Corner Features

This paper describes an approach to real-time optical flow computation that combines corner features and the pyramid Lucas-Kanade method. Corners, instead of all the points in the image, are taken into the optical flow computation, which reduces the amount of calculation to a large extent. The experiment has shown that using this optical flow algorithm to track targets is effective and could meet the requirements of real-time applications.
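The per-corner computation such methods build on is the classic Lucas-Kanade 2x2 solve; a single-window numpy sketch (no pyramid and no corner detector here; restricting the solve to corners is exactly what keeps the 2x2 matrix well conditioned):

```python
import numpy as np

def lk_flow(img1, img2, y, x, win=7):
    # Solve the Lucas-Kanade normal equations in a window around (y, x):
    # [sum Ix^2   sum IxIy] [u]   [sum Ix*It]
    # [sum IxIy   sum Iy^2] [v] = -[sum Iy*It]
    Iy, Ix = np.gradient(img1)          # spatial gradients (axis 0, axis 1)
    It = img2 - img1                    # temporal derivative
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)        # (u, v): displacement in x and y
```

On a flat or single-edge patch, A is (near) singular and the solve is unreliable, which is why the flow is only evaluated at corner features.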

2011-02-24

II-LK – A Real-Time Implementation for Sparse Optical Flow

In this paper we present an approach to speed up the computation of sparse optical flow fields by means of integral images and provide implementation details. Proposing a modification of the Lucas-Kanade energy functional allows us to use integral images and thus to speed up the method notably while affecting only slightly the quality of the computed optical flow. The approach is combined with an efficient scanline algorithm to reduce the computation of integral images to those areas where there are features to be tracked. The proposed method can speed up current surveillance algorithms used for scene description and crowd analysis.

2011-02-24

Medical Image Reconstruction: A Conceptual Tutorial (PDF)

"Medical Image Reconstruction: A Conceptual Tutorial" introduces the classical and modern image reconstruction technologies, such as two-dimensional (2D) parallel-beam and fan-beam imaging, three-dimensional (3D) parallel ray, parallel plane, and cone-beam imaging. This book presents both analytical and iterative methods of these technologies and their applications in X-ray CT (computed tomography), SPECT (single photon emission computed tomography), PET (positron emission tomography), and MRI (magnetic resonance imaging). Contemporary research results in exact region-of-interest (ROI) reconstruction with truncated projections, Katsevich's cone-beam filtered backprojection algorithm, and reconstruction with highly undersampled data with l0-minimization are also included.

2011-02-24

Extraction and recognition of license plates of motorcycles and vehicles on highways

2011-02-22

High Performance Implementation of License Plate Recognition in Image Sequences

2011-02-22

Vs-star -- A visual interpretation system for visual surveillance

2011-02-22

Robust fragments-based tracking with adaptive feature selection

2011-02-22

Robust and automated unimodal histogram thresholding and potential applications

2011-02-22

A Survey of Corner Detection Methods -- 毛雁明, 兰美辉

By implementation approach, corner detection methods fall into two broad classes: edge-based methods and methods based on gray-level change. The paper analyzes and compares existing corner detection methods in some detail and points out directions for research and development in corner detection.

2011-02-22

Research on Corner Detection Techniques in Image Fusion

2011-02-22

Fast image region growing

2011-02-22

Simple Low Level Features for Image Analysis

2011-02-22

Extracting Straight Lines

Extracting Straight Lines -- line detection, edge detection

2011-02-22

Corner Detection Algorithms for Digital Images in Last Three Decades

2011-02-22

Application of Shape Analysis Techniques for the Classification of Vehicles

2011-02-22

O天涯海阁O's Message Board

Posted 2020-01-02, last reply 2020-01-02
