Ullrich Köthe's Publications
Papers are available in PDF and compressed PostScript format. The latter can be decompressed using gzip (Unix/Linux) and 7-zip or WinZip (Windows). Please drop me an e-mail if you have printing problems with these files.
Theses
U. Köthe:
Reliable Low-Level Image Analysis
Habilitation thesis, Department of Informatics, University of Hamburg, 318 pages, Hamburg 2008
Abstract | PDF (10 MB) (book version in preparation)
What information do discrete images give about the continuous world?
Image analysis uses discrete methods to make statements about the continuous real world. Since an infinite amount of information is lost by digitization, it is not obvious whether or when this approach will succeed: Can one prove that certain properties of interest will be preserved, despite the information loss? This habilitation thesis considers theories which explicitly connect continuous and discrete models, such as Shannon's famous sampling theorem and a recently discovered geometric sampling theorem. This analysis reveals important consequences regarding the necessary image quality (e.g. resolution and signal-to-noise ratio) and the resulting limits of observation. These findings are subsequently applied to a large number of low-level image analysis problems (such as edge and corner detection, segmentation, local estimation, and noise normalization), which leads to significantly improved algorithms that perform robustly and accurately in accordance with the predictions of theory.
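For reference, the sampling theorem alluded to above can be stated in its standard textbook form (the classical result, not a quotation from the thesis): a band-limited function $f$ whose Fourier transform vanishes for $|\omega| \ge \pi/h$ is completely determined by its samples $f(nh)$ and can be reconstructed exactly as
\[
f(x) = \sum_{n=-\infty}^{\infty} f(nh)\,\operatorname{sinc}\!\left(\frac{x-nh}{h}\right),
\qquad \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}.
\]
The geometric sampling theorem mentioned above plays the analogous role for shapes instead of band-limited signals.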
U. Köthe:
Generische Programmierung für die Bildverarbeitung
Dissertation, Fachbereich Informatik, Universität Hamburg, 274 pages, Hamburg 2000, ISBN: 3-8311-0239-2. (in German)
Abstract | PostScript (8.5 MB) | PDF (12.5 MB)
If you like this work, please consider buying it as a book (Libri Book on Demand, EUR 25.05, also available at Amazon.de).
The problem of developing flexible image processing software has proven to be extraordinarily difficult, in particular because flexibility is often bought at the price of a significant loss in execution speed. Given the large data volumes that typically arise in image processing, this is not acceptable. Generic programming, which has become a core component of the C++ programming language, offers novel ways to increase the flexibility of software without sacrificing speed. This book is the first to apply these new methods systematically to image processing. It describes a comprehensive, consistent system of generic algorithms and objects for a wide variety of problems, ranging from fundamental image data structures to complex, graph-based segmentation methods.
Most of the concepts presented here are realized in the freely available image processing library VIGRA and can thus be tested immediately by the reader and used in the reader's own projects.
Image Analysis and Segmentation
M. Hanselmann, U. Köthe, B.Y. Renard, M. Kirchner, R.M.A. Heeren, F.A. Hamprecht:
Multivariate Watershed Segmentation of Compositional Data,
in: S. Brlek, C. Reutenauer, X. Provençal (Eds.): Discrete Geometry for Computer Imagery, Proc. DGCI 2009, Lecture Notes in Computer Science 5810, pp. 180-192, Berlin: Springer, 2009. (note: this article is © Springer-Verlag)
Abstract | PDF
Watershed segmentation of spectral images is typically achieved by first transforming the high-dimensional input data into a scalar boundary indicator map which is used to derive the watersheds. We propose to combine a Random Forest classifier with the watershed transform and introduce three novel methods to obtain scalar boundary indicator maps from class probability maps. We further introduce the multivariate watershed as a generalization of the classic watershed approach.
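To make the pipeline concrete, here is a minimal sketch of the step that turns a class probability map into a scalar boundary indicator; it uses the gradient magnitude of a single class probability as a baseline indicator (illustrative only, and much simpler than the three methods proposed in the paper), after which the watershed transform would be applied:

    #include <cmath>
    #include <vector>

    // Baseline boundary indicator: gradient magnitude of one class
    // probability map, computed with central differences. Stand-in for
    // the paper's more elaborate indicators.
    std::vector<float> boundaryIndicator(const std::vector<float>& prob,
                                         int w, int h)
    {
        std::vector<float> out(w * h, 0.0f);
        for (int y = 1; y + 1 < h; ++y)
            for (int x = 1; x + 1 < w; ++x)
            {
                float gx = 0.5f * (prob[y * w + x + 1] - prob[y * w + x - 1]);
                float gy = 0.5f * (prob[(y + 1) * w + x] - prob[(y - 1) * w + x]);
                out[y * w + x] = std::sqrt(gx * gx + gy * gy);
            }
        return out;
    }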
H. Meine, P. Stelldinger, U. Köthe:
Pixel Approximation Errors in Common Watershed Algorithms,
in: S. Brlek, C. Reutenauer, X. Provençal (Eds.): Discrete Geometry for Computer Imagery, Proc. DGCI 2009, Lecture Notes in Computer Science 5810, pp. 193-202, Berlin: Springer, 2009. (note: this article is © Springer-Verlag)
Abstract | PDF
The exact, subpixel watershed algorithm delivers very accurate watershed boundaries based on a spline interpolation, but is slow and only works in 2D. On the other hand, there are very fast pixel watershed algorithms, but they produce errors not only in certain exotic cases, but also in real-world images and even in the simplest scenarios. In this work, we closely examine the source of these errors and propose a new algorithm that is fast, approximates the exact watersheds (with pixel resolution), and can be extended to 3D.
C. Bähnisch, P. Stelldinger, U. Köthe:
Fast and Accurate 3D Edge Detection for Surface Reconstruction,
in: J. Denzler, G. Notni, H. Süße (Eds.): Pattern Recognition, Proc. DAGM 2009, Lecture Notes in Computer Science 5748 , pp. 111-120, Berlin: Springer, 2009. (note: this article is © Springer-Verlag)
Abstract | PDF
Although edge detection is a well investigated topic, 3D edge detectors mostly lack either accuracy or speed. We show how to build a highly accurate subvoxel edge detector which is fast enough for practical applications. In contrast to other approaches, we use a spline interpolation in order to obtain an efficient approximation of the theoretically ideal sinc interpolator. We give theoretical bounds for the accuracy and show experimentally that our approach reaches these bounds, while the often-used subpixel-accurate parabola fit leads to much higher edge displacements.
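For comparison, the subpixel-accurate parabola fit mentioned in the last sentence fits a quadratic through three neighboring samples of the gradient magnitude and returns the vertex position; this standard construction (shown only to make the baseline explicit) reads:

    #include <cassert>

    // Sub-pixel offset of an extremum from three equidistant samples
    // g(-1), g(0), g(+1): abscissa of the vertex of the interpolating
    // parabola, relative to the center sample.
    double parabolaSubpixelOffset(double gm1, double g0, double gp1)
    {
        double denom = gm1 - 2.0 * g0 + gp1;   // second derivative estimate
        assert(denom != 0.0);
        return 0.5 * (gm1 - gp1) / denom;
    }

The detector proposed in the paper replaces this local quadratic model by a spline interpolation of the data, which approximates the ideal sinc interpolator far better.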
B. Andres, U. Köthe, A. Bonea, B. Nadler, F.A. Hamprecht:
Quantitative Assessment of Image Segmentation Quality by Random Walk Relaxation Times,
in: J. Denzler, G. Notni, H. Süße (Eds.): Pattern Recognition, Proc. DAGM 2009, Lecture Notes in Computer Science 5748 , pp. 502-511, Berlin: Springer, 2009. (note: this article is © Springer-Verlag)
Abstract | PDF
The purpose of image segmentation is to partition the pixel grid of an image into connected components, termed segments, such that (i) each segment is homogeneous and (ii) for any pair of adjacent segments, their union is not homogeneous (if it were homogeneous, the segments should be merged). We propose a rigorous definition of segment homogeneity which is scale-free and adaptive to the geometry of segments. We motivate this definition using random walk theory and show how segment homogeneity facilitates the quantification of violations of conditions (i) and (ii), which are referred to as under-segmentation and over-segmentation, respectively. We describe the theoretical foundations of our approach and present a proof of concept on a few natural images.
M. Hanselmann, U. Köthe, M. Kirchner, B.Y. Renard, E.R. Amstalden, K. Glunde, R.M.A. Heeren, F.A. Hamprecht:
Towards Digital Staining using Imaging Mass Spectrometry and Random Forests,
Journal of Proteome Research, 8(7):3558-3567, 2009
Abstract | PDF
We show on imaging mass spectrometry (IMS) data that the Random Forest classifier can be used for automated tissue classification and that it results in predictions with high sensitivities and positive predictive values, even when inter-sample variability is present in the data. We further demonstrate how Markov Random Fields and vector-valued median filtering can be applied to reduce noise effects and thus further improve the classification results in a post-hoc smoothing step. Our study gives clear evidence that digital staining by means of IMS constitutes a promising complement to chemical staining techniques.
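As an illustration of the vector-valued median filtering mentioned above, the following sketch computes the vector median of one filter window in the classical sense of Astola et al. (the spectrum minimizing the summed Euclidean distances to all others); this is the textbook formulation, not the paper's exact implementation:

    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Index of the vector median of a window of equally long spectra.
    std::size_t vectorMedianIndex(const std::vector<std::vector<double>>& window)
    {
        std::size_t best = 0;
        double bestSum = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < window.size(); ++i)
        {
            double sum = 0.0;
            for (std::size_t j = 0; j < window.size(); ++j)
            {
                double d2 = 0.0;
                for (std::size_t k = 0; k < window[i].size(); ++k)
                {
                    double diff = window[i][k] - window[j][k];
                    d2 += diff * diff;
                }
                sum += std::sqrt(d2);
            }
            if (sum < bestSum) { bestSum = sum; best = i; }
        }
        return best;
    }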
M. Frank, M. Plaue, H. Rapp, U. Köthe, B. Jähne, F.A. Hamprecht:
Theoretical and Experimental Error Analysis of Continuous-Wave Time-Of-Flight Range Cameras,
Optical Engineering, 48(1):013602, 2009
Abstract | PDF
This paper offers a formal investigation of the measurement principle of time-of-flight (TOF) 3D cameras based on the correlation of amplitude-modulated continuous-wave signals. These sensors can provide both depth maps and IR intensity pictures simultaneously and in real time. We examine the theory of the data acquisition in detail. The variance of the range measurements is derived in a concise way, and we show that the computed range follows an offset normal distribution. The impact of quantization on that distribution is discussed. All theoretically investigated errors, such as the behavior of the variance, depth bias, saturation, and quantization effects, are supported by experimental results.
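The measurement principle analysed here is the standard four-phase demodulation scheme; the following sketch shows the textbook computation of range and amplitude from the four correlation samples (standard formulas, not code from the paper):

    #include <cmath>

    struct RangeSample { double range, amplitude; };

    // Four correlation samples A0..A3, taken at phase offsets
    // 0, pi/2, pi, 3*pi/2 of the modulation signal.
    RangeSample demodulate(double A0, double A1, double A2, double A3,
                           double modFrequencyHz)
    {
        const double pi = 3.141592653589793;
        const double c  = 299792458.0;                // speed of light [m/s]
        double phase = std::atan2(A3 - A1, A0 - A2);  // in (-pi, pi]
        if (phase < 0.0) phase += 2.0 * pi;           // map to [0, 2*pi)
        double range = c * phase / (4.0 * pi * modFrequencyHz);
        double amplitude = 0.5 * std::hypot(A3 - A1, A0 - A2);
        return { range, amplitude };
    }

The paper derives how noise in the samples A0..A3 propagates into the variance of the computed range.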
H. Meine, U. Köthe, P. Stelldinger:
A Topological Sampling Theorem for Robust Boundary Reconstruction and Image Segmentation,
Discrete Applied Mathematics (DGCI Special Issue), 157(3):524-541, 2009.
Abstract | PDF
Existing theories on shape digitization impose strong constraints on admissible shapes, and require error-free data. Consequently, these theories are not applicable to most real-world situations. In this paper, we propose a new approach that overcomes many of these limitations. It assumes that segmentation algorithms represent the detected boundary by a set of points whose deviation from the true contours is bounded. Given these error bounds, we reconstruct boundary connectivity by means of Delaunay triangulation and alpha-shapes. We prove that this procedure is guaranteed to result in topologically correct image segmentations under certain realistic conditions. Experiments on real and synthetic images demonstrate the good performance of the new method and confirm the predictions of our theory.
B. Andres, C. Kondermann, D. Kondermann, U. Köthe, F.A. Hamprecht, C.S. Garbe:
On Errors-In-Variables Regression with Arbitrary Covariance and its Application to Optical Flow Estimation,
in: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008, pages 1-6, 2008.
Abstract | BibTeX | PDF
Linear inverse problems in computer vision, including motion estimation, shape fitting, and image reconstruction, give rise to parameter estimation problems with highly correlated errors in variables. Established total least squares methods estimate the most likely corrections A' and b' to a given data matrix [A, b] perturbed by additive Gaussian noise, such that there exists a solution y with [A + A', b + b']y = 0. In practice, regression imposes a more restrictive constraint, namely the existence of a solution x with [A + A']x = b + b'. In addition, more complicated correlations arise canonically from the use of linear filters. We therefore propose a maximum likelihood estimator for regression in the general case of arbitrary positive definite covariance matrices. We show that A', b', and x can be found simultaneously by the unconstrained minimization of a multivariate polynomial, which can, in principle, be carried out by means of a Gröbner basis. Results for plane fitting and optical flow computation indicate the superiority of the proposed method.
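For contrast, the classical total least squares estimate that the paper generalizes can be computed from the smallest right singular vector of the augmented data matrix. A minimal sketch using the Eigen library, valid only for the white-noise special case (arbitrary covariances are exactly what it cannot handle):

    #include <Eigen/Dense>

    // Classic TLS: minimal perturbation of [A, b] such that
    // [A + A', b + b'] has a nontrivial null vector; the solution is
    // recovered from the smallest right singular vector.
    Eigen::VectorXd totalLeastSquares(const Eigen::MatrixXd& A,
                                      const Eigen::VectorXd& b)
    {
        const int n = static_cast<int>(A.cols());
        Eigen::MatrixXd Ab(A.rows(), n + 1);
        Ab << A, b;                                // augmented data matrix
        Eigen::JacobiSVD<Eigen::MatrixXd> svd(Ab, Eigen::ComputeFullV);
        Eigen::VectorXd v = svd.matrixV().col(n);  // smallest singular vector
        return -v.head(n) / v(n);                  // x = -v_(1..n) / v_(n+1)
    }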
B. Andres, U. Köthe, M. Helmstaedter, W. Denk, F.A. Hamprecht:
Segmentation of SBFSEM Volume Data of Neural Tissue by Hierarchical Classification,
in: G. Rigoll (Ed.): Pattern Recognition, Proc. DAGM 2008, Lecture Notes in Computer Science 5096 , pp. 142-152, Berlin: Springer, 2008. (note: this article is © Springer-Verlag)
Abstract | BibTeX | PDF
Received a Best Paper Award from the German Association for Pattern Recognition (DAGM)
Three-dimensional electron-microscopic image stacks with almost isotropic resolution make it possible, for the first time, to determine the complete connection matrix of parts of the brain. In spite of major advances in staining, correct segmentation of these stacks remains challenging, because even a few local mistakes can lead to severe global errors. We propose a hierarchical segmentation procedure based on statistical learning and topology-preserving grouping. Edge probability maps are computed by a random forest classifier (trained on hand-labeled data) and partitioned into supervoxels by the watershed transform. Over-segmentation is then resolved by another random forest. Careful validation shows that the results of our algorithm are close to human labelings.
U. Köthe:
What Can We Learn from Discrete Images about the Continuous World?,
in: D. Coeurjolly, I. Sivignon, L. Tougne, F. Dupont (Eds.): Discrete Geometry for Computer Imagery, Proc. DGCI 2008, Lecture Notes in Computer Science 4992, pp. 4-19, Berlin: Springer, 2008. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Image analysis attempts to perceive properties of the continuous real world by means of digital algorithms. Since discretization discards an infinite amount of information, it is difficult to predict if and when digital methods will produce reliable results. This paper reviews theories which establish explicit connections between the continuous and digital domains (such as Shannon's sampling theorem and a recent geometric sampling theorem) and describes some of their consequences for image analysis. Although many problems are still open, we can already conclude that adherence to these theories leads to significantly more stable and accurate algorithms.
U. Köthe, P. Stelldinger, H. Meine:
Provably Correct Edgel Linking and Subpixel Boundary Reconstruction,
in: K. Franke, K.-R. Müller, B. Nikolay, R. Schäfer (Eds.): Pattern Recognition, Proc. DAGM 2006, Lecture Notes in Computer Science 4174, pp. 81-90, Berlin: Springer, 2006. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Existing methods for segmentation by edgel linking are based on heuristics and give no guarantee for a topologically correct result. In this paper, we propose an edgel linking algorithm based on a new sampling theorem for shape digitization, which guarantees a topologically correct reconstruction of regions and boundaries if the edgels approximate true object edges with a known maximal error. Experiments on real and generated images demonstrate the good performance of the new method and confirm the predictions of our theory.
P. Stelldinger, U. Köthe, H. Meine:
Topologically Correct Image Segmentation Using Alpha Shapes,
in: A. Kuba, L. Nyul, K. Palagyi (Eds.): Discrete Geometry for Computer Imagery, Proc. DGCI 2006, Lecture Notes in Computer Science 4245, pp. 542-554, Berlin: Springer, 2006. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Existing theories on shape digitization impose strong constraints on feasible shapes and require error-free measurements. We use Delaunay triangulation and alpha-shapes to prove that topologically correct segmentations can be obtained under much more realistic conditions. Our key assumption is that sampling points represent object boundaries with a certain maximum error. Experiments on real and generated images demonstrate the good performance and correctness of the new method.
U. Köthe, H. Meine:
Merkmalsextraktion für eine automatische Bildsuche,
in: G. Stanke, A. Bienert, J. Hemsley, V. Cappellini (Eds.): Konferenzband EVA 2006 Berlin, Elektronische Bildverarbeitung und Kunst, Kultur, Historie, pp 47-53, ISBN 3-9809212-7-1, Berlin, 2006 (in German)
Abstract | PostScript | PDF
Many applications of content-based image retrieval require very accurate local image features. We describe how the measurement accuracy of geometrical and topological features can be optimized by means of appropriate image resolution, interpolation, and subpixel-accurate edge detection.
H. Meine, U. Köthe:
A New Sub-pixel Map for Image Analysis,
in: R. Reulke, U. Eckhardt, B. Flach, U. Knauer, K. Polthier (Eds.): Combinatorial Image Analysis, Proc. IWCIA 2006, Lecture Notes in Computer Science 4040, pp. 116-130, Berlin: Springer, 2006. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Planar maps have been proposed as a powerful and easy-to-use representation for various kinds of image analysis results, but so far they are restricted to pixel accuracy. This leads to limitations in the representation of complex structures (such as junctions, triangulations, and skeletons) and discards the sub-pixel information available in gray-value and color images. We extend the planar map formalism to sub-pixel accuracy and introduce various algorithms to create such a map, thereby demonstrating significant gains over the existing approaches.
P. Stelldinger, U. Köthe:
Connectivity preserving digitization of blurred binary images in 2D and 3D,
Computers & Graphics, Volume 30, Issue 1, Pages 70-76 (February 2006) (note: this article is © Elsevier B.V.)
Abstract | official Elsevier page | paper draft in PostScript or PDF format
Connectivity and neighborhood are fundamental topological properties of objects in pictures. Since the input for any image analysis algorithm is a digital image, which need not have the same topological characteristics as the imaged real world, it is important to know which shapes can be digitized without changing such topological properties. Most existing approaches do not take into account the unavoidable blurring in real image acquisition systems, or they use extremely simplified and thus unrealistic models of digitization with blurring. In previous work we showed that certain shapes can be digitized topologically correctly with a square grid when some blurring with an arbitrary non-negative radially symmetric point spread function is involved. Now we extend this result to other common sampling grids in two and even three dimensions, including hexagonal, bcc, and fcc grids.
U. Köthe:
Low-level Feature Detection Using the Boundary Tensor,
in: J. Weickert, H. Hagen (Eds.): Visualization and Processing of Tensor Fields, Series on Mathematics and Visualization, pp. 63-79, Berlin: Springer, 2006 (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Tensors are a useful tool for the detection of low-level features such as edges, lines, corners, and junctions because they can represent feature strength and orientation in a way that is easy to work with. However, traditional approaches to define feature tensors have a number of disadvantages. By means of the first and second order Riesz transforms, we propose a new approach called the boundary tensor. Using quadratic convolution equations, we show that the boundary tensor overcomes some problems of the older tensor definitions. When the Riesz transform is combined with the Laplacian of Gaussian, the boundary tensor can be efficiently computed in the spatial domain. The usefulness of the new method is demonstrated for a number of application examples.
G. Kedenburg, C. Cocosco, U. Köthe, W. Niessen, E. Vonken, M. Viergever:
Automatic cardiac MRI myocardium segmentation using graphcut,
in: J. Reinhardt, J. Pluim (Eds.): Proc. Medical Imaging 2006: Image Processing, SPIE vol. 6144, pp. 85-96, 2006
Abstract | PDF
Segmentation of the left myocardium in four-dimensional (space-time) cardiac MRI data sets is a prerequisite of many diagnostic tasks. We propose a fully automatic method based on global minimization of an energy functional by means of the graphcut algorithm. Starting from automatically obtained segmentations of the left and right ventricles and a cardiac region of interest, a spatial model is constructed using simple and plausible assumptions. This model is used to learn the appearance of different tissue types by nonparametric robust estimation. Our method does not require previously trained shape or appearance models. Processing takes 30-40 s on current hardware. We evaluated our method on 11 clinical cardiac MRI data sets acquired using cine balanced fast field echo. Linear regression of the automatically segmented myocardium volumes against manual segmentations (performed by a radiologist) showed an RMS error of about 12 ml.
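The minimized energy has the standard form amenable to graphcut optimization (generic MRF notation; the paper's concrete terms are not reproduced here):
\[
E(\ell) = \sum_{p} D_p(\ell_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(\ell_p, \ell_q),
\]
where the data term $D_p$ encodes the learned tissue appearance (e.g. a negative log-likelihood) and the pairwise term $V_{p,q}$ penalizes label changes between neighboring voxels.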
H. Meine, U. Köthe:
Image Segmentation with the Exact Watershed Transform,
in: J.J. Villanueva (Ed.): VIIP 2005, Proc. 5th IASTED International Conference on Visualization, Imaging, and Image Processing, pp. 400-405, ACTA Press, 2005. (note: this article is © ACTA Press)
Abstract | PostScript | PDF
Discrete algorithms for low-level boundary detection are geometrically inaccurate and topologically unreliable. Corresponding continuous methods are often more accurate and need fewer or no heuristics. Thus, we transfer discrete boundary indicators into a continuous form by means of differentiable spline interpolation and detect boundaries using the exact watershed transform. We demonstrate that this significantly improves the obtained segmentations.
U. Köthe, M. Felsberg:
Riesz-Transforms Versus Derivatives: On the Relationship Between the Boundary Tensor and the Energy Tensor,
in: R. Kimmel, N. Sochen, J. Weickert (Eds.): Scale Space and PDE Methods in Computer Vision, Proc. Scale-Space 2005, Lecture Notes in Computer Science 3459, pp. 179-191, Berlin: Springer, 2005. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Traditionally, quadrature filters and derivatives have been considered as alternative approaches to low-level image analysis. In this paper we show that there actually exist close connections: We define the quadrature-based boundary tensor and the derivative-based gradient energy tensor which exhibit very similar behavior. We analyse the reason for this and determine how to minimize the difference. These insights lead to a simple and very efficient integrated feature detection algorithm.
M. Felsberg, U. Köthe:
GET: The Connection Between Monogenic Scale-Space and Gaussian Derivatives,
in: R. Kimmel, N. Sochen, J. Weickert (Eds.): Scale Space and PDE Methods in Computer Vision, Proc. Scale-Space 2005, Lecture Notes in Computer Science 3459, pp. 192-203, Berlin: Springer, 2005. (note: this article is © Springer-Verlag)
Abstract | PDF
In this paper we propose a new operator which combines advantages of monogenic scale-space and Gaussian scale-space, of the monogenic signal and the structure tensor. The gradient energy tensor (GET) defined in this paper is based on Gaussian derivatives up to third order using different scales. These filters are commonly available, separable, and have optimal uncertainty. The response of this new operator can be used like the monogenic signal to estimate the local amplitude, the local phase, and the local orientation of an image, but it also allows one to measure the coherence of image regions, as in the case of the structure tensor. Both theoretically and in experiments the new approach compares favourably with existing methods.
H. Meine, U. Köthe:
The GeoMap: A Unified Representation for Topology and Geometry,
in: L. Brun, M. Vento (Eds.): Graph-Based Representations in Pattern Recognition, Proc. GbR 2005, Lecture Notes in Computer Science 3434, pp. 132-141, Berlin: Springer, 2005. (note: this article is © Springer-Verlag)
Abstract | PDF
We propose the GeoMap abstract data type as a unified representation for image segmentation purposes. It manages both topology (based on XPMaps) and pixel-based information, and its interface is carefully designed to support a variety of automatic and interactive segmentation methods. We have successfully used the abstract concept of a GeoMap as a foundation for the implementation of well-known segmentation methods.
P. Stelldinger, U. Köthe:
Shape Preserving Digitization of Binary Images After Blurring,
in: E. Andres, G. Damiand, P. Lienhardt (Eds.): Discrete Geometry for Computer Imagery, Proc. DGCI 2005, Lecture Notes in Computer Science 3429, pp. 383-391, Berlin: Springer, 2005. (note: this article is © Springer-Verlag)
Abstract | PDF
Topology is a fundamental property of shapes in pictures. Since the input for any image analysis algorithm is a digital image, which need not have the same topological characteristics as the imaged real world, it is important to know which shapes can be digitized without topological changes. Most existing approaches do not take into account the unavoidable blurring in real image acquisition systems, or they use extremely simplified and thus unrealistic models of digitization with blurring. For the most commonly used square grids, we show which binary images can be digitized topologically correctly after blurring with an arbitrary non-negative radially symmetric point spread function, which is an important step towards real digitization.
P. Stelldinger, U. Köthe:
Towards a general sampling theory for shape preservation,
Image and Vision Computing, Special Issue on Discrete Geometry for Computer Vision, Volume 23, Issue 2, Pages 237-248, 1 February 2005. (note: this article is © Elsevier B.V.)
Abstract | official Elsevier page | paper draft in PostScript format
Computerized image analysis makes statements about the continuous world by looking at a discrete representation. Therefore, it is important to know precisely which information is preserved during digitization. We analyze this question in the context of shape recognition. Existing results in this area are based on very restricted models and thus not applicable to real imaging situations. We present generalizations in several directions: first, we introduce a new shape similarity measure that approximates human perception better. Second, we prove a geometric sampling theorem for arbitrary dimensional spaces. Third, we extend our sampling theorem to two-dimensional images that are subjected to blurring by a disk point spread function. Our findings are steps towards a general sampling theory for shapes that shall ultimately describe the behavior of real optical systems. This article brings together and extends the conference papers "Shape Preserving Digitization of Ideal and Blurred Binary Images" and "Shape Preservation During Digitization: Tight Bounds Based on the Morphing Distance".
U. Köthe:
Boundary Characterization within the Wedge-Channel Representation,
in: B. Jähne, R. Mester, E. Barth, H. Scharr (Eds.): Complex Motion, Proc. of 1st International Workshop on Complex Motion, Günzburg 2004, Lecture Notes in Computer Science 3417, pp. 42-53, Berlin: Springer, 2004. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Junctions play an important role in motion analysis. Approaches based on the structure tensor have become the standard for junction detection. However, the structure tensor is not able to classify junctions into different types (L, T, Y, X, etc.). We propose to solve this problem by the wedge channel representation. It is based on the same computational steps as used for the (anisotropic) structure tensor, but stores results in channel vectors rather than tensors. Due to one-sided channel smoothing, these channel vectors represent not only edge orientation (as existing channel approaches do) but edge direction. Thus junctions can not only be detected, but also fully characterized.
U. Köthe:
Accurate and Efficient Approximation of the Continuous Gaussian Scale-Space,
in: C.E. Rasmussen, H. Bülthoff, M. Giese, B. Schölkopf (Eds.): Pattern Recognition, Proc. of 26th DAGM Symposium, Tübingen 2004, Lecture Notes in Computer Science 3175, pp. 350-358, Berlin: Springer, 2004. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
The Gaussian scale-space is a standard tool in image analysis. While continuous in theory, it is generally realized with fixed regular grids in practice. This prevents the use of algorithms which require continuous and differentiable data and adaptive step size control, such as numerical path following. We propose an efficient continuous approximation of the Gaussian scale-space that removes this restriction and opens up new ways to subpixel feature detection and scale adaptation.
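The key ingredient of such a continuous approximation is an interpolation scheme whose values and derivatives exist at every real-valued position. A minimal 1D sketch of the idea based on a cubic B-spline expansion (the coefficient prefilter and the derivative evaluation are omitted; the paper works in 2D plus scale):

    #include <cmath>
    #include <vector>

    // Centered cubic B-spline kernel.
    double b3(double t)
    {
        t = std::fabs(t);
        if (t < 1.0) return (4.0 - 6.0 * t * t + 3.0 * t * t * t) / 6.0;
        if (t < 2.0) { double u = 2.0 - t; return u * u * u / 6.0; }
        return 0.0;
    }

    // Continuous signal value at an arbitrary position x, given the
    // B-spline coefficients of the (Gaussian-smoothed) samples.
    double splineValue(const std::vector<double>& coeff, double x)
    {
        double sum = 0.0;
        long k0 = static_cast<long>(std::floor(x)) - 1;
        for (long k = k0; k < k0 + 4; ++k)
            if (k >= 0 && k < static_cast<long>(coeff.size()))
                sum += coeff[k] * b3(x - k);
        return sum;
    }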
H. Meine, U. Köthe, H.S. Stiehl:
Fast and Accurate Interactive Image Segmentation in the GeoMap Framework,
in: T. Tolxdorff, J. Braun, H. Handels, A. Horsch, H.-P. Meinzer (Eds.): Proc. Bildverarbeitung für die Medizin 2004, pp. 60-64, Berlin: Springer, 2004. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Although many interactive segmentation methods exist, none can be considered a silver bullet for all clinical tasks. Moreover, incompatible data representations prevent multiple algorithms from being combined as desired. We propose the GeoMap as a unified representation for segmentation results and illustrate how it facilitates the design of an integrated framework for interactive medical image analysis. Results show the high flexibility and performance of the new framework.
U. Köthe:
Integrated Edge and Junction Detection with the Boundary Tensor,
in: ICCV 03, Proc. of 9th Intl. Conf. on Computer Vision, Nice 2003, vol. 1, pp. 424-431, Los Alamitos: IEEE Computer Society, 2003. (note: this article is © IEEE)
Abstract | PostScript | PDF
The boundaries of image regions necessarily consist of edges (in particular, step and roof edges), corners, and junctions. Currently, different algorithms are used to detect each boundary type separately, but the integration of the results into a single boundary representation is difficult. Therefore, a method for the simultaneous detection of all boundary types is needed. We propose to combine responses of suitable polar separable filters into what we will call the boundary tensor. The trace of this tensor is a measure of boundary strength, while the small eigenvalue and its difference from the large one represent corner/junction and edge strengths, respectively. We prove that the edge strength measure behaves like a rotationally invariant quadrature filter. A number of examples demonstrate the properties of the new method and illustrate its application to image segmentation.
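The eigen-analysis described above is cheap for a 2x2 symmetric tensor and can be written in closed form; a small sketch with illustrative naming (not code from the paper):

    #include <cmath>

    struct BoundaryStrength { double total, edge, junction; };

    // Decompose a symmetric tensor [[a, b], [b, c]] into the measures
    // named in the abstract: trace (total boundary strength), eigenvalue
    // difference (edge strength), small eigenvalue (corner/junction).
    BoundaryStrength analyzeTensor(double a, double b, double c)
    {
        double trace = a + c;
        double root  = std::sqrt((a - c) * (a - c) + 4.0 * b * b);
        double lambda1 = 0.5 * (trace + root);   // large eigenvalue
        double lambda2 = 0.5 * (trace - root);   // small eigenvalue
        return { trace, lambda1 - lambda2, lambda2 };
    }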
U. Köthe:
Edge and Junction Detection with an Improved Structure Tensor,
in: B. Michaelis, G. Krell (Eds.): Pattern Recognition, Proc. of 25th DAGM Symposium, Magdeburg 2003, Lecture Notes in Computer Science 2781, pp. 25-32, Berlin: Springer, 2003. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Awarded the main prize of the German Pattern Recognition Society (DAGM) 2003
We describe three modifications of the structure tensor approach to low-level feature extraction. We first show that the structure tensor must be represented at a higher resolution than the original image. Second, we propose a nonlinear filter for structure tensor computation that avoids undesirable blurring. Third, we introduce a method to simultaneously extract edge and junction information. Examples demonstrate significant improvements in the quality of the extracted features.
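For orientation, here is the plain linear structure tensor that serves as the starting point; the paper's modifications (oversampled representation, nonlinear smoothing, integrated edge/junction extraction) are deliberately not reproduced in this sketch:

    #include <vector>

    struct Tensor2 { float a, b, c; };   // [[a, b], [b, c]]

    // Classical linear structure tensor: outer product of the gradient,
    // followed by (here: 3x3 box) smoothing of the tensor channels.
    std::vector<Tensor2> structureTensor(const std::vector<float>& img,
                                         int w, int h)
    {
        std::vector<Tensor2> J(w * h, { 0, 0, 0 });
        for (int y = 1; y + 1 < h; ++y)
            for (int x = 1; x + 1 < w; ++x)
            {
                float gx = 0.5f * (img[y * w + x + 1] - img[y * w + x - 1]);
                float gy = 0.5f * (img[(y + 1) * w + x] - img[(y - 1) * w + x]);
                J[y * w + x] = { gx * gx, gx * gy, gy * gy };
            }
        std::vector<Tensor2> S(J);
        for (int y = 1; y + 1 < h; ++y)
            for (int x = 1; x + 1 < w; ++x)
            {
                Tensor2 t = { 0, 0, 0 };
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                    {
                        const Tensor2& u = J[(y + dy) * w + (x + dx)];
                        t.a += u.a / 9.0f; t.b += u.b / 9.0f; t.c += u.c / 9.0f;
                    }
                S[y * w + x] = t;
            }
        return S;
    }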
U. Köthe, P. Stelldinger:
Shape Preserving Digitization of Ideal and Blurred Binary Images,
in: I. Nyström, G. Sanniti di Baja, S. Svensson (Eds.): Discrete Geometry for Computer Imagery, Proc. of 11th DGCI Conference, Naples 2003, Lecture Notes in Computer Science 2886, pp. 82-91, Berlin: Springer, 2003. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
In order to make image analysis methods more reliable, it is important to analyse to what extent shape information is preserved during image digitization. Most existing approaches to this problem consider topology preservation and are restricted to ideal binary images. We extend these results in two ways. First, we characterize the set of binary images which can be correctly digitized by both regular and irregular sampling grids, such that not only topology is preserved but also the Hausdorff distance between the original image and the reconstruction is bounded. Second, we prove an analogous theorem for gray scale images that arise from blurring of binary images with a certain filter type. These results are steps towards a theory of shape digitization applicable to real optical systems.
P. Stelldinger, U. Köthe:
Shape Preservation During Digitization: Tight Bounds Based on the Morphing Distance,
in: B. Michaelis, G. Krell (Eds.): Pattern Recognition, Proc. of 25th DAGM Symposium, Magdeburg 2003, Lecture Notes in Computer Science 2781, pp. 108-115, Berlin: Springer, 2003. (note: this article is © Springer-Verlag)
(this paper builds on the DGCI paper above)
Abstract | PostScript | PDF
We define strong r-similarity and the morphing distance to bound geometric distortions between shapes of equal topology. We then derive a necessary and sufficient condition for a set and its digitizations to be r-similar, regardless of the sampling grid. We also extend these results to certain gray scale images. Our findings are steps towards a theory of shape digitization for real optical systems. This paper builds on the paper "Shape Preserving Digitization of Ideal and Blurred Binary Images", which should be read first.
U. Köthe:
Deriving Topological Representations from Edge Images,
in: T. Asano, R. Klette, C. Ronse (Eds.): Geometry, Morphology, and Computational Imaging, 11th Intl. Workshop on Theoretical Foundations of Computer Vision, Lecture Notes in Computer Science 2616, pp. 320-334, Berlin: Springer, 2003. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
In order to guarantee consistent descriptions of image structure, it is desirable to base such descriptions on topological principles. Thus, we want to be able to derive topological representations from segmented images. This paper discusses two methods to achieve this goal by means of the recently introduced XPMaps. First, it improves an existing algorithm that derives topological representations from region images and crack edges, and second, it presents a new algorithm that can be applied to standard 8-connected edge images.
U. Köthe:
XPMaps and Topological Segmentation - a Unified Approach to Finite Topologies in the Plane,
in: A. Braquelaire, J.-O. Lachaud, A. Vialard (Eds.): Proc. of 10th International Conference on Discrete Geometry for Computer Imagery (DGCI 2002), Lecture Notes in Computer Science 2301, pp. 22-33, Berlin: Springer, 2002. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Long version with proofs: Technical Report FBI-HH-M-308/01, Department of Informatics, University of Hamburg, December 2001 (PDF)
Finite topological spaces are now widely recognized as a valuable tool for image analysis. However, their practical application is complicated because there are so many different approaches. We show that there are close relationships between those approaches, which motivate the introduction of XPMaps as a concept that subsumes the important characteristics of the other approaches. The notion of topological segmentations then extends this concept to a particular class of labelings of XPMaps. We show that the new notions lead to significant simplifications from both a theoretical and a practical viewpoint.
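One of the formalisms subsumed by XPMaps is the combinatorial map, whose core fits into a few lines (one common convention shown here; the XPMap adds further structure on top of this):

    #include <cstddef>
    #include <vector>

    // Combinatorial map: darts (oriented half-edges) with an involution
    // alpha (opposite dart of the same edge) and a permutation sigma
    // (next dart counterclockwise around the start vertex). Faces are
    // the orbits of phi = sigma o alpha.
    struct CombinatorialMap
    {
        std::vector<std::size_t> alpha;   // alpha[alpha[d]] == d
        std::vector<std::size_t> sigma;   // permutation of darts

        std::size_t phi(std::size_t d) const { return sigma[alpha[d]]; }

        std::vector<std::size_t> faceOrbit(std::size_t d) const
        {
            std::vector<std::size_t> orbit;
            std::size_t e = d;
            do { orbit.push_back(e); e = phi(e); } while (e != d);
            return orbit;
        }
    };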
U. Köthe:
Local Appropriate Scale in Morphological Scale-Space,
in: B. Buxton, R. Cipolla (Eds.): Computer Vision, Proc. of 4th European Conference on Computer Vision, vol. 1, Lecture Notes in Computer Science 1064, pp. 219-228, Berlin: Springer, 1996. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Long version with proofs: Fraunhofer IGD Technical Report 96i001-FEGD, 1996. (PDF)
This paper presents a novel approach to selecting appropriate scales for region detection prior to feature localization. We develop and formalize a number of requirements that should be fulfilled by such an appropriate scale operator and show by theoretical considerations and experiments that a morphological opening-closing scale-space meets these requirements better than Gaussian scale-space. As a prerequisite for appropriate scale measurements, we generalize morphological decomposition methods and introduce a morphological band-pass filter. It decomposes an image into structures of different size and different curvature polarity ("light and dark blobs"). It may thus be seen as a morphological analogy to the important Laplacian of Gaussian operator. The local appropriate scale is then defined as the scale that maximizes the response of the band-pass filter at each point. This operator has a number of interesting properties. Most notably, it gives constant scale values in a region of constant width, and its zero-crossings coincide with local maxima of the gradient magnitude. Some example applications show that the new operator is very useful for tuning subsequent operators towards optimal scales.
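The morphological building blocks involved are easy to state; this 1D sketch provides flat erosions, dilations, openings, and closings, from which the band-pass described above is assembled (the exact band-pass construction is left to the paper):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Flat erosion/dilation with a window of radius r (clamped at borders).
    std::vector<double> erode(const std::vector<double>& f, int r)
    {
        std::vector<double> g(f.size());
        for (std::size_t i = 0; i < f.size(); ++i)
        {
            std::size_t lo = (i < std::size_t(r)) ? 0 : i - r;
            std::size_t hi = std::min(f.size() - 1, i + r);
            g[i] = *std::min_element(f.begin() + lo, f.begin() + hi + 1);
        }
        return g;
    }

    std::vector<double> dilate(const std::vector<double>& f, int r)
    {
        std::vector<double> g(f.size());
        for (std::size_t i = 0; i < f.size(); ++i)
        {
            std::size_t lo = (i < std::size_t(r)) ? 0 : i - r;
            std::size_t hi = std::min(f.size() - 1, i + r);
            g[i] = *std::max_element(f.begin() + lo, f.begin() + hi + 1);
        }
        return g;
    }

    std::vector<double> opening(const std::vector<double>& f, int r)
    { return dilate(erode(f, r), r); }

    std::vector<double> closing(const std::vector<double>& f, int r)
    { return erode(dilate(f, r), r); }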
U. Köthe:
Morphological Appropriate Scale Measurements for Region Segmentation,
in: P. Johansen (ed.): Proc. of Copenhagen WS on Gaussian Scale-Space Theory, U of Copenhagen, Dept. of Computer Science, Technical Report Nr. 96/19, 1996.
Abstract | PostScript | PDF
This paper presents a novel approach to selecting appropriate scales in morphological opening-closing scale-space. It is based on a morphological band-pass filter that decomposes an image into structures of different size and different curvature polarity ("light and dark blobs"). Appropriate scale is defined as the scale that maximizes the response of the band-pass filter. The resulting scale measurements make it possible to automatically select window sizes (scales) for segmentation operators. The application of this idea to region segmentation gives very satisfying results.
U. Köthe:
Parameterfreie Merkmalsextraktion durch automatische Skalenselektion,
in: F. K. List (Hrsg.): Vorträge 16. Wissenschaftlich-Technische Jahrestagung der Deutschen Gesellschaft für Photogrammetrie und Fernerkundung 1996, Publikationen der Deutschen Gesellschaft für Photogrammetrie und Fernerkundung, Band 5, pp. 29-36, 1997. (in German)
Abstract | PostScript | PDF
This article discusses ways of defining parameter-free feature detectors by combining them with a mechanism for selecting optimal scales in a suitable scale-space. For each pixel, the optimal scale is chosen as the one that maximizes a suitable saliency measure. Several examples demonstrate the very interesting properties of this new technique.
U. Köthe:
Inhaltsbasierte Suche in Bilddatenbanken,
Forschungsbericht 95i004-FEGD, Fraunhofer Institute for Computer Graphics Rostock, Joachim-Jungius-Str. 9, 18059 Rostock, Germany, 1995. (in German)
Abstract | PostScript | PDF
This article deals with a new kind of intelligent information system: image databases with a content-based retrieval option. Content-based retrieval is considered a promising approach to finding relevant data in large collections. The fundamental concepts of image databases are described and examined using a model example, a database of eyeglass frames. It turns out that the usefulness of content-based retrieval depends crucially on the retrieval system applying similarity criteria that resemble those of a human. Based on simple experiments, suitable criteria relying on orientation histograms and skeleton lines are identified. The experimental evaluation of the demonstration system built on these criteria shows good agreement between the retrieval results and the user's expectations.
U. Köthe:
Primary Image Segmentation,
in: G. Sagerer, S. Posch, F. Kummert (Hrsg.): Mustererkennung 1995, 17. DAGM-Symposium, pp. 554-561, Berlin: Springer, 1995. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
This paper introduces the notion of primary image segmentation, which serves as a well-defined link between low- and high-level image analysis. A general algorithmic framework based on priority queues is proposed that allows for the integration of a variety of different segmentation algorithms. A seeded region growing approach, along with a number of improved seed selection methods and foveation of critical areas, is chosen to realize this framework. Experimental evaluation shows very good performance of these algorithms on a relatively large number of outdoor photographs without the need to adjust parameters.
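A minimal sketch of priority-queue-driven seeded region growing in the spirit of Adams and Bischof (seed selection and foveation, the paper's actual contributions, are assumed to have produced the initial labels):

    #include <cmath>
    #include <queue>
    #include <vector>

    // Priority-queue seeded region growing: the globally best-fitting
    // unassigned pixel is always assimilated first. Seeds are given as
    // labels > 0; 0 marks unassigned pixels.
    struct Candidate
    {
        float cost; int index, label;
        bool operator<(const Candidate& o) const { return cost > o.cost; }
    };

    void seededRegionGrowing(const std::vector<float>& img,
                             std::vector<int>& labels,
                             int w, int h, int numLabels)
    {
        std::vector<double> sum(numLabels + 1, 0.0);
        std::vector<int>    cnt(numLabels + 1, 0);
        std::priority_queue<Candidate> pq;
        const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };

        auto pushNeighbors = [&](int index, int label)
        {
            int x = index % w, y = index / w;
            for (int k = 0; k < 4; ++k)
            {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                int j = ny * w + nx;
                if (labels[j] != 0) continue;
                float cost = std::fabs(img[j] - float(sum[label] / cnt[label]));
                pq.push({ cost, j, label });
            }
        };

        for (int i = 0; i < w * h; ++i)               // initialize region stats
            if (labels[i] > 0) { sum[labels[i]] += img[i]; ++cnt[labels[i]]; }
        for (int i = 0; i < w * h; ++i)               // enqueue seed boundaries
            if (labels[i] > 0) pushNeighbors(i, labels[i]);

        while (!pq.empty())
        {
            Candidate c = pq.top(); pq.pop();
            if (labels[c.index] != 0) continue;       // assigned meanwhile
            labels[c.index] = c.label;
            sum[c.label] += img[c.index]; ++cnt[c.label];
            pushNeighbors(c.index, c.label);
        }
    }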
Analysis of Mass Spectroscopic Data
M. Kirchner, B.Y. Renard, U. Köthe, D.J. Pappin, F.A. Hamprecht, H. Steen, and J.A.J. Steen:
Computational Protein Profile Similarity Screening for Quantitative Mass Spectrometry Experiments,
Bioinformatics, 2009
Abstract | PDF
Motivation: The qualitative and quantitative characterization of protein abundance profiles over a series of time points or a set of environmental conditions is becoming increasingly important. Using isobaric mass tagging experiments, mass spectrometry-based quantitative proteomics delivers accurate peptide abundance profiles for relative quantitation. Associated data analysis workflows need to provide tailored statistical treatment that (i) takes the correlation structure of the normalized peptide abundance profiles into account and (ii) allows inference of protein-level similarity. We introduce a suitable distance measure for relative abundance profiles, derive a statistical test for equality, and propose a protein-level representation of peptide-level measurements. This yields a workflow that delivers a similarity ranking of protein abundance profiles with respect to a defined reference. All procedures have in common that they operate based on the true correlation structure that underlies the measurements. This optimizes power and delivers more intuitive and efficient results than existing methods that do not take these circumstances into account.
Results: We use protein profile similarity screening to identify candidate proteins whose abundances are post-transcriptionally controlled by the Anaphase Promoting Complex (APC/C), a specific E3 ubiquitin ligase that is a master regulator of the cell cycle. Results are compared with an established protein correlation profiling method. The proposed procedure yields a 50.9-fold enrichment of co-regulated protein candidates and a 2.5-fold improvement over the previous method.
Availability: A MATLAB toolbox is available from http://hci.iwr.uni-heidelberg.de/mip/proteomics.
Contact: hanno.steen@childrens.harvard.edu
M. Hanselmann, M. Kirchner, B.Y. Renard, A. Kharchenko, L. Klerk, U. Köthe, R. Heeren, F.A. Hamprecht:
Concise representation of MS Images by Probabilistic Latent Semantic Analysis,
Conference of the American Society for Mass Spectrometry (ASMS) 2008.
S. Boppel, B.Y. Renard, M. Kirchner, H. Steen, U. Köthe, F.A. Hamprecht:
Sparse Profile Reconstruction for LC/MS Feature Extraction,
Conference of the American Society for Mass Spectrometry (ASMS) 2008.
X. Lou, M. Kirchner, B.Y. Renard, U. Köthe, H. Steen, M.A. Mayer, F.A. Hamprecht:
Fully Automated HX-MS Data Analysis with Complete Deuteration Distribution Estimation,
Conference of the American Society for Mass Spectrometry (ASMS) 2008.
3-D Reconstruction and Visualization
H. Diener, U. Köthe, B. Ristow, M. Schreyer, U. Stelbe:
ERSO - Acquisition, Reconstruction and Simulation of Real Objects,
in: IECON '98, Proc. of 24th Annual Conf. of the IEEE Industrial Electronics Society, 1998.
Abstract | PostScript | PDF
A basic system for the acquisition, reconstruction, and simulation of real objects (ERSO) is presented. Our approach is to combine the knowledge of different research areas, such as photogrammetry, computer graphics, and computer vision, to develop new techniques for generating three-dimensional models from images. The first two sections give an introduction and an overview of the system architecture. The following sections describe the different parts of the reconstruction process in more detail.
A. Schlempp, U. Köthe:
ViComp - Architektur und Städteplanung mit virtueller Modellierung und Komposition,
in: O. Deussen, P. Lorenz (Hrsg.): Proc. Simulation und Animation '97, Society for Computer Simulation Intl., 1997. (in German)
Abstract | PostScript | PDF
The ViComp (Virtual Composer) system serves the interactive composition of virtual scenes. ViComp is being developed within the joint project "Erweitertes Architektur- und Planungsmodell, EAPM", funded by the Ministry of Economics of Mecklenburg-Vorpommern. The motivation was the incompatibilities that arise when complex architecture and city models are assembled from parts originating from different systems. Besides the very time-consuming and therefore costly modeling of the various objects, image-based reconstruction methods, object databases, and 3D CAD data are employed here, among other things. The incompatibilities mainly concern the data and typically show up as different file formats, scales, or orientations of the coordinate systems. Beyond overcoming these incompatibilities, ViComp also provides a way to arrange the various components within a complete scene. Through direct-manipulative and constraint-based interaction with standard input devices (2D mouse, keyboard) within a single 3D view, an intuitive, fast, and flexible solution to this task is presented.
U. Köthe, W. Luth, K. Otto:
Bildgestützte 3-D Rekonstruktion: Aspekte der Integration von digitaler Bildverarbeitung und 3-D Modellierung,
Workshop ICA '94 - Integration of CA-Techniques in Theory and Practice, Rostock 1994. (in German)
A. Hildebrand, U. Köthe:
SMART: System for Segmentation, Matching, and Reconstruction,
in: SPIE vol. 1943, Conf. State of the Art Mapping, 1993.
Abstract | PostScript | PDF
In many areas of application, such as medicine, robot technology, and photogrammetry, the acquisition of an abstract description of three-dimensional objects is an important task. A common approach to this problem is the use of photogrammetric methods. Although the basic algorithms within this field are well known, many questions are still open. Among other problems, these methods require an exact determination of the camera positions before the photos can be taken. Additionally, the measurement of points in the images and the combination of data from different views often has to be done by hand. Therefore, a lot of skilled work and specialized equipment is necessary during both the acquisition and the evaluation of an image series. Our approach is directed at an integrated system called SMART (Segmentation Matching And ReconsTruction) that is based on general purpose equipment (general purpose workstation, photographic camera or CCD camera). It is designed as a self-calibrating system, i.e. the camera positions, as well as their relative orientations, are derived automatically during the evaluation of the image series. Hence the photos need not be taken by a specially trained person. The whole procedure within the SMART system can be reflected in a vision pipeline (see section ). After the image acquisition, we perform a rough segmentation of the images (1) to find characteristic geometric primitives of the objects and to reject non-characteristic ones, whose occurrence is unavoidable during the acquisition process, and (2) to calculate the camera positions approximately. Based on this information, we measure the exact positions of the characteristic details and their correspondence. This data allows the use of a self-calibrating reconstruction algorithm. The obtained partial reconstructions are then connected into one complete reconstruction. At the same time, the precision of the 3D coordinates is improved by means of a bundle block adjustment. Since the obtained data structure is independent of a specific application area, it can easily be exported to, e.g., rendering or CAD applications.
Software Design for Computer Vision
U. Köthe:
Generic Programming Techniques that Make Planar Cell Complexes Easy to Use,
in: G. Bertrand, A. Imiya, R. Klette (Eds.): Digital and Image Geometry - Advanced Lectures (Proc. of a Dagstuhl Seminar), Lecture Notes in Computer Science 2243, pp. 17-37, Berlin: Springer, 2001. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Cell complexes are potentially very useful in many fields, including image segmentation, numerical analysis, and computer graphics. However, in practice they are not used as widely as they could be. This is partly due to the difficulties in actually implementing algorithms on top of cell complexes. We propose to use generic programming to design cell complex data structures that are easy to use, efficient, and flexible. The implementation of the new design is demonstrated for a number of common cell complex types and an example algorithm.
U. Köthe, K. Weihe:
The STL Model in the Geometric Domain,
in: M. Jazayeri, R. Loos, D. Musser (Eds.): Generic Programming, Proc. of a Dagstuhl Seminar, Lecture Notes in Computer Science 1766, pp. 232-248, Berlin: Springer, 2000. (note: this article is © Springer-Verlag)
Abstract | PostScript | PDF
Computational geometry and its close relative, image analysis, are among the most promising application domains of generic programming. This insight raises the question whether, and to what extent, the concepts of the Standard Template Library (STL) are appropriate for library design in this realm. We will discuss this question in view of selected fundamental algorithms and data structures.
U. Köthe:
STL-Style Generic Programming with Images,
C++ Report Magazine 12(1), pp. 24-30, January 2000.
Abstract | PostScript | PDF
Generic programming has been introduced as a powerful means to implement reusable algorithms that are intended to run in many different application contexts [MS89, MS94]. The Standard Template Library (STL) has impressively demonstrated this for fundamental algorithms such as sorting and searching [MS96] and has been incorporated into the upcoming C++ standard [C++96]. However, the need for algorithm implementations which are independent of a specific application framework is in no way restricted to the fields covered by the STL. In this paper we explore this problem in the field of image processing. Image processing algorithms are applied in application areas as diverse as robotics, medical imaging, and video processing, to mention just a few. In fact, most applications with graphical user interfaces involve some form of image manipulation. Therefore, reusable image processing algorithms would be extremely useful. We introduce a number of abstract generic concepts, in particular 2D iterators and data accessors, that greatly facilitate the design of generic image manipulation algorithms.
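A condensed sketch of the idea (simplified illustration, not the actual interface from the article or from VIGRA): the algorithm is written once against abstract 2D iterators and accessors, and any image type that models these concepts can use it unchanged:

    #include <cstddef>

    template <class T>
    struct BufferIterator2D              // minimal model of a 2D iterator
    {
        T* base; std::ptrdiff_t stride; int x, y;
        T& current() const { return base[y * stride + x]; }
    };

    template <class T>
    struct StandardAccessor              // mediates all reads and writes
    {
        template <class Iter>
        T get(const Iter& i) const { return i.current(); }
        template <class Iter>
        void set(T v, const Iter& i) const { i.current() = v; }
    };

    // The algorithm knows nothing about the images' memory layout.
    template <class SrcIter, class SrcAcc, class DstIter, class DstAcc, class F>
    void transformImage(SrcIter src, SrcAcc sa, DstIter dst, DstAcc da,
                        int width, int height, F f)
    {
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                SrcIter s = src; s.x += x; s.y += y;
                DstIter d = dst; d.x += x; d.y += y;
                da.set(f(sa.get(s)), d);
            }
    }

An 8-bit image in a legacy frame buffer and a float image from another library can then be processed by the same algorithm, each through its own iterator/accessor pair.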
U. Köthe:
Reusable Software in Computer Vision,
in: B. Jähne, H. Haussecker, P. Geissler (Eds.): Handbook of Computer Vision and Applications, Volume 3: Systems and Applications, pp. 103-132, San Diego: Academic Press, 1999. (note: this contribution is © Academic Press)
Abstract | PostScript | PDF
Although highly desirable, software reuse is currently not very common in image processing, largely because it is hard to incorporate potentially reusable components into another environment and/or performance is unacceptably sacrificed in doing so. This paper applies a new approach, generic programming, to solve the reuse problem and formally defines the necessary interfaces (called iterators) to separate image processing algorithms from their underlying data structures. On the basis of these iterators, which can also be implemented for legacy data structures, we build reusable algorithms with only a small penalty in speed. A number of examples show the elegance and efficiency of the new approach.
U. Köthe:
Design Patterns for Independent Building Blocks,
in: Jens Coldewey, Paul Dyson (Eds.): EuroPLoP '98, Proceedings of the 3rd European Conference on Pattern Languages of Programming and Computing 1998, pp. 143-165, Konstanz: Universitätsverlag Konstanz, 1999.
Abstract | PostScript | PDF
The pattern language presented in this paper aims at helping designers to develop reusable building blocks that can be plugged together as needed by the application to be built. The patterns try to identify essential properties of reusable software. In particular, we show that extensive standardization is not a necessary prerequisite of reusability as long as interfaces are designed in a way that supports building block adaptation. We hope that the presented design approach will be a small step towards the long envisioned "software factory".
U. Köthe:
Reusable Implementations are Necessary to Characterize and Compare Vision Algorithms,
DAGM-Workshop on Performance Characteristics and Quality of Computer Vision Algorithms, Braunschweig 1997.
Abstract | PostScript | PDF
This paper argues that the difficulties in implementing computer vision algorithms are a major reason for the lack of research into algorithm comparison. We conclude that it is important to represent algorithms in the form of reusable code. Since current vision systems do not fulfill all requirements we must impose on reusable implementations, we propose to solve the reuse problem by applying generic programming. We define two-dimensional iterators, which mediate between image processing algorithms and their underlying data structures, so that the same algorithm implementation can be applied to any number of different image formats. The elegance and efficiency of this approach is illustrated by a number of useful examples.
U. Köthe:
Requested Interface,
in: Proc. of 2nd European Conference on Pattern Languages of Programming, EuroPLoP '97, Siemens Technical Report 120/SW1/FB, 1997.
Abstract | PostScript | PDF
This paper introduces the Requested Interface pattern which describes ways to implement truly independent software components that can be plugged together as needed in order to make reuse more attractive than reimplementation. It encourages components to delegate subtasks to collaborating servers so that they can be adapted to a new context by simply exchanging those subtask servers. The delegating objects must specify minimal and abstract requested interfaces that describe the subtasks independently of existing server interfaces. An adaptation layer mediates between the requested interface of a client and the offered interface of a server implementing the subtask.
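A minimal sketch of the pattern in C++ (all names illustrative): the client defines the minimal interface it requests for the subtask, and an adaptation layer maps an existing server's offered interface onto it:

    #include <string>

    struct Logger                        // requested interface (client-defined)
    {
        virtual void log(const std::string& message) = 0;
        virtual ~Logger() = default;
    };

    class ThirdPartyLogFacility          // offered interface (existing server)
    {
    public:
        void writeLine(const char* text) { /* ... */ }
    };

    class LogAdapter : public Logger     // adaptation layer
    {
        ThirdPartyLogFacility& server;
    public:
        explicit LogAdapter(ThirdPartyLogFacility& s) : server(s) {}
        void log(const std::string& message) override
        { server.writeLine(message.c_str()); }
    };

    class Component                      // delegates the subtask through the
    {                                    // requested interface only
        Logger& logger;
    public:
        explicit Component(Logger& l) : logger(l) {}
        void doWork() { logger.log("work done"); }
    };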