List of Citations from Science Citation Index for

M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active Contour Models," International Journal of Computer Vision, 1(4): 321-331, 1987.
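For reference, the cited model treats a contour as a parametric curve v(s) = (x(s), y(s)), s in [0, 1], and finds it by minimizing an energy of the following form (standard notation, paraphrased from the usual presentation rather than quoted):

    E_snake = \int_0^1 \Big[ \tfrac{1}{2}\big( \alpha(s)\,|v_s(s)|^2 + \beta(s)\,|v_{ss}(s)|^2 \big)
              + E_image(v(s)) + E_con(v(s)) \Big] \, ds

Here \alpha and \beta weight elasticity and rigidity, E_image is derived from the image (line, edge, and termination terms), and E_con carries externally imposed constraint forces.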

1988: 2  1989: 1  1990: 1  1993: 11  1994: 25  1995: 37  1996: 74  1997: 99  1998: 88  1999: 115  2000: 129  2001: 129  2002: 134  2003: 172  2004: 194  2005: 27  

  Total citations: 1238
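As a quick arithmetic check (a minimal sketch that only assumes the per-year counts listed above), the yearly figures do add up to the stated total:

    counts = {1988: 2, 1989: 1, 1990: 1, 1993: 11, 1994: 25, 1995: 37,
              1996: 74, 1997: 99, 1998: 88, 1999: 115, 2000: 129, 2001: 129,
              2002: 134, 2003: 172, 2004: 194, 2005: 27}
    assert sum(counts.values()) == 1238  # matches "Total citations: 1238"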

As of 11 Mar 2005

 
1988

    1.   TERZOPOULOS, D, WITKIN, A, and KASS, M, "CONSTRAINTS ON DEFORMABLE MODELS - RECOVERING 3D SHAPE AND NONRIGID MOTION," ARTIFICIAL INTELLIGENCE, vol. 36, pp. 91-123, 1988.

    2.   TERZOPOULOS, D, and WITKIN, A, "PHYSICALLY BASED MODELS WITH RIGID AND DEFORMABLE COMPONENTS," IEEE COMPUTER GRAPHICS AND APPLICATIONS, vol. 8, pp. 41-51, 1988.

 
1989

    3.   BOOKSTEIN, FL, "PRINCIPAL WARPS - THIN-PLATE SPLINES AND THE DECOMPOSITION OF DEFORMATIONS," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 11, pp. 567-585, 1989.
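For reference, the construction in this paper is the standard thin-plate spline interpolant; in the usual notation (paraphrased, not quoted from the article), each coordinate of the warp takes the form

    f(x, y) = a_1 + a_x x + a_y y + \sum_{i=1}^{n} w_i \, U(\| P_i - (x, y) \|),
    \qquad U(r) = r^2 \log r^2,

i.e. an affine part plus a weighted sum of the biharmonic fundamental solution centred at the landmarks P_i, with the weights chosen to interpolate the landmark displacements while minimizing the bending energy \iint (f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2) \, dx \, dy.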

 
1990

    4.   VANCLEYNENBREUGEL, J, FIERENS, F, SUETENS, P, and OOSTERLINCK, A, "DELINEATING ROAD STRUCTURES ON SATELLITE IMAGERY BY A GIS-GUIDED TECHNIQUE," PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, vol. 56, pp. 893-898, 1990.

 
1993

    5.   SHUFELT, JA, and MCKEOWN, DM, "FUSION OF MONOCULAR CUES TO DETECT MAN-MADE STRUCTURES IN AERIAL IMAGERY," CVGIP-IMAGE UNDERSTANDING, vol. 57, pp. 307-330, 1993.

    6.   TERZOPOULOS, D, and WATERS, K, "ANALYSIS AND SYNTHESIS OF FACIAL IMAGE SEQUENCES USING PHYSICAL AND ANATOMICAL MODELS," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 15, pp. 569-579, 1993.

Abstract:   We present a new approach to the analysis of dynamic facial images for the purposes of estimating and resynthesizing dynamic facial expressions. The approach exploits a sophisticated generative model of the human face originally developed for realistic facial animation. The face model, which may be simulated and rendered at interactive rates on a graphics workstation, incorporates a physics-based synthetic facial tissue and a set of anatomically motivated facial muscle actuators. We consider the estimation of dynamic facial muscle contractions from video sequences of expressive human faces. We develop an estimation technique that uses deformable contour models (snakes) to track the nonrigid motions of facial features in video images. The technique estimates muscle actuator controls with sufficient accuracy to permit the face model to resynthesize transient expressions.

    7.   FUKUHARA, T, and MURAKAMI, T, "3-D MOTION ESTIMATION OF HUMAN HEAD FOR MODEL-BASED IMAGE-CODING," IEE PROCEEDINGS-I COMMUNICATIONS SPEECH AND VISION, vol. 140, pp. 26-35, 1993.

Abstract:   Model-based image coding applied to interpersonal communication achieves very low bit-rate image transmission. To accomplish it, accurate three-dimensional (3-D) motion estimation of a speaker is necessary. A new method of 3-D motion estimation is presented, consisting of two steps. In the first, facial contours and feature points of a speaker are extracted using filtering and Snake algorithms. Five feature points on a speaker's facial image are tracked between consecutive picture frames, which gives 2-D motion vectors of the feature points. Then, in the second step, the 3-D motion of a speaker's head is estimated using a three-layered neural network model, after training with many possible motion patterns of the human head using an existing 3-D general shape model. Experimental results show that our method not only achieves good results but is also more robust than existing methods, even when the motion of an object is rather large or complicated. Accurately estimated 3-D motion parameters can realise image transmission at a very low bit rate.

    8.   WHITTEN, G, "SCALE-SPACE TRACKING AND DEFORMABLE SHEET MODELS FOR COMPUTATIONAL VISION," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 15, pp. 697-706, 1993.

Abstract:   Many problems in computational vision (including stereo correspondence, motion analysis and surface reconstruction) can be solved effectively using a constrained optimization approach, where smoothness is the common constraint. Moreover, these problems can be cast in a variational form that minimizes an energy functional. Unfortunately, standard optimization techniques tend to find only local energy minima. Coarse to fine scale space tracking (where energy minima at reduced resolution are found and successively tracked to higher resolution) has been demonstrated to find solutions of practical value. For smoothness-constrained optimization problems, we show that scale space tracking can be implicitly implemented by appropriately adjusting the smoothness constraint. A useful physical model for controlled smoothness (deformable sheets) provides a natural framework for scale space tracking and addressing many vision problems that can be solved by appealing to a smoothness constraint. Deformable sheets are characterized by a global energy functional, and the smoothness constraint is represented by a linear internal energy term. In analogy to physical sheets, the model sheets are deformed by problem specific external forces and, in turn, impose smoothness on the applied forces. We have related deformable sheet smoothness properties to Gaussian blurring (the common expression of scale) and used this relationship to unify the concepts of scale and smoothness. In our formulation, the smoothness/scale state is controlled by a single parameter in the deformable sheet model. This single parameter control of scale makes it possible to perform scale space tracking by solving the differential equation that describes the trajectory of energy minima through scale space. Further, it permits adaptive scale step size selection based on the local properties of scale space, which allows for much larger steps than would be possible with the conservative step size required by nonadaptive techniques. We show that this process is characterized by a sparse linear system and prove that the associated matrix is positive definite and, consequently, nonsingular. Our analysis also provides for the determination of scale-dependent parameters, which is useful for efficient multiresolution processing. We have applied the deformable sheet model described to different problems in computational vision using real imagery with encouraging results, which are presented here.
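To make ''a linear internal energy term plus problem-specific external forces'' concrete, a generic controlled-smoothness functional of this family looks like the following (a sketch only, not the paper's exact sheet energy):

    E(u) = \iint \Big[ \big( u(x, y) - d(x, y) \big)^2 + \lambda \big( u_x^2 + u_y^2 \big) \Big] \, dx \, dy

where d is the data supplied by the external forces and the single parameter \lambda controls the smoothness, and hence the effective scale, of the solution u.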

    9.   HOGG, DC, "SHAPE IN MACHINE VISION," IMAGE AND VISION COMPUTING, vol. 11, pp. 309-316, 1993.

Abstract:   The representation of shape in machine vision is reviewed with emphasis on the most common types of representation and recent developments. Both planar shape and solid shape are examined with connections and generalizations drawn wherever possible. Particular emphasis is placed on the importance of invariant descriptions and on the representation of shape classes.

   10.   GAUCH, JM, and PIZER, SM, "THE INTENSITY AXIS OF SYMMETRY AND ITS APPLICATION TO IMAGE SEGMENTATION," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 15, pp. 753-770, 1993.

Abstract:   In this paper, we present a new method for describing the shape of structures in grey-scale images, which is known as the intensity axis of symmetry (IAS). We describe the spatial and intensity variations of the image simultaneously rather than by the usual two-step process of 1) using intensity properties of the image to segment an image into regions and 2) describing the spatial shape of these regions. The result is an image shape description that is useful for a number of computer vision applications. Our method for computing this image shape description relies on minimizing an active surface functional that provides coherence in both the spatial and intensity dimensions while deforming into an axis of symmetry. Shape-based image segmentation is possible by identifying image regions associated with individual components of the IAS. The resulting image regions have geometric coherence and correspond well to visually meaningful objects in medical images.

   11.   TSAI, CT, SUN, YN, CHUNG, PC, and LEE, JS, "ENDOCARDIAL BOUNDARY DETECTION USING A NEURAL-NETWORK," PATTERN RECOGNITION, vol. 26, pp. 1057-1068, 1993.

Abstract:   Echocardiography has been widely used as a real-time non-invasive clinical tool to diagnose cardiac functions. Due to the poor quality and inherent ambiguity in echocardiograms, it is difficult to detect the myocardial boundaries of the left ventricle. Many existing methods are semi-automatic and detect cardiac boundaries by serial computation, which is too slow to be practical in real applications. In this paper, a new method for detecting the endocardial boundary by using a Hopfield neural network is proposed. Taking advantage of parallel computation and energy convergence capability in the Hopfield network, this method is faster and more stable for the detection of the endocardial border. Moreover, neither manual operations nor a priori assumptions are needed in this method. Experiments on several LV echocardiograms and clinical validation have shown the effectiveness of our method in these patient studies.

   12.   LINDEBERG, T, "DETECTING SALIENT BLOB-LIKE IMAGE STRUCTURES AND THEIR SCALES WITH A SCALE-SPACE PRIMAL SKETCH - A METHOD FOR FOCUS-OF-ATTENTION," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 11, pp. 283-318, 1993.

Abstract:   This article presents: (i) a multiscale representation of grey-level shape called the scale-space primal sketch, which makes explicit both features in scale-space and the relations between structures at different scales, (ii) a methodology for extracting significant blob-like image structures from this representation, and (iii) applications to edge detection, histogram analysis, and junction classification demonstrating how the proposed method can be used for guiding later-stage visual processes. The representation gives a qualitative description of image structure, which allows for detection of stable scales and associated regions of interest in a solely bottom-up data-driven way. In other words, it generates coarse segmentation cues, and can hence be seen as preceding further processing, which can then be properly tuned. It is argued that once such information is available, many other processing tasks can become much simpler. Experiments on real imagery demonstrate that the proposed theory gives intuitive results.
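For orientation, the underlying multiscale representation is the standard Gaussian scale-space; in the usual notation (not necessarily the article's own),

    L(x, y; t) = (g(\cdot; t) * f)(x, y), \qquad g(x, y; t) = \frac{1}{2 \pi t} e^{-(x^2 + y^2)/(2t)},

and blobs together with their scales are picked out as structures that remain stable over intervals of the scale parameter t.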

   13.   COHEN, LD, and COHEN, I, "FINITE-ELEMENT METHODS FOR ACTIVE CONTOUR MODELS AND BALLOONS FOR 2-D AND 3-D IMAGES," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 15, pp. 1131-1147, 1993.

Abstract:   The use of energy-minimizing curves, known as ''snakes,'' to extract features of interest in images has been introduced by Kass, Witkin and Terzopoulos [23]. A balloon model was introduced in [12] as a way to generalize and solve some of the problems encountered with the original method. A 3-D generalization of the balloon model as a 3-D deformable surface, which evolves in 3-D images, is presented. It is deformed under the action of internal and external forces attracting the surface toward detected edgels by means of an attraction potential. We also show properties of energy-minimizing surfaces concerning their relationship with 3-D edge points. To solve the minimization problem for a surface, two simplified approaches are shown first, defining a 3-D surface as a series of 2-D planar curves. Then, after comparing the finite-element method and the finite-difference method in the 2-D problem, we solve the 3-D model using the finite-element method, yielding greater stability and faster convergence. This model is applied for segmenting magnetic resonance images.
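For intuition about the balloon generalization referred to above, the 2-D balloon model replaces the plain edge force with a force of the following commonly quoted form (paraphrased, not copied from reference [12]):

    F = k_1 \, n(s) - k \, \frac{\nabla P(v(s))}{\| \nabla P(v(s)) \|}

where n(s) is the unit normal to the curve, the pressure term k_1 n(s) inflates (or, with the opposite sign, deflates) the contour so that it need not start close to the target boundary, and the normalized potential gradient keeps the edge force comparable in magnitude to the pressure term. The 3-D surface model of this paper extends the same idea to deformable surfaces.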

   14.   TSAI, CT, SUN, YN, and CHUNG, PC, "MINIMIZING THE ENERGY OF ACTIVE CONTOUR MODEL USING A HOPFIELD NETWORK," IEE PROCEEDINGS-E COMPUTERS AND DIGITAL TECHNIQUES, vol. 140, pp. 297-303, 1993.

Abstract:   Active contour models (snakes) are commonly used for locating the boundary of an object in computer vision applications. The minimisation procedure is the key problem to solve in the technique of active contour models. In this paper, a minimisation method for an active contour model using Hopfield networks is proposed. Due to its network structure, it lends itself admirably to parallel implementation and is potentially faster than conventional methods. In addition, it retains the stability of the snake model and the possibility for inclusion of hard constraints. Experimental results are given to demonstrate the feasibility of the proposed method in applications of industrial pattern recognition and medical image processing.
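For comparison with the Hopfield formulation above, one conventional alternative is the semi-implicit update used by many snake implementations. A minimal NumPy sketch (uniform alpha and beta on a closed contour; names and parameter values are illustrative, not taken from the paper):

    import numpy as np

    def snake_step(x, y, fx, fy, alpha=0.1, beta=0.1, gamma=1.0):
        """One semi-implicit update of a closed discrete snake.
        x, y   : contour coordinates, NumPy arrays of shape (N,)
        fx, fy : external force sampled at the contour points, shape (N,)
        alpha and beta weight elasticity and rigidity; gamma is the step size.
        """
        n = len(x)
        # Pentadiagonal internal-energy (stiffness) matrix for a closed contour.
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i - 2) % n] = beta
            A[i, (i - 1) % n] = -alpha - 4 * beta
            A[i, i] = 2 * alpha + 6 * beta
            A[i, (i + 1) % n] = -alpha - 4 * beta
            A[i, (i + 2) % n] = beta
        step = np.linalg.inv(A + gamma * np.eye(n))
        return step @ (gamma * x + fx), step @ (gamma * y + fy)

In practice the matrix inverse is computed once and reused while the contour is iterated to convergence.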

   15.   WOLBERG, WH, STREET, WN, and MANGASARIAN, OL, "BREAST CYTOLOGY DIAGNOSIS WITH DIGITAL IMAGE-ANALYSIS," ANALYTICAL AND QUANTITATIVE CYTOLOGY AND HISTOLOGY, vol. 15, pp. 396-404, 1993.

Abstract:   An interactive computer system has been developed for evaluating cytologic features derived directly from a digital scan of breast fine needle aspirate slides. The system uses computer vision techniques to analyze cell nuclei and classifies them using an inductive method based on linear programming. A digital scan of selected areas of the aspirate slide is done by a trained observer, while the analysis of the digitized image is done by an untrained observer. When trained and tested on 119 breast fine needle aspirates (68 benign and 51 malignant) using leave-one-out testing, 90% correctness was achieved. These results indicate that the method is accurate (good intraobserver and interobserver reproducibility) and that an untrained operator can obtain diagnostic results comparable to those achieved visually by experienced observers.

 
1994

   16.   CANNING, J, "A MINIMUM DESCRIPTION LENGTH MODEL FOR RECOGNIZING OBJECTS WITH VARIABLE APPEARANCES (THE VAPOR MODEL)," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 16, pp. 1032-1036, 1994.

Abstract:   Most object recognition systems can only model objects composed of rigid pieces whose appearance depends only on lighting and viewpoint. Many real world objects, however, have variable appearances because they are flexible and/or have a variable number of parts. These objects cannot be easily modeled using current techniques. We propose the use of a knowledge representation called the VAPOR (Variable APpearance Object Representation) model to represent objects with these kinds of variable appearances. The VAPOR model is an idealization of the object; all instances of the model in an image are variations from the ideal appearance. The variations are evaluated by the description length of the data given the model, i.e., the number of information-theoretic bits needed to represent the model and the deviations of the data from the ideal appearance. The shortest length model is chosen as the best description. We demonstrate how the VAPOR model performs in a simple domain of circles and polygons and in the complex domain of finding cloverleaf interchanges in aerial images of roads.

   17.   STORVIK, G, "A BAYESIAN-APPROACH TO DYNAMIC CONTOURS THROUGH STOCHASTIC SAMPLING AND SIMULATED ANNEALING," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 16, pp. 976-986, 1994.

Abstract:   In many applications of image analysis, simply connected objects are to be located in noisy images. During the last 5-6 years active contour models have become popular for finding the contours of such objects. Connected to these models are iterative algorithms for finding the energy-minimizing curves, making the curves behave dynamically through the iterations. These approaches do, however, have several disadvantages. The numerical algorithms that are in use constrain the models that can be used. Furthermore, in many cases only local minima can be achieved. In this paper, we discuss a method for curve detection based on a fully Bayesian approach. A model for image contours which allows the number of nodes on the contours to vary is introduced. Iterative algorithms based on stochastic sampling are constructed, which make it possible to simulate samples from the posterior distribution, making estimates and uncertainty measures of specific quantities available. Further, simulated annealing schemes making the curve move dynamically towards the global minimum energy configuration are presented. In theory, no restrictions on the models are made. In practice, however, computational aspects must be taken into consideration when choosing the models. Much more general models than the one used for active contours may, however, be applied. The approach is applied to ultrasound images of the left ventricle and to Magnetic Resonance images of the human brain, and shows promising results.

   18.   MOSHFEGHI, M, RANGANATH, S, and NAWYN, K, "3-DIMENSIONAL ELASTIC MATCHING OF VOLUMES," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 3, pp. 128-138, 1994.

Abstract:   Registering volumes that have been deformed with respect to each other involves recovery of the deformation. A 3-D elastic matching algorithm has been developed to use surface information for registering volumes. Surface extraction is performed in two steps: extraction of contours in 2-D image planes using active contours, and forming triangular patch surface models from the stack of 2-D contours. One volume is modeled as being deformed with respect to another goal volume. Correspondences between surfaces in the two image volumes are used to warp the deformed volume towards its goal. This process of contour extraction, surface formation and matching, and warping is repeated a number of times, with decreasing image volume stiffness. As the iterations continue the stretched volume is refined towards its goal volume. Registration examples of deformed volumes are presented.

   19.   WANG, Y, and LEE, O, "ACTIVE MESH - A FEATURE SEEKING AND TRACKING IMAGE SEQUENCE REPRESENTATION SCHEME," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 3, pp. 610-624, 1994.

Abstract:   This paper introduces a representation scheme for image sequences using nonuniform samples embedded in a deformable mesh structure. It describes a sequence by nodal positions and colors in a starting frame, followed by nodal displacements in the following frames. The nodal points in the mesh are more densely distributed in regions containing interesting features such as edges and corners, and are dynamically updated to follow the same features in successive frames. They are determined automatically by maximizing feature (e.g., gradient) magnitudes at nodal points, while minimizing interpolation errors within individual elements, and matching errors between corresponding elements. In order to avoid the mesh elements becoming overly deformed, a penalty term is also incorporated, which measures the irregularity of the mesh structure. The notions of shape functions and master elements commonly used in the finite element method have been applied to simplify the numerical calculation of the energy functions and their gradients. The proposed representation is motivated by the active contour or snake model proposed by Kass, Witkin, and Terzopoulos. The current representation retains the salient merit of the original model as a feature tracker based on local and collective information, while facilitating more accurate image interpolation and prediction. Our computer simulations have shown that the proposed scheme can successfully track facial feature movements in head-and-shoulder type of sequences, and more generally, interframe changes that can be modeled as elastic deformation. The treatment for the starting frame also constitutes an efficient representation of arbitrary still images.

   20.   RUAN, S, BRUNO, A, and COATRIEUX, JL, "3-DIMENSIONAL MOTION AND RECONSTRUCTION OF CORONARY-ARTERIES FROM BIPLANE CINEANGIOGRAPHY," IMAGE AND VISION COMPUTING, vol. 12, pp. 683-689, 1994.

Abstract:   A new approach is described for reconstructing coronary arteries from two sequences of projection images. The estimation of motion is performed on three-dimensional line segments (or centrelines), and is based on a 'prediction-projection-optimization' loop. The method copes with time-varying properties, deformations and superpositions of vessels. Experiments using simulated and real data have been carried out, and the results were found to be robust over a full cycle of a human heart. Local and global kinetic features can then be derived to obtain a greater insight into the cardiac functional state.

   21.   DING, K, and GUNASEKARAN, S, "SHAPE FEATURE-EXTRACTION AND CLASSIFICATION OF FOOD MATERIAL USING COMPUTER VISION," TRANSACTIONS OF THE ASAE, vol. 37, pp. 1537-1545, 1994.

Abstract:   Food material shape is often closely related to its quality. Due to the demands of high quality, automated food shape inspection has become an important need for the food industry. Currently, accuracy and speed are two major problems for food shape inspection with computer vision. Therefore, in this study, a fast and accurate computer-vision-based feature extraction and classification system was developed. In the feature extraction stage, a statistical model-based feature extractor (SMB) and a multi-index active model-based (MAM) feature extractor were developed to improve the accuracy of classifications. In the classification stage, first the back-propagation neural network was applied as a multi-index classifier. Then, to speed up training, some minimum indeterminate zone (MIZ) classifiers were developed. Corn kernels, almonds, and animal-shaped crackers were used to test the above techniques. The results showed that accuracy and speed were greatly improved when the MAM feature extractor was used in conjunction with the MIZ classifier.

   22.   XU, G, SEGAWA, E, and TSUJI, S, "ROBUST ACTIVE CONTOURS WITH INSENSITIVE PARAMETERS," PATTERN RECOGNITION, vol. 27, pp. 879-884, 1994.

Abstract:   Active contours, known as snakes, have found wide applications since their first introduction in 1987 by Kass et al. (Int. J. Comput. Vision 1, 321-331). However, one problem with the current models is that the performance depends on proper internal parameters and initial contour position, which, unfortunately, cannot be determined a priori. It is usually a hard job to tune internal parameters and initial contour position. The problem comes from the fact that the internal normal force at each point of contour is also a function of contour shape. To solve this problem, we propose to compensate for this internal normal force so as to make it independent of shape. As a result, the new model works robustly with no necessity to fine-tune internal parameters, and can converge to high curvature points like corners.

   23.   CARLBOM, I, TERZOPOULOS, D, and HARRIS, KM, "COMPUTER-ASSISTED REGISTRATION, SEGMENTATION, AND 3D RECONSTRUCTION FROM IMAGES OF NEURONAL TISSUE-SECTIONS," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 13, pp. 351-362, 1994.

Abstract:   Neuroscientists have studied the relationship between nerve cell morphology and function for over a century. To pursue these studies, they need accurate three-dimensional models of nerve cells that facilitate detailed anatomical measurement and the identification of internal structures. Although serial transmission electron microscopy has been a source of such models since the mid 1960s, model reconstruction and analysis remain very time consuming. We have developed a new approach to reconstructing and visualizing 3D nerve cell models from serial microscopy. An interactive system exploits recent computer graphics and computer vision techniques to significantly reduce the time required to build such models. The key ingredients of the system are a digital ''blink comparator'' for section registration, ''snakes,'' or active deformable contours, for semiautomated cell segmentation, and voxel-based techniques for 3D reconstruction and visualization of complex cell volumes with internal structures.

   24.   THIRION, JP, "DIRECT EXTRACTION OF BOUNDARIES FROM COMPUTED-TOMOGRAPHY SCANS," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 13, pp. 322-328, 1994.

Abstract:   This paper presents a method, based on the Filtered Backprojection technique (FBP), to extract directly the boundaries of X-ray images, without previous image reconstruction. We preprocess the raw data in order to compute directly the reconstructed values of the gradient or of the Laplacian at any location in the plane (defined with real coordinates). The reconstructed value of the gradient and of the Laplacian correspond to the exact mathematical definition of the differentials of the image. For noisy data, we propose also to use an extension of existing FBP techniques, adapted to the computation of the gradient and of the Laplacian. Finally, we show how to use the corresponding operators to perform the segmentation of a slice, without image reconstruction. Images of the reconstructed gradient, Laplacian, and segmented objects are presented.

   25.   DAYANAND, S, UTTAL, WR, SHEPHERD, T, and LUNSKIS, C, "A PARTICLE SYSTEM MODEL FOR COMBINING EDGE INFORMATION FROM MULTIPLE SEGMENTATION MODULES," CVGIP-GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 56, pp. 219-230, 1994.

Abstract:   A model for fusing the output of multiple segmentation modules is presented. The model is based on the particle system approach to modeling dynamic objects from computer graphics. The model also has built-in capabilities to extract regions, thin the edge image, remove ''twigs,'' and close gaps in the contours. The model functions both as an effective data fusion technique and as a model of an important human visual process. (C) 1994 Academic Press, Inc.

   26.   MANHAEGHE, C, LEMAHIEU, I, VOGELAERS, D, and COLARDYN, F, "AUTOMATIC INITIAL ESTIMATION OF THE LEFT-VENTRICULAR MYOCARDIAL MIDWALL IN EMISSION TOMOGRAMS USING KOHONEN MAPS," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 16, pp. 259-266, 1994.

Abstract:   A new method to make an automatic initial estimation of the position of the middle of the left ventricular (LV) myocardial wall (LV myocardial midwall) in emission tomograms is presented. This method eliminates the manual interaction still required by other, more accurate LV delineation algorithms, and which consists of indicating the LV long axis and/or the LV extremities. A well-known algorithm from the world of neural networks, Kohonen's self-organizing maps, was adapted to use general shapes and to behave well for data with large background noise.

   27.   RONFARD, R, "REGION-BASED STRATEGIES FOR ACTIVE CONTOUR MODELS," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 13, pp. 229-251, 1994.

Abstract:   The variational method has been introduced by Kass et al. (1987) in the field of object contour modeling, as an alternative to the more traditional edge detection-edge thinning-edge sorting sequence. Since the method is based on a pre-processing of the image to yield an edge map, it shares the limitations of the edge detectors it uses. In this paper, we propose a modified variational scheme for contour modeling, which uses no edge detection step, but local computations instead-only around contour neighborhoods-as well as an ''anticipating'' strategy that enhances the modeling activity of deformable contour curves. Many of the concepts used were originally introduced to study the local structure of discontinuity, in a theoretical and formal statement by Leclerc & Zucker (1987), but never in a practical situation such as this one. The first part of the paper introduces a region-based energy criterion for active contours, and gives an examination of its implications, as compared to the gradient edge map energy of snakes. Then, a simplified optimization scheme is presented, accounting for internal and external energy in separate steps. This leads to a complete treatment, which is described in the last sections of the paper (4 and 5). The optimization technique used here is mostly heuristic, and is thus presented without a formal proof, but is believed to fill a gap between snakes and other useful image representations, such as split-and-merge regions or mixed line-labels image fields.

   28.   CHEN, LH, LIN, WC, and LIAO, HYM, "RECOVERY OF SUPERQUADRIC PRIMITIVE FROM STEREO IMAGES," IMAGE AND VISION COMPUTING, vol. 12, pp. 285-295, 1994.

Abstract:   This paper presents an integrated approach to recovering the superquadric primitive from stereo images. While the depth data obtained from stereo matching algorithms are always sparse and noisy, to extract an object from the scene and obtain a smoothed depth map of the object, occluding contour detection and surface reconstruction are incorporated into the recovery process of superquadrics. The algorithm combines the recovery processes of occluding contour, surface and volumetric models in a cooperative and synergetic manner. The performance of the algorithm is demonstrated with two examples using real images.

   29.   YOUNG, AA, IMAI, H, CHANG, CN, and AXEL, L, "2-DIMENSIONAL LEFT-VENTRICULAR DEFORMATION DURING SYSTOLE USING MAGNETIC-RESONANCE-IMAGING WITH SPATIAL MODULATION OF MAGNETIZATION," CIRCULATION, vol. 89, pp. 740-752, 1994.

Abstract:   Background Myocardial tissue tagging with the use of magnetic resonance imaging allows noninvasive regional analysis of heart wall motion and deformation. However, any evaluation of the effect of disease or treatment requires a baseline reference of normal values and variation. We studied the two-dimensional motion of material points imaged within the left ventricular wall using spatial modulation of magnetization (SPAMM) in 12 normal human volunteers. Methods and Results Five parallel short-axis and five parallel long-axis slices were acquired at five times during systole. SPAMM tags were generated at end diastole using a 7-mm grid. Intersection point data were analyzed for displacement, rotation, and torsion, and triangles of points were analyzed for local rotation and principal strains. Short-axis displacement was the least in the septum for all longitudinal levels (P<.001). Torsion about the long axis was uniform circumferentially because of the motion of the centroids used to reference the rotation. In the long-axis images, the base displaced longitudinally toward the apex, with the posterior wall moving farther than the anterior wall (13.4+/-2.2 versus 9.7+/-1.8 mm, P<.001) in this direction. The largest principal strain (maximum lengthening) was approximately radially oriented in both views. In the short-axis images, the minimum principal strain (maximum shortening) increased in magnitude toward the apex (P<.001) with little circumferential variation, except at midventricle, where the anterior wall showed greater contraction than the posterior wall (-0.21+/-0.03 versus -0.19+/-0.02, P<.02). Conclusions Consistent regional variations in deformation are seen in the normal human heart. Displacement and maximum shortening strains are well characterized with two-dimensional magnetic resonance tagging; however, higher-resolution images will be required to study transmural variations.

   30.   KOEPFLER, G, LOPEZ, C, and MOREL, JM, "A MULTISCALE ALGORITHM FOR IMAGE SEGMENTATION BY VARIATIONAL METHOD," SIAM JOURNAL ON NUMERICAL ANALYSIS, vol. 31, pp. 282-299, 1994.

Abstract:   Most segmentation algorithms are composed of several procedures: split and merge, small region elimination, boundary smoothing,..., each depending on several parameters. The introduction of an energy to minimize leads to a drastic reduction of these parameters. The authors prove that the most simple segmentation tool, the ''region merging'' algorithm, made according to the simplest energy, is enough to compute a local energy minimum belonging to a compact class and to achieve the job of most of the tools mentioned above. The authors explain why ''merging'' in a variational framework leads to a fast multiscale, multichannel algorithm, with a pyramidal structure. The obtained algorithm is O(n ln n), where n is the number of pixels of the picture. This fast algorithm is applied to make grey level and texture segmentation and experimental results are shown.
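The ''simplest energy'' referred to above is the piecewise-constant (Mumford-Shah type) segmentation energy; in standard notation (a sketch following common treatments, not quoted from the paper),

    E(u, K) = \sum_i \int_{O_i} (g - u_i)^2 \, dx + \lambda \, \ell(K),

where the O_i are the regions with mean grey levels u_i, K is the set of region boundaries, and \ell(K) is its total length. Merging two adjacent regions O_i and O_j removes their common boundary, so the merge lowers the energy exactly when

    \frac{|O_i| \, |O_j|}{|O_i| + |O_j|} (u_i - u_j)^2 \le \lambda \, \ell(\partial(O_i, O_j)),

which is the local test a region-merging pass applies at scale \lambda.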

   31.   CALWAY, AD, and WILSON, R, "CURVE EXTRACTION IN IMAGES USING A MULTIRESOLUTION FRAMEWORK," CVGIP-IMAGE UNDERSTANDING, vol. 59, pp. 359-366, 1994.

Abstract:   A multiresolution approach to curve extraction in images is described. Based on a piecewise linear representation of curves, the scheme combines an efficient method of extracting line segments with a grouping process to identify curve traces. The line segments correspond to linear features defined at appropriate spatial resolutions within a quadtree structure and are extracted using a hierarchical decision process based on frequency domain properties. Implementation is achieved through the use of the multiresolution Fourier transform, a linear transform providing spatially localized estimates of the frequency spectrum over multiple scales. The scheme is simple to implement and computationally inexpensive, and results of experiments performed on natural images demonstrate that its performance compares favorably with that of existing methods. (C) 1994 Academic Press, Inc.

   32.   NELSON, RC, "FINDING LINE SEGMENTS BY STICK GROWING," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 16, pp. 519-523, 1994.

Abstract:   A method is described for extracting lineal features from an image using extended local information to provide robustness and sensitivity. The method utilizes both gradient magnitude and direction information, and incorporates explicit lineal and end-stop terms. These terms are combined nonlinearly to produce an energy landscape in which local minima correspond to lineal features called sticks that can be represented as line segments. A hill climbing (stick-growing) process is used to find these minima. The method is compared to two others, and found to have improved gap-crossing characteristics.

   33.   OSULLIVAN, F, and QIAN, MJ, "A REGULARIZED CONTRAST STATISTIC FOR OBJECT BOUNDARY ESTIMATION - IMPLEMENTATION AND STATISTICAL EVALUATION," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 16, pp. 561-570, 1994.

Abstract:   We propose an optimization approach to the estimation of a simple closed curve describing the boundary of an object represented in an image. The problem arises in a variety of applications, such as template matching schemes for medical image registration. A regularized optimization formulation with an objective function that measures the normalized image contrast between the inside and outside of a boundary is proposed. Numerical methods are developed to implement the approach, and a set of simulation studies are carried out to quantify statistical performance characteristics. One set of simulations models emission computed tomography (ECT) images; a second set considers images with a locally coherent noise pattern. In both cases, the error characteristics are found to be quite encouraging. The approach is highly automated, which offers some practical advantages over currently used technologies in the medical imaging field.

   34.   KUMAR, S, and GOLDGOF, D, "AUTOMATIC TRACKING OF SPAMM GRID AND THE ESTIMATION OF DEFORMATION PARAMETERS FROM CARDIAC MR-IMAGES," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 13, pp. 122-132, 1994.

Abstract:   In this paper, we present a new approach for the automatic tracking of SPAMM (Spatial Modulation of Magnetization) grid in cardiac MR images and consequent estimation of deformation parameters. The tracking is utilized to extract grid points from MR images and to establish correspondences between grid points in images taken at consecutive frames. These correspondences are used with a thin plate spline model to establish a mapping from one image to the next. This mapping is then used for motion and deformation estimation. Spatio-temporal tracking of SPAMM grid is achieved by using snakes-active contour models with an associated energy functional. We present a minimizing strategy which is suitable for tracking the SPAMM grid. By continuously minimizing their energy functionals, the snakes lock on to and follow the in-slice motion and deformation of the SPAMM grid. The proposed algorithm was tested with excellent results on 123 images (three data sets each a multiple slice 2D, 16 phase Cine study, three data sets each a multiple slice 2D, 13 phase Cine study and three data sets each a multiple slice 2D, 12 phase Cine study).

   35.   RAPPOPORT, A, HELOR, Y, and WERMAN, M, "INTERACTIVE DESIGN OF SMOOTH OBJECTS WITH PROBABILISTIC POINT CONSTRAINTS," ACM TRANSACTIONS ON GRAPHICS, vol. 13, pp. 156-176, 1994.

Abstract:   Point displacement constraints constitute an attractive technique for interactive design of smooth curves, surfaces, and volumes. The user defines an arbitrary number of ''control points'' on the object and specifies their desired spatial location, while the system computes the object's degrees of freedom so that the constraints are satisfied. A constraint-based interface gives a feeling of direct manipulation of the object. In this article we introduce soft constraints, constraints which do not have to be met exactly. The softness of each constraint serves as a nonisotropic, local shape parameter enabling the user to explore the space of objects conforming to the constraints. Additionally, there is a global shape parameter which determines the amount of similarity of the designed object to a rest shape, or equivalently, the rigidity of the rest shape. We present an algorithm termed probabilistic point constraints (PPC) for implementing soft constraints. The PPC algorithm views constraints as stochastic measurements of the state of a static system. The softness of a constraint is derived from the covariance of the ''measurement.'' The resulting system of probabilistic equations is solved using the Kalman filter, a powerful estimation tool in the theory of stochastic systems. We also describe a user interface using direct-manipulation devices for specifying and visualizing covariances in 2D and 3D. The algorithm is suitable for any object represented as a parametric blend of control points, including most spline representations. The covariance of a constraint provides a continuous transition from exact interpolation to controlled approximation of the constraint. The algorithm involves only linear operations and allows real-time interactive direct manipulation of curves and surfaces on current workstations.
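The constraint-as-measurement view described above rests on the standard Kalman measurement update; a generic sketch (variable names are illustrative and not taken from the paper):

    import numpy as np

    def kalman_measurement_update(x, P, z, H, R):
        """Fuse one soft constraint treated as a noisy measurement.
        x : state estimate (n,)    P : state covariance (n, n)
        z : measurement (m,)       H : measurement matrix (m, n)
        R : measurement covariance (m, m); a large R makes the constraint soft.
        """
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + K @ (z - H @ x)            # corrected state estimate
        P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
        return x_new, P_new

In the PPC setting the state collects the object's control points, each point constraint supplies its own (z, H, R), and the covariance R is exactly the per-constraint ''softness'' the abstract describes.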

   36.   GUNASEKARAN, S, and DING, KX, "USING COMPUTER VISION FOR FOOD QUALITY EVALUATION," FOOD TECHNOLOGY, vol. 48, pp. 151-154, 1994.

   37.   LANDAU, P, and SCHWARTZ, E, "SUBSET WARPING - RUBBER SHEETING WITH CUTS," CVGIP-GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 56, pp. 247-266, 1994.

Abstract:   Image warping, often referred to as ''rubber sheeting,'' represents the deformation of a domain image space into a range image space. In this paper, a technique which extends the definition of a rubber-sheet transformation to allow a polygonal region to be warped into one or more subsets of itself, where the subsets may be multiply connected, is described. To do this, it constructs a set of ''slits'' in the domain image, which correspond to discontinuities and concavities in the range image, using a technique based on generalized Voronoi diagrams. The concept of medial axis is extended to describe inner and outer medial contours of a polygon. Polygonal regions are decomposed into annular subregions, and path homotopies are introduced to describe the annular subregions. These constructions motivate the definition of a ladder, which guides the construction of grid point pairs necessary to effect the warp itself. (C) 1994 Academic Press, Inc.

   38.   WEISS, I, "HIGH-ORDER DIFFERENTIATION FILTERS THAT WORK," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 16, pp. 734-739, 1994.

Abstract:   Reliable derivatives of digital images have always been hard to obtain, especially (but not only) at high orders. We analyze the sources of errors in traditional filters, such as derivatives of the Gaussian, that are used for differentiation. We then study a class of filters which is much more suitable for our purpose, namely filters that preserve polynomials up to a given order. We show that the errors in differentiation can be corrected using these filters. We derive a condition for the validity domain of these filters, involving some characteristics of the filter and of the shape. Our experiments show a very good performance for smooth functions.

   39.   YOUNG, AA, KRAMER, CM, FERRARI, VA, AXEL, L, and REICHEK, N, "3-DIMENSIONAL LEFT-VENTRICULAR DEFORMATION IN HYPERTROPHIC CARDIOMYOPATHY," CIRCULATION, vol. 90, pp. 854-867, 1994.

Abstract:   Background In hypertrophic cardiomyopathy, ejection fraction is normal or increased, and force-length relations are reduced. However, three-dimensional (3D) motion and deformation in vivo have not been assessed in this condition. We have reconstructed the 3D motion of the left ventricle (LV) during systole in 7 patients with hypertrophic cardiomyopathy (HCM) and 12 normal volunteers by use of magnetic resonance tagging. Methods and Results Transmural tagging stripes were automatically tracked to subpixel resolution with an active contour model. A 3D finite-element model was used to interpolate displacement information between short- and long-axis slices and register data on a regional basis. Displacement and strain data were averaged into septal, posterior, lateral, and anterior regions at basal, midventricular, and apical levels. Radial motion (toward the central long axis) decreased slightly in patients with HCM, whereas longitudinal displacement (parallel to the long axis) of the base toward the apex was markedly reduced: 7.5 +/- 2.5 mm (SD) versus 12.5 +/- 2.0 mm, P<.001. Circumferential and longitudinal shortening were both reduced in the septum (P<.01 at all levels). The principal strain associated with 3D maximal contraction was slightly depressed in many regions, significantly in the basal septum (-0.18 +/- 0.05 versus -0.22 +/- 0.02, P<.05). In contrast, LV torsion (twist of the apex about the long axis relative to the base) was greater in HCM patients (19.9 +/- 2.4 degrees versus 14.6 +/- 2.7 degrees, P<.01). Conclusions HCM patients had reduced 3D myocardial shortening on a regional basis; however, LV torsion was increased.

   40.   FUJIMURA, K, YOKOYA, N, and YAMAMOTO, K, "MOTION ANALYSIS OF NONRIGID OBJECTS BY ACTIVE CONTOUR MODELS USING MULTISCALE IMAGES," SYSTEMS AND COMPUTERS IN JAPAN, vol. 25, pp. 81-91, 1994.

Abstract:   This paper considers the approach to dynamic image processing, which is one of the important problems in future medical image processing. The tracking of the object and the analysis of the motion are discussed for the dynamic images of a nonrigid object with smooth shape, motion and deformation, which is the case in most medical images. This approach is based on an active contour model defined by an energy function in terms of both intra- and interframe constraints for the contour of the object. The contour of the target object is extracted and tracked by minimizing the energy function using multiscale dynamic programming, and the motion is analyzed. The multiscale dynamic programming proposed in this paper adjusts the search neighborhood of the dynamic programming according to the scale. The coarse or fine neighborhood is defined for the coarse and fine scales, respectively, and the energy is minimized starting from the coarse scale and shifting to the fine scale. By this scheme, a large motion and deformation of the object can be handled. The proposed motion tracking method has been applied successfully to the dynamic image in the ''behavioral analysis of a slug aiming at the analysis of the neural mechanism of learning and memory formation in slugs,'' as well as to dynamic echocardiographic images. In the first application, the positive maximum of the curvature along the contour is extracted in the motion analysis as a characteristic point invariant to the deformation of the object. Then the shift of that point is traced. By this approach, the rough motion of the object can be estimated.

 
1995

   41.   UEDA, N, and MASE, K, "TRACKING MOVING CONTOURS USING ENERGY-MINIMIZING ELASTIC CONTOUR MODELS," INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 9, pp. 465-484, 1995.

Abstract:   This paper proposes a robust method for tracking an object contour in a sequence of images. In this method, both object extraction and tracking problems can be solved simultaneously. Furthermore, it is applicable to the tracking of arbitrary shapes since it does not need a priori knowledge about the object shapes. In the contour tracking, energy-minimizing elastic contour models are utilized, which are newly presented in this paper. The contour tracking is formulated as an optimization problem to find the position that minimizes both the elastic energy of its model and the potential energy derived from the edge potential image that includes a target object contour. We also present an algorithm which efficiently solves energy minimization problems within a dynamic programming framework. The algorithm enables us to obtain an optimal solution even when the variables to be optimized are not ordered. We show the validity and usefulness of the proposed method with some experimental results.
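For context, the usual dynamic-programming recurrence for minimizing a snake-type energy over ordered contour points (a generic sketch in the Amini style, not this paper's algorithm, which additionally handles unordered variables) is

    D_i(v_i, v_{i+1}) = \min_{v_{i-1}} \Big[ D_{i-1}(v_{i-1}, v_i) + E_int(v_{i-1}, v_i, v_{i+1}) + E_ext(v_i) \Big],

where each v_i ranges over a small window of candidate positions for the i-th contour point; backtracking through the tables D_i recovers the optimal polygonal contour within the search window.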

   42.   FUA, P, and LECLERC, YG, "OBJECT-CENTERED SURFACE RECONSTRUCTION - COMBINING MULTIIMAGE STEREO AND SHADING," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 16, pp. 35-56, 1995.

Abstract:   Our goal is to reconstruct both the shape and reflectance properties of surfaces from multiple images. We argue that an object-centered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rigid object), and self-occlusions. We then present a specific object-centered reconstruction method and its implementation. The method begins with an initial estimate of surface shape provided, for example, by triangulating the result of conventional stereo. The surface shape and reflectance properties are then iteratively adjusted to minimize an objective function that combines information from multiple input images. The objective function is a weighted sum of stereo, shading, and smoothness components, where the weight varies over the surface. For example, the stereo component is weighted more strongly where the surface projects onto highly textured areas in the images, and less strongly otherwise. Thus, each component has its greatest influence where its accuracy is likely to be greatest. Experimental results on both synthetic and real images are presented.

   43.   BITTAR, E, TSINGOS, N, and GASCUEL, MP, "AUTOMATIC RECONSTRUCTION OF UNSTRUCTURED 3D DATA - COMBINING A MEDIAL AXIS AND IMPLICIT SURFACES," COMPUTER GRAPHICS FORUM, vol. 14, pp. C457-C468, 1995.

Abstract:   This paper presents a new method that combines a medial axis and implicit surfaces in order to reconstruct a 3D solid from an unstructured set of points scattered on the object's surface. The representation produced is based on iso-surfaces generated by skeletons, and is a particularly compact way of defining a smooth free-form solid. The method is based on the minimisation of an energy representing a ''distance'' between the set of data points and the iso-surface, resembling previous research (19). Initialisation, however, is more robust and efficient since it is based on computation of the medial axis of the set of points. Instead of subdividing existing skeletons in order to refine the object's surface, a new reconstruction algorithm progressively selects skeleton-points from the precomputed medial axis using a heuristic principle based on a ''local energy'' criterion. This drastically speeds up the reconstruction process. Moreover, using the medial axis allows reconstruction of objects with complex topology and geometry, like objects that have holes and branches or that are composed of several connected components. This process is fully automatic. The method has been successfully applied to both synthetic and real data.

   44.   VELTKAMP, RC, and WESSELINK, W, "MODELING 3D CURVES OF MINIMAL ENERGY," COMPUTER GRAPHICS FORUM, vol. 14, pp. C97-C110, 1995.

Abstract:   Modeling a curve through minimizing its energy yields an overall smooth curve. A common way to model shape features is to perform the minimization subject to a number of interpolation constraints. This way of modeling is attractive because the designer is not bothered with the precise representation of the curve (e.g., control points). However, local shape specification by means of interpolation constraints is very limited. On the other hand, local deformation by repositioning control points is powerful but very laborious, and destroys the minimal energy property. In this paper, deform operators with built-in energy terms that have an intuitive effect are introduced for 3D curve modeling. These operators allow local shape modification and do justice to the energy minimization way of modeling.

   45.   BUCK, TD, EHRICKE, HH, STRASSER, W, and THURFJELL, L, "3-D SEGMENTATION OF MEDICAL STRUCTURES BY INTEGRATION OF RAYCASTING WITH ANATOMIC KNOWLEDGE," COMPUTERS & GRAPHICS, vol. 19, pp. 441-449, 1995.

Abstract:   We present a graphically interactive procedure which is used to register a digital anatomic brain atlas with the tomographic patient volume. Patient structures to be segmented are outlined by local elastic deformation of corresponding objects from the anatomy model. This is performed in voxel space using a cost minimization procedure. The anatomic knowledge acquired in this manner is stored in a patient specific volume dataset and guides a raycaster with respect to the localization of object surfaces in order to control the result of the deformation process. Thus objects, which so far could not have been segmented appropriately or only after tedious manual editing efforts, become accessible by physicians. We present several results demonstrating the high quality and practicality of the method.

   46.   KISWORO, M, VENKATESH, S, and WEST, GAW, "DETECTION OF CURVED EDGES AT SUBPIXEL ACCURACY USING DEFORMABLE MODELS," IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING, vol. 142, pp. 304-312, 1995.

Abstract:   One approach to the detection of curves at subpixel accuracy involves the reconstruction of such features from subpixel edge data points. A new technique is presented for reconstructing and segmenting curves with subpixel accuracy using deformable models. A curve is represented as a set of interconnected Hermite splines forming a snake generated from the subpixel edge information that minimises the global energy functional integral over the set. While previous work on the minimisation was mostly based on the Euler-Lagrange transformation, the authors use the finite element method to solve the energy minimisation equation. The advantages of this approach over the Euler-Lagrange transformation approach are that the method is straightforward, leads to positive m-diagonal symmetric matrices, and has the ability to cope with irregular geometries such as junctions and corners. The energy functional integral solved using this method can also be used to segment the features by searching for the location of the maxima of the first derivative of the energy over the elementary curve set.

   47.   Kuszyk, BS, Ney, DR, and Fishman, EK, "The current state of the art in three dimensional oncologic imaging: An overview," INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, vol. 33, pp. 1029-1039, 1995.

Abstract:   Purpose: To provide an overview of the methods and clinical applications of three-dimensional (3D) medical imaging in the oncologic patient. Methods and Materials: We briefly outline the techniques currently used to create 3D medical images with an emphasis on their strengths and shortcomings as they relate to oncologic imaging and radiation therapy planning. We then discuss some of the most important and promising oncologic applications of 3D imaging and suggest likely future directions in this rapidly developing field. Results: Since the first application of 3D techniques to medical data over a decade ago, 3D medical images have evolved from relatively crude representations of musculoskeletal abnormalities to detailed and accurate representations of a variety of soft tissue, vascular, and oncologic pathology. The rapid development of both computer hardware and software coupled with the application of 3D techniques to a variety of imaging modalities has expanded the clinical applications of this technology dramatically. Conclusions: 3D medical images are clinically practical tools for oncologic evaluation and effective radiation therapy planning.

   48.   Broggi, A, and Berte, S, "Vision-based road detection in automotive systems: A real-time expectation-driven approach," JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, vol. 3, pp. 325-348, 1995.

Abstract:   The main aim of this work is the development of a vision-based road detection system fast enough to cope with the difficult real-time constraints imposed by moving vehicle applications. The hardware platform, a special-purpose massively parallel system, has been chosen to minimize system production and operational costs. This paper presents a novel approach to expectation-driven low-level image segmentation, which can be mapped naturally onto mesh-connected massively parallel SIMD architectures capable of handling hierarchical data structures. The input image is assumed to contain a distorted version of a given template; a multiresolution stretching process is used to reshape the original template in accordance with the acquired image content, minimizing a potential function. The distorted template is the process output.

   49.   KRAITCHMAN, DL, YOUNG, AA, CHANG, CN, and AXEL, L, "SEMIAUTOMATIC TRACKING OF MYOCARDIAL MOTION IN MR TAGGED IMAGES," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 14, pp. 422-433, 1995.

Abstract:   Tissue tagging using magnetic resonance (MR) imaging has enabled quantitative noninvasive analysis of motion and deformation in vivo. One method for MR tissue tagging is Spatial Modulation of Magnetization (SPAMM). Manual detection and tracking of tissue tags by visual inspection remains a time-consuming and tedious process. We have developed an interactively guided semi-automated method of detecting and tracking tag intersections in cardiac MR images. A template matching approach combined with a novel adaptation of active contour modeling permits rapid analysis of MR images. We have validated our technique using MR SPAMM images of a silicone gel phantom with controlled deformations. Average discrepancy between theoretically predicted and semi-automatically selected tag intersections was 0.30 mm +/- 0.17 [mean +/- SD, NS (P < 0.05)]. Cardiac SPAMM images of normal volunteers and diseased patients also have been evaluated using our technique.

   50.   MARCHANT, JA, and ONYANGO, CM, "FITTING GREY LEVEL POINT DISTRIBUTION MODELS TO ANIMALS IN SCENES," IMAGE AND VISION COMPUTING, vol. 13, pp. 3-12, 1995.

Abstract:   Point distribution models allow a compact description of an object's shape to be found from a set of example images. In previous work by the first author, a method of incorporating grey level information into a PDM was developed. This paper investigates fitting such a composite model to image data consisting of a set of images of a pig viewed from above. Model fitting is achieved by optimizing an objective function consisting of two components, one that measures the degree of grey level correspondence between the model and the data, and the other that measures how well the boundary of the model fits the data. The shape of the objective function as the model parameters are varied is investigated, and an optimization strategy developed. The strategy is used to find a pig in a number of images with backgrounds of increasing complexity. The strategy performs well with both an uncluttered and a realistic background. The performance with a simulated noisy background is not so good when the boundary component is included in the objective function. This is a result of the boundary component being more sensitive to noise in the image. In this case, it is better to optimize with the grey level component alone. A problem is identified when the grey level distribution changes significantly as the pig moves under the light source. It is suggested that this could be overcome by including variations in grey level distribution as modes in the model.

   51.   DELANGES, P, BENOIS, J, and BARBA, D, "ACTIVE CONTOURS APPROACH TO OBJECT TRACKING IN IMAGE SEQUENCES WITH COMPLEX BACKGROUND," PATTERN RECOGNITION LETTERS, vol. 16, pp. 171-178, 1995.

Abstract:   Active contour models (''snakes'') are a powerful tool for deformable object tracking in moving images. But the existing snake models are not well-adapted for tracking corners and objects on a complex background. In this paper, we present a novel active contour model, the ''Adjustable Polygons'', which is a set of active segments that can fit any object shape (including corners). A new energy based on textural characteristics of objects is also proposed, in order to resolve conflict situations while tracking objects on multiple contour background.

   52.   DAVATZIKOS, CA, and PRINCE, JL, "AN ACTIVE CONTOUR MODEL FOR MAPPING THE CORTEX," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 14, pp. 65-80, 1995.

Abstract:   A new active contour model for finding and mapping the outer cortex in brain images is developed. A cross-section of the brain cortex is modeled as a ribbon, and a constant speed mapping of its spine is sought. A variational formulation, an associated force balance condition, and a numerical approach are proposed to achieve this goal. The primary difference between this formulation and that of snakes is in the specification of the external force acting on the active contour. A study of the uniqueness and fidelity of solutions is made through convexity and frequency domain analyses, and a criterion for selection of the regularization coefficient is developed. Examples demonstrating the performance of this method on simulated and real data are provided.

   53.   ASHTON, EA, BERG, MJ, PARKER, KJ, WEISBERG, J, CHEN, CW, and KETONEN, L, "SEGMENTATION AND FEATURE-EXTRACTION TECHNIQUES, WITH APPLICATIONS TO MRI HEAD STUDIES," MAGNETIC RESONANCE IN MEDICINE, vol. 33, pp. 670-677, 1995.

Abstract:   To obtain a three-dimensional reconstruction of the hippocampus from a volumetric MRI head study, it is necessary to separate that structure not only from the surrounding white matter, but also from contiguous areas of gray matter-the amygdala and cerebral cortex. At present it is necessary for a physician to manually segment the hippocampus on each slice of the volume to obtain such a reconstruction. This process is time consuming, and is subject to inter- and intra-operator variation as well as large discontinuities between slices. We propose a novel technique, making use of a combination of gray scale and edge-detection algorithms and some a priori knowledge, by which a computer may make an unsupervised identification of a given structure through a series of contiguous images. This technique is applicable even if the structure includes so-called false contours or missing contours. Applications include three-dimensional reconstruction of difficult-to-segment regions of the brain, and volumetric measurements of structures from series of two-dimensional images.

   54.   WOLBERG, WH, STREET, WN, and MANGASARIAN, OL, "IMAGE-ANALYSIS AND MACHINE LEARNING APPLIED TO BREAST-CANCER DIAGNOSIS AND PROGNOSIS," ANALYTICAL AND QUANTITATIVE CYTOLOGY AND HISTOLOGY, vol. 17, pp. 77-87, 1995.

Abstract:   Fine needle aspiration (FNA) accuracy is limited by, among other factors, the subjective interpretation of the aspirate. We have increased breast FNA accuracy by coupling digital image analysis methods with machine learning techniques. Additionally, our mathematical approach captures nuclear features (''grade'') that are prognostically more accurate than are estimates based on tumor size and lymph node status. An interactive computer system evaluates, diagnoses and determines prognosis based on nuclear features derived directly from a digital scan of FNA slides. A consecutive series of 569 patients provided the data for the diagnostic study. A 166-patient subset provided the data for the prognostic study. An additional 75 consecutive, new patients provided samples to test the diagnostic system. The projected prospective accuracy of the diagnostic system was estimated to be 97% by 10-fold cross-validation, and the actual accuracy on 75 new samples was 100%. The projected prospective accuracy of the prognostic system was estimated to be 86% by leave-one-out testing.

   55.   NOBLE, JA, "FROM INSPECTION TO PROCESS UNDERSTANDING AND MONITORING - A VIEW ON COMPUTER VISION IN MANUFACTURING," IMAGE AND VISION COMPUTING, vol. 13, pp. 197-214, 1995.

Abstract:   We describe some of the current challenges in developing and validating computer vision algorithms for manufacturing applications. We focus on the general theme of template-based processing, where geometric templates provide a basis for local feature analysis, registration and recognition (via constraint-based modelling) and model adaptation using statistical methods. We describe recent successful applications of template-based techniques in the areas of manufacturing part inspection and process understanding and monitoring. We also examine the question 'Why are there so few computer vision applications in manufacturing?' We suggest that two of the major bottlenecks remain the speed of algorithm development and the validation of algorithm performance with a limited data set. Finally, we identify some of what we see as emerging and future potential application areas of computer vision methods in manufacturing, where the current trend is to provide tools for continuous product improvement rather than (final) product inspection, and 3D measurement capabilities.

   56.   Bothe, HH, and vonBotticher, N, "Key-picture selection for the analysis of visual speech with fuzzy methods," ADVANCES IN INTELLIGENT COMPUTING - IPMU '94, LECTURE NOTES IN COMPUTER SCIENCE, vol. 945, pp. 577-583, 1995.

Abstract:   The goal of the described work is to model visual articulation movements of prototypic speakers with respect to custom-made text. A language-wide extension of the motion model leads to a visible speech synthesis and furthermore to an artificial computer trainer for speechreading. The developed model is based on a set of specific video key-pictures and the interpolation of interim pictures. The key-picture selection is realized by a fuzzy-c-means classification algorithm.

   57.   AYACHE, N, "MEDICAL COMPUTER VISION, VIRTUAL-REALITY AND ROBOTICS," IMAGE AND VISION COMPUTING, vol. 13, pp. 295-313, 1995.

Abstract:   The automated analysis of 3D medical images can improve both diagnosis and therapy significantly. This automation raises a number of new fascinating research problems in the fields of computer vision, graphics and robotics. In this paper, I propose a list of such problems after a review of the current major 3D imaging modalities, and a description of the related medical needs. I then present some of the past and current work done in our research group EPIDAURE* at INRIA, on the following topics: segmentation of 3D images; 3D shape modelling; 3D rigid and nonrigid registration; 3D motion analysis; and 3D simulation of therapy. Most topics are discussed in a synthetic manner, and illustrated by results. Rigid matching is treated more thoroughly as an illustration of a transfer from computer vision towards 3D image processing. The later topics are illustrated by preliminary results, and a number of promising research tracks are suggested.

   58.   SCHWARZINGER, M, NOLL, D, and VONSEELEN, W, "OBJECT RECOGNITION WITH CONSTRAINED ELASTIC MODELS," MATHEMATICAL AND COMPUTER MODELLING, vol. 22, pp. 163-184, 1995.

Abstract:   We present a model-based method for object identification in images of natural scenes. It has successfully been implemented for the classification of cars based on their rear view. In a first step, characteristic features such as lines and corners are detected within the image. Generic models of object-classes, described by the same set of features, are stored in a database. Each model represents a whole class of objects (e.g., passenger cars, vans, big trucks). In a preprocessing stage, the most probable object is selected by means of a corner-feature based Hough transform. This transformation also suggests the position and scale of the object in the image. Guided by similarity measures, the model is then aligned with image features using a matching algorithm based on the elastic net technique [1]. During this iterative process, the model is allowed to undergo changes in scale, position and certain deformations. Deformations are kept within limits such that one model can fit to all objects belonging to the same class, but not to objects of other classes. In each iteration step, quantities to assess the matching process are obtained.

   59.   GOSHTASBY, A, and TURNER, DA, "SEGMENTATION OF CARDIAC CINE MR-IMAGES FOR EXTRACTION OF RIGHT AND LEFT-VENTRICULAR CHAMBERS," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 14, pp. 56-64, 1995.

Abstract:   A two-stage algorithm for extraction of the ventricular chambers (endocardial surfaces) in flow-enhanced magnetic resonance images is described. In the first stage, the approximate locations and sizes of the endocardial surfaces are determined by intensity thresholding. In the second stage, points on each approximated surface are repositioned to nearest locally maximum gradient magnitude points and a generalized cylinder is fitted to them. Examples of ventricular chambers in cine MR images determined by this algorithm are presented.

   60.   HOWARTH, R, "INTERPRETING A DYNAMIC AND UNCERTAIN WORLD - HIGH-LEVEL VISION," ARTIFICIAL INTELLIGENCE REVIEW, vol. 9, pp. 37-63, 1995.

Abstract:   When interpreting a dynamic and uncertain world it is important to have a high-level vision component that can guide the reasoning of the whole vision system. This guidance is provided by an attentional mechanism that exploits knowledge of the specific problem being solved. Here we survey work relevant to the development of such an attentional mechanism, using surveillance as an application domain to tie together issues of spatial representation, events, behaviour, control and planning. The paper culminates in a brief description of HIVIS-WATCHER, a program that makes use of all these areas.

   61.   SCLAROFF, S, and PENTLAND, AP, "MODAL MATCHING FOR CORRESPONDENCE AND RECOGNITION," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 17, pp. 545-561, 1995.

Abstract:   Modal matching is a new method for establishing correspondences and computing canonical descriptions. The method is based on the idea of describing objects in terms of generalized symmetries, as defined by each object's eigenmodes. The resulting modal description is used for object recognition and categorization, where shape similarities are expressed as the amounts of modal deformation energy needed to align the two objects. In general, modes provide a global-to-local ordering of shape deformation and thus allow for selecting which types of deformations are used in object alignment and comparison. In contrast to previous techniques, which required correspondence to be computed with an initial or prototype shape, modal matching utilizes a new type of finite element formulation that allows for an object's eigenmodes to be computed directly from available image information. This improved formulation provides greater generality and accuracy, and is applicable to data of any dimensionality. Correspondence results with 2D contour and point feature data are shown, and recognition experiments with 2D images of hand tools and airplanes are described.

   62.   GOSHTASBY, A, and SHYU, HL, "EDGE-DETECTION BY CURVE-FITTING," IMAGE AND VISION COMPUTING, vol. 13, pp. 169-177, 1995.

Abstract:   Edge detection is formulated as a curve fitting problem. First, high-gradient pixels are grouped into elongated regions and then a curve is fitted to each. The curve fitting method used in this work does not require solving a system of equations, and therefore is fast. Examples of edge detection by curve fitting on synthetic and real images are presented, and results obtained are compared with those determined by the Laplacian of Gaussian operator.

   63.   WOLBERG, WH, STREET, WN, HEISEY, DM, and MANGASARIAN, OL, "COMPUTERIZED BREAST-CANCER DIAGNOSIS AND PROGNOSIS FROM FINE-NEEDLE ASPIRATES," ARCHIVES OF SURGERY, vol. 130, pp. 511-516, 1995.

Abstract:   Objective: To use digital image analysis and machine learning to (1) improve breast mass diagnosis based on fine-needle aspirates and (2) improve breast cancer prognostic estimations. Design: An interactive computer system evaluates, diagnoses, and determines prognosis based on cytologic features derived from a digital scan of fine-needle aspirate slides. Setting: The University of Wisconsin (Madison) Departments of Computer Science and Surgery and the University of Wisconsin Hospital and Clinics. Patients: Five hundred sixty-nine consecutive patients (212 with cancer and 357 with benign masses) provided the data for the diagnostic algorithm, and an additional 118 (31 with malignant masses and 87 with benign masses) consecutive, new patients tested the algorithm. One hundred ninety of these patients with invasive cancer and without distant metastases were used for prognosis. Interventions: Surgical biopsy specimens were taken from all cancers and some benign masses. The remaining cytologically benign masses were followed up for a year and surgical biopsy specimens were taken if they changed in size or character. Patients with cancer received standard treatment. Outcome Measures: Cross validation was used to project the accuracy of the diagnostic algorithm and to determine the importance of prognostic features. In addition, the mean errors were calculated between the actual times of distant disease occurrence and the times predicted using various prognostic features. Statistical analyses were also done. Results: The predicted diagnostic accuracy was 97% and the actual diagnostic accuracy on 118 new samples was 100%. Tumor size and lymph node status were weak prognosticators compared with nuclear features, in particular those measuring nuclear size. Compared with the actual time for recurrence, the mean error of predicted times for recurrence with the nuclear features was 17.9 months and was 20.1 months with tumor size and lymph node status (P=.11). Conclusion: Computer technology will improve breast fine-needle aspirate accuracy and prognostic estimations.

   64.   LUNDERVOLD, A, and STORVIK, G, "SEGMENTATION OF BRAIN PARENCHYMA AND CEREBROSPINAL-FLUID IN MULTISPECTRAL MAGNETIC-RESONANCE IMAGES," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 14, pp. 339-349, 1995.

Abstract:   This paper presents a new method to segment brain parenchyma and cerebrospinal fluid spaces automatically in routine axial spin echo multispectral MR images. The algorithm simultaneously incorporates information about anatomical boundaries (shape) and tissue signature (grey scale) using a priori knowledge. The head and brain are divided into four regions and seven different tissue types. Each tissue type c is modeled by a multivariate Gaussian distribution N(mu_c, Sigma_c). Each region is associated with a finite mixture density corresponding to its constituent tissue types. Initial estimates of the tissue parameters {mu_c, Sigma_c}, c = 1,...,7, are obtained from L-means clustering of a single slice used for training. The first algorithmic step uses the EM-algorithm for adjusting the initial tissue parameter estimates to the MR data of new patients. The second step uses a recently developed model of dynamic contours to detect three simply closed nonintersecting curves in the plane, constituting the arachnoid/dura mater boundary of the brain, the border between the subarachnoid space and brain parenchyma, and the inner border of the parenchyma toward the lateral ventricles. The model, which is formulated by energy functions in a Bayesian framework, incorporates a priori knowledge, smoothness constraints, and updated tissue type parameters. Satisfactory maximum a posteriori probability estimates of the closed contour curves defined by the model were found using simulated annealing.
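
To make the tissue-model fitting step concrete, the following is a minimal sketch (mine, not the authors' implementation) of one EM iteration for a mixture of multivariate Gaussians N(mu_c, Sigma_c); X, mus, Sigmas and priors are assumed inputs holding the multispectral voxel data and the current per-class parameters.

```python
# Sketch of one EM iteration for a multivariate Gaussian mixture
# (illustrative only; X is (n_voxels, n_channels), mus/Sigmas/priors are lists
# of current per-class means, covariances and mixing proportions).
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, mus, Sigmas, priors):
    n = X.shape[0]
    C = len(mus)
    # E-step: posterior probability of each tissue class for every voxel
    resp = np.zeros((n, C))
    for c in range(C):
        resp[:, c] = priors[c] * multivariate_normal.pdf(X, mus[c], Sigmas[c])
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate class parameters from the soft assignments
    new_mus, new_Sigmas, new_priors = [], [], []
    for c in range(C):
        w = resp[:, c]
        Nc = w.sum()
        mu = (w[:, None] * X).sum(axis=0) / Nc
        d = X - mu
        Sigma = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / Nc
        new_mus.append(mu)
        new_Sigmas.append(Sigma)
        new_priors.append(Nc / n)
    return new_mus, new_Sigmas, new_priors
```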

   65.   ONYANGO, CM, MARCHANT, JA, and RUFF, BP, "MODEL-BASED LOCATION OF PIGS IN SCENES," COMPUTERS AND ELECTRONICS IN AGRICULTURE, vol. 12, pp. 261-273, 1995.

Abstract:   Point distribution models (PDMs) allow a compact description of an object's shape to be found from a set of example images. In previous work by the second author a method of incorporating grey level information into a PDM was developed. Further work investigated fitting such a composite model to image data consisting of a set of images of a pig viewed from above. This paper describes work on images containing more than one pig. A technique for initialising the model is used which searches the image for ridges in the grey level landscape. These generally lie along the backbone of the animal and provide a good starting point for automatic fitting. By minimising an objective function which measures the difference in grey level and the error in boundary correspondence, an accurate fit of model to data is obtained. Ridge detection initialises the model to within +/-12 pixels of the object. Strict limits on the boundaries of the search space constrain the minimisation process allowing convergence to the true minimum. The resulting fit is good even for objects which are partially obscured. Poor final values of the objective function allow the detection of erroneous results.

   66.   HERAULT, L, and HORAUD, R, "SMOOTH CURVE EXTRACTION BY MEAN-FIELD ANNEALING," ANNALS OF MATHEMATICS AND ARTIFICIAL INTELLIGENCE, vol. 13, pp. 281-300, 1995.

Abstract:   In this paper, we attack the figure-ground discrimination problem from a combinatorial optimization perspective. In general, the solutions proposed in the past solved this problem only partially: either the mathematical model encoding the figure-ground problem was too simple or the optimization methods that were used were not efficient enough or they could not guarantee to find the global minimum of the cost function describing the figure-ground model. The method that we devised and which is described in this paper is tailored around the following contributions. First, we suggest a mathematical model encoding the figure-ground discrimination problem that makes explicit a definition of shape (or figure) based on cocircularity, smoothness, proximity, and contrast. This model consists of building a cost function on the basis of image element interactions. Moreover, this cost function fits the constraints of an interacting spin system, which in turn is a well suited physical model to solve hard combinatorial optimization problems. Second, we suggest a combinatorial optimization method for solving the figure-ground problem, namely mean field annealing which combines the mean field approximation and annealing. Mean field annealing may well be viewed as a deterministic approximation of stochastic methods such as simulated annealing. We describe in detail the theoretical bases of this method, derive a computational model, and provide a practical algorithm. Finally, some experimental results are shown for both synthetic and real images.
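
The mean field annealing procedure described above can be sketched for a generic interacting-spin cost function; the coupling matrix J (which, in the paper, would encode the cocircularity, smoothness, proximity and contrast interactions) and the external field h are placeholders here, and the code is an illustration rather than the authors' algorithm.

```python
# Mean field annealing sketch for E(s) = -1/2 s^T J s - h^T s, with s_i in {-1, +1}.
# The mean spins relax to m = tanh((J m + h)/T) at each temperature T, and T is
# lowered gradually (annealing) until the m_i saturate toward +/-1.
import numpy as np

def mean_field_annealing(J, h, T0=10.0, Tmin=0.01, cooling=0.9, sweeps=50):
    m = np.zeros(len(h))                 # mean-field (expected) spin values
    T = T0
    while T > Tmin:
        for _ in range(sweeps):          # relax the mean field equations at this T
            m = np.tanh((J @ m + h) / T)
        T *= cooling                     # cooling schedule
    return np.sign(m)                    # hard figure/ground decision

# Toy usage with a random symmetric coupling matrix
rng = np.random.default_rng(0)
J = rng.normal(size=(20, 20))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
h = rng.normal(size=20)
labels = mean_field_annealing(J, h)
```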

   67.   YAMAMOTO, K, "OPTIMIZATION APPROACHES TO CONSTRAINT SATISFACTION PROBLEMS IN COMPUTER VISION," IMAGE AND VISION COMPUTING, vol. 13, pp. 335-340, 1995.

Abstract:   This paper describes several new image understanding methods based on parallel operation. There are several constraint satisfaction approaches using an energy minimization. We show how we reconstruct three-dimensional surfaces from contours without elevation data and sparse points of known elevation data using this approach. We also introduce Active Net using this approach, and apply this model to segmentation and binocular stereo matching. We experimented with these energy minimization approaches to solve the problems of early and intermediate levels of computer vision, and show some of the results of our recent research.

   68.   VIEREN, C, CABESTAING, F, and POSTAIRE, JG, "CATCHING MOVING-OBJECTS WITH SNAKES FOR MOTION TRACKING," PATTERN RECOGNITION LETTERS, vol. 16, pp. 679-685, 1995.

Abstract:   We propose an efficient method for tracking several objects moving through a sequence of monocular images against a non-uniform background. Each object entering the scene is intercepted by an active contour model which locks on it as long as it moves in the scene. The procedure does not necessitate an interactive initialization. Some results are presented in the case of real traffic scenes.

   69.   PARVIN, BA, PENG, C, JOHNSTON, W, and MAESTRE, FM, "TRACKING OF TUBULAR MOLECULES FOR SCIENTIFIC APPLICATIONS," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 17, pp. 800-805, 1995.

Abstract:   In this paper, we present a system for detection and tracking of tubular molecules in images. The automatic detection and characterization of the shape, location, and motion of these molecules can enable new laboratory protocols in several scientific disciplines. The uniqueness of the proposed system is twofold: At the macro level, the novelty of the system lies in the integration of object localization and tracking using geometric properties; at the micro level, in the use of high and low level constraints to model the detection and tracking subsystem. The underlying philosophy for object detection is to extract perceptually significant features from the pixel level image, and then use these high level cues to refine the precise boundaries. In the case of tubular molecules, the perceptually significant features are antiparallel line segments or, equivalently, their axis of symmetries. The axis of symmetry infers a coarse description of the object in terms of a bounding polygon. The polygon then provides the necessary boundary condition for the refinement process, which is based on dynamic programming. For tracking the object in a time sequence of images, the refined contour is then projected onto each consecutive frame.

   70.   WESSELINK, W, and VELTKAMP, RC, "INTERACTIVE DESIGN OF CONSTRAINED VARIATIONAL CURVES," COMPUTER AIDED GEOMETRIC DESIGN, vol. 12, pp. 533-546, 1995.

Abstract:   A constrained variational curve is a curve that minimizes some energy functional under certain interpolation constraints. Modeling curves using constrained variational principles is attractive, because the designer is not bothered with the precise representation of the curve (e.g., control points). Until now, the modeling of variational curves has mainly been done by means of constraints. If such a curve of least energy is deformed locally (e.g., by moving its control points), the concept of energy minimization is lost. In this paper we introduce deform operators with built-in energy terms. We have tested our ideas in a prototype system for modeling uniform B-spline curves. Through the use of widgets, the user can interactively modify the range of influence and other properties of the operators. Experiments show that these operators offer a very intuitive way of modeling.

   71.   PEARSON, DE, "DEVELOPMENTS IN MODEL-BASED VIDEO CODING," PROCEEDINGS OF THE IEEE, vol. 83, pp. 892-906, 1995.

Abstract:   This paper reports on current developments in the area of model-based video coding, a technique which shows promise of achieving very large bit-rate reductions for moving images. After an introduction and historical review, advances are summarized in several areas, among them improved 3D tracking of the human head and of facial expressions, the use of muscle-driven model animation with skin synthesis, techniques for luminance compensation, and switched coders. Bit rates ranging from 64 kb/s down to about 1 kb/s have been obtained using head-and-shoulder video sequences. Problems with model-based methods are identified and future developments in both CBR and VBR transmission are discussed.

   72.   SNELL, JW, MERICKEL, MB, ORTEGA, JM, GOBLE, JC, BROOKEMAN, JR, and KASSELL, NF, "MODEL-BASED BOUNDARY ESTIMATION OF COMPLEX OBJECTS USING HIERARCHICAL ACTIVE SURFACE TEMPLATES," PATTERN RECOGNITION, vol. 28, pp. 1599-1609, 1995.

Abstract:   A method for the segmentation of complex, three-dimensional objects using hierarchical active surface templates is presented. The templates consist of one or more active surface models which are specified according to a priori knowledge about the expected shape and location of the desired object. This allows complex objects to be naturally modeled as collections of simple subparts which are geometrically constrained. The template is adaptively deformed by the three-dimensional image data in which it is initialized such that the template boundaries are brought into correspondence with the assumed image object. An external energy field is developed based on a vector distance transform such that the surfaces are deformed according to object shape. The method is demonstrated by the segmentation of the human brain from three-dimensional magnetic resonance images of the head given an a priori model of a normal brain.

   73.   Wong, WH, and Ip, HHS, "Force-driven optimization for correspondence establishment," IMAGE ANALYSIS APPLICATIONS AND COMPUTER GRAPHICS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1024, pp. 43-50, 1995.

Abstract:   Correspondence establishment has been a difficult problem in machine vision. In this paper, we present an optimization technique for the task. The geometric constraints on the solution are formulated as forces, which are combined to provide a clue for mapping between two sets of points such that the geometric constraints are best satisfied. The strong point of this method is that it is easy to integrate several sources of information to obtain a solution while keeping the decision simple, and that it does not suffer from the uncontrollable flexibility of active contour models. We illustrate the method with the problem of establishing correspondence between parallel curves.

   74.   KIMIA, BB, TANNENBAUM, AR, and ZUCKER, SW, "SHAPES, SHOCKS, AND DEFORMATIONS .1. THE COMPONENTS OF 2-DIMENSIONAL SHAPE AND THE REACTION-DIFFUSION SPACE," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 15, pp. 189-224, 1995.

Abstract:   We undertake to develop a general theory of two-dimensional shape by elucidating several principles which any such theory should meet. The principles are organized around two basic intuitions: first, if a boundary were changed only slightly, then, in general, its shape would change only slightly. This leads us to propose an operational theory of shape based on incremental contour deformations. The second intuition is that not all contours are shapes, but rather only those that can enclose ''physical'' material. A theory of contour deformation is derived from these principles, based on abstract conservation principles and Hamilton-Jacobi theory. These principles are based on the work of Sethian (1985a, c), the Osher-Sethian (1988) level set formulation, the classical shock theory of Lax (1971; 1973), as well as curve evolution theory for a curve evolving as a function of the curvature and the relation to geometric smoothing of Gage-Hamilton-Grayson (1986; 1989). The result is a characterization of the computational elements of shape: deformations, parts, bends, and seeds, which show where to place the components of a shape. The theory unifies many of the diverse aspects of shapes, and leads to a space of shapes (the reaction/diffusion space), which places shapes within a neighborhood of ''similar'' ones. Such similarity relationships underlie descriptions suitable for recognition.

   75.   WOLBERG, WH, STREET, WN, HEISEY, DM, and MANGASARIAN, OL, "COMPUTER-DERIVED NUCLEAR GRADE AND BREAST-CANCER PROGNOSIS," ANALYTICAL AND QUANTITATIVE CYTOLOGY AND HISTOLOGY, vol. 17, pp. 257-264, 1995.

Abstract:   Visual assessments of nuclear grade are subjective yet still prognostically important. Now, computer-based analytical techniques can objectively and accurately measure size, shape and texture features, which constitute nuclear grade. The cell samples used in this study were obtained by fine needle aspiration (FNA) during the diagnosis of 187 consecutive patients with invasive breast cancer. Regions of FNA preparations to be analyzed were digitized and displayed on a computer monitor. Nuclei to be analyzed were roughly outlined by an operator using a mouse. Next, the computer generated a ''snake'' that precisely enclosed each designated nucleus. Ten nuclear features were then calculated for each nucleus based on these snakes. These results were analyzed statistically and by an inductive machine learning technique that we developed and call ''recurrence surface approximation'' (RSA). Both the statistical and RSA machine learning analyses demonstrated that computer-derived nuclear features are prognostically more important than are the classic prognostic features, tumor size and lymph node status.

   76.   PANKANTI, S, and JAIN, AK, "INTEGRATING VISION MODULES - STEREO, SHADING, GROUPING, AND LINE LABELING," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 17, pp. 831-842, 1995.

Abstract:   It is generally agreed that individual visual cues are fallible and often ambiguous. This has generated a lot of interest in design of integrated vision systems which are expected to give a reliable performance in practical situations. The design of such systems is challenging since each vision module works under a different and possibly conflicting set of assumptions. We have proposed and implemented a multiresolution system which integrates perceptual organization (grouping), segmentation, stereo, shape from shading, and line labeling modules. We demonstrate the efficacy of our approach using images of several different realistic scenes. The output of the integrated system is shown to be insensitive to the constraints imposed by the individual modules. The numerical accuracy of the recovered depth is assessed in case of synthetically generated data. Finally, we have qualitatively evaluated our approach by reconstructing geons from the depth data obtained from the integrated system. These results indicate that integrated vision systems are likely to produce better reconstruction of the input scene than the individual modules.

   77.   YOUNG, AA, KRAITCHMAN, DL, DOUGHERTY, L, and AXEL, L, "TRACKING AND FINITE-ELEMENT ANALYSIS OF STRIPE DEFORMATION IN MAGNETIC-RESONANCE TAGGING," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 14, pp. 413-421, 1995.

Abstract:   Magnetic resonance tissue tagging allows noninvasive in vivo measurement of soft tissue deformation. Planes of magnetic saturation are created, orthogonal to the imaging plane, which form dark lines (stripes) in the image. We describe a method for tracking stripe motion in the image plane, and show how this information can be incorporated into a finite element model of the underlying deformation. Human heart data were acquired from several imaging planes in different orientations and were combined using a deformable model of the left ventricle wall. Each tracked stripe point provided information on displacement orthogonal to the original tagging plane, i.e., a one-dimensional (1-D) constraint on the motion. Three-dimensional (3-D) motion and deformation was then reconstructed by fitting the model to the data constraints by linear least squares. The average root mean squared (rms) error between tracked stripe points and predicted model locations was 0.47 mm (n = 3100 points). In order to validate this method and quantify the errors involved, we applied it to images of a silicone gel phantom subjected to a known, well-controlled, 3-D deformation. The finite element strains obtained were compared to an analytic model of the deformation known to be accurate in the central axial plane of the phantom. The average rms errors were 6% in both the reconstructed shear strains and 16% in the reconstructed radial normal strain.
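
The reconstruction step above, in which each tracked stripe point contributes one scalar displacement constraint and the deformable model is fitted by linear least squares, amounts to a stacked least-squares problem. The sketch below is my own schematic rendering, not the authors' code: B_list, normals and d are hypothetical stand-ins for the model's displacement basis at each point, the original tagging-plane normals, and the measured 1-D displacements.

```python
# Each tag point i contributes the scalar constraint  n_i . (B_i @ p) = d_i,
# where p are the model parameters and B_i maps p to the 3-D displacement the
# model predicts at that point.  Stacking the rows n_i^T B_i gives a linear
# least-squares problem for p.
import numpy as np

def fit_deformation(B_list, normals, d):
    rows = [n @ B for n, B in zip(normals, B_list)]   # each row: n_i^T B_i
    M = np.vstack(rows)                               # (n_points, n_params)
    p, residuals, rank, _ = np.linalg.lstsq(M, np.asarray(d), rcond=None)
    return p
```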

 
1996

   78.   Nakajima, C, and Yazawa, T, "A recognition method of facility drawings and street maps utilizing the facility management database," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E79D, pp. 555-560, 1996.

Abstract:   This paper proposes a new approach for inputting handwritten Distribution Facility Drawings (DFD) and their maps into a computer automatically by using the Facility Management Database (FMD). Our recognition method makes use of external information for drawing/map recognition. It identifies each electric-pole symbol and support cable symbol on drawings simply by consulting the FMD. Other symbols such as transformers and electric wires can be placed on drawings automatically. In this positioning of graphic symbols, we present an automatic adjustment method of a symbol's position on the latest digital maps. When a contradiction is unsolved due to an inconsistency between the content of the DFD and the FMD, the system requests a manual feedback from the operator. Furthermore, it uses the distribution network of the DFD to recognize the street lines on the maps which aren't computerized. This can drastically reduce the cost for computerizing drawings and maps.

   79.   Cohen, I, and Cohen, LD, "A hybrid hyperquadric model for 2-D and 3-D data fitting," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 63, pp. 527-541, 1996.

Abstract:   We present in this paper a new curve and surface implicit model. This implicit model is based on hyperquadrics and allows a local and global control of the shape and a wide variety of allowable shapes. We define a hybrid hyperquadric model by introducing implicitly some local properties on a global shape model. The advantage of our model is that it describes global and local properties through a unique implicit equation, yielding a representation of the shape by means of its parameters, independently of the chosen numerical resolution. The data fitting is obtained through the minimization of an energy modeling the attraction to data independently of the implicit description of the shape. After studying the geometry of hyperquadrics and how the shape deforms when we slightly modify its implicit equation, we are able to define an algorithm for automatic refining of the fit by adding an adequate term to the implicit representation. This geometric approach makes possible an efficient description of the data points and an automatic tuning of the fit according to the desired accuracy. (C) 1996 Academic Press, Inc.

   80.   Qian, RJ, and Huang, TS, "Optimal edge detection in two-dimensional images," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 5, pp. 1215-1220, 1996.

Abstract:   This paper presents a new edge detection scheme that detects two-dimensional (2-D) edges by a curve-segment-based detection functional guided by the zero-crossing contours of the Laplacian-of-Gaussian (LOG) to approach the true edge locations. The detection functional is shown to be optimal in terms of signal-to-noise ratio (SNR) and edge localization accuracy; it also preserves the nice scaling property held uniquely by the LOG in scale space.
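
For reference, the LoG zero-crossing guidance mentioned above can be computed with a few lines of standard image processing; this is the textbook zero-crossing step only (a sketch, not the paper's optimal curve-segment-based detection functional).

```python
# Laplacian-of-Gaussian filtering followed by zero-crossing detection.
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(image, sigma=2.0):
    response = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(response.shape, dtype=bool)
    # a sign change against the right or lower neighbour marks a zero crossing
    zc[:, :-1] |= np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
    zc[:-1, :] |= np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
    return zc
```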

   81.   Fua, P, and Leclerc, YG, "Taking advantage of image-based and geometry-based constraints to recover 3-D surfaces," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 64, pp. 111-127, 1996.

Abstract:   A unified framework for 3-D shape reconstruction allows us to combine image-based and geometry-based information sources. The image information is akin to stereo and shape-from-shading, while the geometric information may be provided in the form of 3-D points, 3-D features, or 2-D silhouettes. A formal integration framework is critical in recovering complicated surfaces because the information from a single source is often insufficient to provide a unique answer. Our approach to shape recovery is to deform a generic object-centered 3-D representation of the surface so as to minimize an objective function. This objective function is a weighted sum of the contributions of the various information sources. We describe these various terms individually, our weighting scheme, and our optimization method. Finally, we present results on a number of difficult images of real scenes for which a single source of information would have proved insufficient. (C) 1996 Academic Press, Inc.

   82.   Mitiche, A, and Bouthemy, P, "Computation and analysis of image motion: A synopsis of current problems and methods," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 19, pp. 29-55, 1996.

Abstract:   The goal of this paper is to offer a structured synopsis of the problems in image motion computation and analysis, and of the methods proposed, exposing the underlying models and supporting assumptions. A sufficient number of pointers to the literature will be given, concentrating mostly on recent contributions. Emphasis will be on the detection, measurement and segmentation of image motion. Tracking and deformable motion issues will also be addressed. Finally, a number of related questions which could require more investigation will be presented.

   83.   Ge, YR, Fitzpatrick, JM, Dawant, BM, Bao, J, Kessler, RM, and Margolin, RA, "Accurate localization of cortical convolutions in MR brain images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 15, pp. 418-428, 1996.

Abstract:   Analysis of brain images often requires accurate localization of cortical convolutions. Although magnetic resonance (MR) brain images offer sufficient resolution for identifying convolutions in theory, the nature of tomographic imaging prevents clear definition of convolutions in individual slices. Existing methods for solving this problem rely on heuristic adaptation of brain atlases created from a small number of individuals. These methods do not usually provide high accuracy because of large biological variations among individuals. We propose to localize convolutions by linking realistic visualizations of the cortical surface with the original image volume. We have developed a system so that a user can quickly localize key convolutions in several visualizations of an entire brain surface. Because of the links between the visualizations and the original volume, these convolutions are simultaneously localized in the original image slices. In the process of our development, we have implemented a fast and easy method for visualizing cortical surfaces in MR images, thereby making our scheme usable in practical applications.

   84.   Thompson, P, and Toga, AW, "A surface-based technique for warping three-dimensional images of the brain," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 15, pp. 402-417, 1996.

Abstract:   We have devised, implemented, and tested a fast, spatially accurate technique for calculating the high-dimensional deformation field relating the brain anatomies of an arbitrary pair of subjects. The resulting three-dimensional (3-D) deformation map can be used to quantify anatomic differences between subjects or within the same subject over time and to transfer functional information between subjects or integrate that information on a single anatomic template. The new procedure is based on developmental processes responsible for variations in normal human anatomy and is applicable to 3-D brain images in general, regardless of modality. Hybrid surface models known as Chen surfaces (based on superquadrics and spherical harmonics) are used to efficiently initialize 3-D active surfaces, and these then extract from both scans the developmentally fundamental surfaces of the ventricles and cortex. The construction of extremely complex surface deformation maps on the internal cortex is made easier by building a generic surface structure to model it. Connected systems of parametric meshes model several deep sulci whose trajectories represent critical functional boundaries. These sulci are sufficiently extended inside the brain to reflect subtle and distributed variations in neuroanatomy between subjects. The algorithm then calculates the high-dimensional volumetric warp (typically with 384^2 x 256 x 3, or approximately 0.1 billion, degrees of freedom) deforming one 3-D scan into structural correspondence with the other. Integral distortion functions are used to extend the deformation field required to elastically transform nested surfaces to their counterparts in the target scan. The algorithm's accuracy is tested by warping 3-D magnetic resonance imaging (MRI) volumes from normal subjects and Alzheimer's patients, and by warping full-color 1024^3 digital cryosection volumes of the human head onto MRI volumes. Applications are discussed, including the transfer of multisubject 3-D functional, vascular, and histologic maps onto a single anatomic template; the mapping of 3-D brain atlases onto the scans of new subjects; and the rapid detection, quantification, and mapping of local shape changes in 3-D medical images in disease and during normal or abnormal growth and development.

   85.   Cohen, LD, "Auxiliary variables and two-step iterative algorithms in computer vision problems," JOURNAL OF MATHEMATICAL IMAGING AND VISION, vol. 6, pp. 59-83, 1996.

Abstract:   We present a new mathematical formulation of some curve and surface reconstruction algorithms by the introduction of auxiliary variables. For deformable models and templates, the extraction of a shape is obtained through the minimization of an energy composed of an internal regularization term (not necessary in the case of parametric models) and an external attraction potential. Two-step iterative algorithms have often been used where, at each iteration, the model is first locally deformed according to the potential data attraction and then globally smoothed (or fitted in the parametric case). We show how these approaches can be interpreted as the introduction of auxiliary variables and the minimization of a two-variable energy. The first variable corresponds to the original model we are looking for, while the second variable represents an auxiliary shape close to the first one. This makes it possible to transform an implicit data constraint defined by a non-convex potential into an explicit convex reconstruction problem. This approach is much simpler since each iteration is composed of two simple-to-solve steps. Our formulation permits a more precise setting of parameters in the iterative scheme to ensure convergence to a minimum. We show some mathematical properties and results on this new auxiliary problem, in particular when the potential is a function of the distance to the closest feature point. We then illustrate our approach for some deformable models and templates.
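
Schematically (my notation, not the paper's), the auxiliary-variable construction described above replaces an energy R(v) + P(v), where P is a non-convex data potential, by a two-variable energy that is minimised by alternating two simple steps:

\[
E(v, w) \;=\; R(v) \;+\; \frac{1}{2\mu}\,\lVert v - w\rVert^2 \;+\; \tilde{P}(w),
\]
\[
w^{k+1} = \arg\min_w \; \frac{1}{2\mu}\lVert v^k - w\rVert^2 + \tilde{P}(w),
\qquad
v^{k+1} = \arg\min_v \; R(v) + \frac{1}{2\mu}\lVert v - w^{k+1}\rVert^2 .
\]

When the potential is a function of the distance to the closest feature point, the w-step simply moves each sample of the model toward its closest data point, and the v-step is a convex smoothing (or parametric fitting) problem, which matches the deform-then-smooth iterations mentioned in the abstract.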

   86.   Malladi, R, Sethian, JA, and Vemuri, BC, "A fast level set based algorithm for topology-independent shape modeling," JOURNAL OF MATHEMATICAL IMAGING AND VISION, vol. 6, pp. 269-289, 1996.

Abstract:   Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This paper presents a new approach to shape modeling which retains some of the attractive features of existing methods, and overcomes some of their limitations. Our technique can be applied to model arbitrarily complex shapes, which include shapes with significant protrusions, and to situations where no a priori assumption about the object's topology is made. A single instance of our model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. This method is based on the ideas developed by Osher and Sethian to model propagating solid/liquid interfaces with curvature-dependent speeds. The interface (front) is a closed, nonintersecting, hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. It is moved by solving a ''Hamilton-Jacobi'' type equation written for a function in which the interface is a particular level set. A speed term synthesized from the image is used to stop the interface in the vicinity of object boundaries. The resulting equation of motion is solved by employing entropy-satisfying upwind finite difference schemes. We also introduce a new algorithm for rapid advancement of the front using what we call a narrow-band update scheme. The efficacy of the scheme is demonstrated with numerical experiments on low contrast medical images.
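
A minimal sketch of the kind of level set evolution described above follows (illustrative only; it uses a plain first-order upwind update over the full grid rather than the paper's narrow-band scheme, and an assumed speed term derived from the image gradient):

```python
# Level set evolution  phi_t + F |grad phi| = 0  with an image-based speed
# F = 1 / (1 + |grad I|^2) that slows the front near object boundaries.
import numpy as np

def evolve_level_set(phi, image, n_iter=200, dt=0.4):
    gy, gx = np.gradient(image.astype(float))
    F = 1.0 / (1.0 + gx**2 + gy**2)              # speed term from the image
    for _ in range(n_iter):
        dxm = phi - np.roll(phi, 1, axis=1)      # backward differences
        dxp = np.roll(phi, -1, axis=1) - phi     # forward differences
        dym = phi - np.roll(phi, 1, axis=0)
        dyp = np.roll(phi, -1, axis=0) - phi
        grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                            np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        phi = phi - dt * F * grad_plus           # upwind update; front expands
    return phi                                   # zero level set approximates the boundary
```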

   87.   Zhang, SQ, Douglas, MA, Yaroslavsky, L, Summers, RM, Dilsizian, V, Fananapazir, L, and Bacharach, SL, "A Fourier based algorithm for tracking SPAMM tags in gated magnetic resonance cardiac images," MEDICAL PHYSICS, vol. 23, pp. 1359-1369, 1996.

Abstract:   A method is described for automatically tracking spatial modulation of magnetization tag lines on gated cardiac images. The method differs from previously reported methods in that it uses Fourier based spatial frequency and phase information to separately track horizontal and vertical tag lines. Use of global information from the frequency spectrum of an entire set of tag lines was hypothesized to result in a robust algorithm with decreased sensitivity to noise. The method was validated in several ways: first, actual tagged cardiac images at end diastole were deformed known amounts, and the algorithm's predictions compared to the known deformations. Second, tagged, gated images of the thigh muscle (assumed to have similar signal to noise characteristics as cardiac images, but to not deform with time) were created. Again the algorithmic predictions could be compared to the known (zero magnitude) deformations and to thigh images which had been artificially deformed. Finally, actual cardiac tagged images were acquired, and comparisons made between manual, visual determinations of tag line locations and those predicted by the algorithm. At 0.5 T, the mean bias of the method was <0.34 mm even at large deformations and at late (noisy) times. The standard deviation of the method, estimated from the tagged thigh images, was <0.7 mm even at late times. The method may be expected to have even lower error at higher field strengths.
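
The core idea, reading the motion of a periodic tag pattern as a phase shift at the tag spatial frequency, can be sketched in one dimension as follows (a simplified illustration of Fourier phase-based displacement estimation, not the authors' algorithm):

```python
# Estimate the shift of a near-sinusoidal tag pattern from the phase change
# at its dominant spatial frequency.
import numpy as np

def tag_displacement(profile_ref, profile_def):
    n = len(profile_ref)
    F_ref = np.fft.rfft(profile_ref - np.mean(profile_ref))
    F_def = np.fft.rfft(profile_def - np.mean(profile_def))
    k = np.argmax(np.abs(F_ref)[1:]) + 1            # dominant tag frequency bin
    dphi = np.angle(F_def[k]) - np.angle(F_ref[k])
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi     # wrap phase difference to (-pi, pi]
    return -dphi * n / (2 * np.pi * k)              # displacement in samples

# Synthetic check: a tag pattern shifted by 3 samples
x = np.arange(256)
ref = np.cos(2 * np.pi * x / 16)
shifted = np.cos(2 * np.pi * (x - 3) / 16)
print(tag_displacement(ref, shifted))               # approximately 3.0
```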

   88.   Eviatar, H, and Somorjai, RL, "A fast, simple active contour algorithm for biomedical images," PATTERN RECOGNITION LETTERS, vol. 17, pp. 969-974, 1996.

Abstract:   A new method for the application of active contours to biomedical images is described. The new approach, which involves extensive modification of the internal energy function and a different method of minimising the energy functional, yields rapid, excellent fits to MR images.

   89.   Chalana, V, Winter, TC, Cyr, DR, Haynor, DR, and Kim, YM, "Automatic fetal head measurements from sonographic images," ACADEMIC RADIOLOGY, vol. 3, pp. 628-635, 1996.

Abstract:   Rationale and Objectives. We designed an image processing technique to automatically measure the biparietal diameter (BPD) and head circumference (HC) from prenatal sonograms. We evaluated the performance of the algorithm by comparing the resulting measurements with those made by experienced sonographers. Methods. Thirty-five digitized sonograms of the fetal head were obtained during routine imaging. The BPD and HC were automatically computed by detecting the inner and outer boundaries of the fetal skull using the computer vision technique known as the ''active contour model.'' Six experienced sonographers also measured the BPD and HC on these images. Results. The algorithm failed to locate the boundaries in two of the 35 cases. For the remaining cases, the mean absolute difference between the automated measurements and the average of the six observers was 1.4% for BPD and 2.9% for HC. The correlations were .999 for the BPD and .994 for the HC. The computer's measurements were no more different from the six observers' measurements than the observers' measurements were from one another. Conclusion. The tested algorithm effectively and accurately measures BPD and HC automatically. We are currently in the process of integrating this algorithm into an ultrasound machine.

   90.   Yang, ZY, "Nonlinear superposition of receptive fields and phase transitions," PHYSICS LETTERS A, vol. 219, pp. 277-281, 1996.

Abstract:   We present a principle of nonlinear superposition of receptive fields. Changes of connection weight, applied field or the distances between the input centers can lead to a new phase with all neurons encoding certain shapes excited. This process is a kind of phase transition and can be used for information processing.

   91.   Dow, AI, Shafer, SA, Kirkwood, JM, Mascari, RA, and Waggoner, AS, "Automatic multiparameter fluorescence imaging for determining lymphocyte phenotype and activation status in melanoma tissue sections," CYTOMETRY, vol. 25, pp. 71-81, 1996.

Abstract:   A system has been developed that combines multiparameter fluorescence imaging and computer vision techniques to provide automatic phenotyping of multiple cell types in a single tissue section. This system identifies both the nuclear and cytoplasmic boundary of each cell. A routine based on the watershed algorithm has been developed to segment an image of Hoechst-stained nuclei with an accuracy of greater than 85%. Deformable splines initially positioned at the nuclear boundaries are applied to images of fluorescently labelled cell-surface antigens. The splines lock onto the peak fluorescence signal surrounding the cell, providing an estimate of the cell boundary. From measurements acquired at this boundary, each cell is classified according to antigen expression. The system has been piloted in biopsies from melanoma patients participating in a clinical trial of the antibody R24. Thin tissue sections have been stained with Hoechst and three different fluorescent antibodies to antigens that permit the typing and evaluation of activity of T-cells. Changes in the infiltrates evaluated by multiparameter imaging were consistent with results obtained by immunoperoxidase analysis. The multiparameter fluorescent technique enables simultaneous determination of multiple cell subsets and can provide the spatial relationships of each cell type within the tissue. (C) 1996 Wiley-Liss, Inc.

   92.   Bulpitt, AJ, and Efford, ND, "An efficient 3D deformable model with a self-optimising mesh," IMAGE AND VISION COMPUTING, vol. 14, pp. 573-580, 1996.

Abstract:   Deformable models are a powerful and popular tool for image segmentation, but in 3D imaging applications the high computational cost of fitting such models can be a problem. A further drawback is the need to select the initial size and position of a model in such a way that it is close to the desired solution. This task may require particular expertise on the part of the operator, and, furthermore, may be difficult to accomplish in three dimensions without the use of sophisticated visualisation techniques. This article describes a 3D deformable model that uses an adaptive mesh to increase computational efficiency and accuracy. The model employs a distance transform in order to overcome some of the problems caused by inaccurate initialisation. The performance of the model is illustrated by its application to the task of segmentation of 3D MR images of the human head and hand. A quantitative analysis of the performance is also provided using a synthetic test image.

   93.   Ip, HHS, and Yu, RPK, "Recursive splitting of active contours in multiple clump segmentation," ELECTRONICS LETTERS, vol. 32, pp. 1564-1566, 1996.

Abstract:   A new technique is presented for clump decomposition based on the recursive splitting of active contours. The approach does not require prior knowledge of the number of objects or the sizes of the objects to be segmented.

   94.   Nesi, P, and Magnolfi, R, "Tracking and synthesizing facial motions with dynamic contours," REAL-TIME IMAGING, vol. 2, pp. 67-79, 1996.

Abstract:   Many researchers have studied techniques related to the analysis and synthesis of human heads under motion with face deformations. These techniques can be used for defining low-rate image compression algorithms (model-based image coding), cinema technologies, videophones, as well as for applications of virtual reality, etc. Such techniques need real-time performance and a strong integration between the mechanisms of motion estimation and those of rendering and animation of the 3D synthetic head/face. In this paper, a complete and integrated system for tracking and synthesizing facial motions in real-time with low-cost architectures is presented. Facial deformation curves represented as spatiotemporal B-splines are used for tracking in order to model the main facial features. In addition, the system proposed is capable of adapting a generic 3D wire-frame model of a head/face to the face that must be tracked; therefore, the simulations of the face deformations are produced by using a realistic patterned face. (C) 1996 Academic Press Limited

   95.   Beylot, P, Gingins, P, Kalra, P, Thalmann, NM, Maurel, W, Thalmann, D, and Fasel, J, "3D interactive topological modeling using visible human dataset," COMPUTER GRAPHICS FORUM, vol. 15, pp. C33-&, 1996.

Abstract:   Availability of the Visible Human Dataset (VHD) has provided numerous possibilities for its exploitation in both medical applications and 3D animation. In this paper, we present our interactive tools which enable extraction of surfaces for different organs, including bones, muscles, fascia, and skin, from the VHD. The reconstructed surfaces are then used for defining the inter-relationship of organs, a process we refer to as topological modeling. A database is constructed which encapsulates structural, topological, mechanical and other relevant information about organs. A 3D interactive tool enables the building and editing of this database. Such a database can later be used for different applications in fields such as medicine, sports, education, and entertainment.

   96.   Berger, MO, Chevrier, C, and Simon, G, "Compositing computer and video image sequences: Robust algorithms for the reconstruction of the camera parameters," COMPUTER GRAPHICS FORUM, vol. 15, pp. C23-&, 1996.

Abstract:   Augmented reality shows great promise in fields where a simulation in situ would be impossible or too expensive. When mixing synthetic and real objects in the same animated sequence, we must be sure that the geometrical coherence as well as the photometrical coherence is ensured. One major challenge is to compute the camera viewpoint with sufficient accuracy to ensure a satisfactory composition. We especially address this point in this paper using computer vision techniques and robust statistical methods. We prove that such techniques make it possible to compute almost automatically the viewpoint for long video sequences even for bad quality images in outdoor environments. Significant results on the lighting simulation of the bridges of Paris are shown.

   97.   Mason, DC, and Davenport, IJ, "Accurate and efficient determination of the shoreline in ERS-1 SAR images," IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 34, pp. 1243-1253, 1996.

Abstract:   Extraction of the shoreline in SAR images is a difficult task to perform using simple image processing operations such as grey-value thresholding, due to the presence of speckle and because the signal returned from the sea surface may be similar to that from the land. A semiautomatic method for detecting the shoreline accurately and efficiently in ERS-1 SAR images is presented. This is aimed primarily at a particular application, namely the construction of a digital elevation model of an intertidal zone using SAR images and hydrodynamic model output, but could be carried over to other applications. A coarse-fine resolution processing approach is employed, in which sea regions are first detected as regions of low edge density in a low resolution image, then image areas near the shoreline are subjected to more elaborate processing at high resolution using an active contour model. Over 90% of the shoreline detected by the automatic delineation process appears visually correct.

   98.   Lejeune, A, and Ferrie, FP, "Finding the parts of objects in range images," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 64, pp. 230-247, 1996.

Abstract:   A key problem in the interpretation of visual form is the partitioning of a shape into components that correspond to the parts of an object. This paper presents a method for partitioning a set of surface estimates obtained with a laser range finding system into subsets corresponding to such parts. Parts are defined implicitly by means of a feature set that identifies putative part boundaries that have been computed by external means. The strategy employed makes use of two complementary representations for surfaces: one that describes local structures in terms of differential properties (e.g., edges, lines, contours) and the other that represents the surface as a collection of smooth patches at different scales. It is shown that by enforcing a consistent interpretation between these two representations, it is possible to derive a partitioning algorithm that is both efficient and robust. Examples of its performance on a set of range images are presented. (C) 1996 Academic Press, Inc.

   99.   Cramer, C, Gelenbe, E, and Bakircioglu, H, "Low bit-rate video compression with neural networks and temporal subsampling," PROCEEDINGS OF THE IEEE, vol. 84, pp. 1529-1543, 1996.

Abstract:   Image and video compression is becoming an increasingly important area of investigation, with numerous applications to video conferencing, interactive education, home entertainment, and potential applications to earth observation, medical imaging, digital libraries, and many other areas. In this paper we describe a novel neural network technique for video compression, using a ''point-process'' type neural network model we have developed [1]-[4] which is closer to biophysical reality and is mathematically much more tractable than standard models. Our algorithm uses an adaptive approach based upon the users' desired video quality Q, and achieves compression ratios of up to 500:1 for moving gray-scale images and of over 1000:1 for full-color video sequences with the addition of the standard 4:1:1 spatial subsampling ratios in the chrominance images. The signal-to-noise ratio (SNR) obtained varies with the compression level and ranges from 29 dB to over 34 dB. Our method is computationally fast, so that compression and decompression could possibly be performed in real-time software. Compression is performed using a combination of motion detection, neural networks, and temporal subsampling of frames. A set of neural networks is used to adaptively select the desired compression of each picture block as a function of the reconstruction quality. The motion detection process separates out regions of the frame which need to be retransmitted. Temporal subsampling of frames, along with the reconstruction technique, leads to the high compression ratios reported in this paper.

  100.   Tannenbaum, A, "Three snippets of curve evolution theory in computer vision," MATHEMATICAL AND COMPUTER MODELLING, vol. 24, pp. 103-119, 1996.

Abstract:   In this paper, we discuss some uses of curve evolution theory for problems in computer vision. We concentrate on three problem areas: shape theory, active contours, and geometric invariant scale spaces. The solutions to these key problems will all be based on flows which are obtained in a completely natural manner from geometric and physical principles.

  101.   Olstad, B, and Torp, AH, "Encoding of a priori information in active contour models," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 18, pp. 863-872, 1996.

Abstract:   The theory of active contours models the problem of contour recovery as an energy minimization process. The computational solutions based on dynamic programming require that the energy associated with a contour candidate can be decomposed into an integral of local energy contributions. In this paper we propose a grammatical framework that can model different local energy models and a set of allowable transitions between these models. The grammatical encodings are utilized to represent a priori knowledge about the shape of the object and the associated signatures in the underlying images. The variability encountered in numerical experiments is addressed with the energy minimization procedure which is embedded in the grammatical framework. We propose an algorithmic solution that combines a nondeterministic version of the Knuth-Morris-Pratt algorithm for string matching with a time-delayed discrete dynamic programming algorithm for energy minimization. The numerical experiments address practical problems encountered in contour recovery such as noise robustness and occlusion.
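
For orientation, the decomposability requirement mentioned above takes the familiar form used in dynamic-programming snakes (a generic statement, not the grammatical encoding of the paper):

    E(v_1,\dots,v_n) \;=\; \sum_{i=1}^{n} E_i(v_{i-1}, v_i, v_{i+1}),
    \qquad
    S_i(v_i, v_{i+1}) \;=\; \min_{v_{i-1}} \bigl[\, S_{i-1}(v_{i-1}, v_i) + E_i(v_{i-1}, v_i, v_{i+1}) \,\bigr],

with the minimizing contour recovered by backtracking through the tables S_i.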

  102.   Staib, LH, and Duncan, JS, "Model-based deformable surface finding for medical images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 15, pp. 720-731, 1996.

Abstract:   This paper describes a new global shape parameterization for smoothly deformable three-dimensional (3-D) objects, such as those found in biomedical images, whose diversity and irregularity make them difficult to represent in terms of fixed features or parts. This representation is used for geometric surface matching to 3-D medical image data, such as from magnetic resonance imaging (MRI). The parameterization decomposes the surface into sinusoidal basis functions. Four types of surfaces are modeled: tori, open surfaces, closed surfaces and tubes. This parameterization allows a wide variety of smooth surfaces to be described with a small number of parameters. Extrinsic model-based information is incorporated by introducing prior probabilities on the parameters. Surface finding is formulated as an optimization problem. Results of the method applied to synthetic images and 3-D medical images of the heart and brain are presented.

  103.   Wang, G, Snyder, DL, OSullivan, JA, and Vannier, MW, "Iterative deblurring for CT metal artifact reduction," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 15, pp. 657-664, 1996.

Abstract:   Iterative deblurring methods using the expectation maximization (EM) formulation and the algebraic reconstruction technique (ART), respectively, are adapted for metal artifact reduction in medical computed tomography (CT). In experiments with synthetic noise-free and additive noisy projection data of dental phantoms, it is found that both simultaneous iterative algorithms produce superior image quality as compared to filtered backprojection after linearly fitting projection gaps. Furthermore, the EM-type algorithm converges faster than the ART-type algorithm in terms of either the I-divergence or Euclidean distance between ideal and reprojected data in our simulation. Also, for a given iteration number, the EM-type deblurring method produces better image clarity but stronger noise than the ART-type reconstruction. The computational complexity of EM- and ART-based iterative deblurring is essentially the same, dominated by reprojection and backprojection. Relevant practical and theoretical issues are discussed.
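
For readers unfamiliar with EM-type reconstruction, the classical emission ML-EM iteration, of which such deblurring schemes are adaptations, has the multiplicative form (generic notation, not the exact update used in the paper):

    \lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, \lambda_{j'}^{(k)}},

where y_i are the measured projection data, a_{ij} is the system (projection) matrix, and \lambda^{(k)} is the current image estimate.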

  104.   Hutchinson, S, Hager, GD, and Corke, PI, "A tutorial on visual servo control," IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, vol. 12, pp. 651-670, 1996.

Abstract:   This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.

  105.   Kichenassamy, S, Kumar, A, Olver, P, Tannenbaum, A, and Yezzi, A, "Conformal curvature flows: From phase transitions to active vision," ARCHIVE FOR RATIONAL MECHANICS AND ANALYSIS, vol. 134, pp. 275-301, 1996.

Abstract:   In this paper, we analyze geometric active contour models from a curve evolution point of view and propose some modifications based on gradient flows relative to certain new feature-based Riemannian metrics. This leads to a novel edge-detection paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus an edge-seeking curve is attracted very naturally and efficiently to the desired feature. Comparison with the Allen-Cahn model clarifies some of the choices made in these models, and suggests inhomogeneous models which may in return be useful in phase transitions. We also consider some 3-dimensional active surface models based on these ideas. The justification of this model rests on the careful study of the viscosity solutions of evolution equations derived from a level-set approach.
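
A representative member of this family of feature-weighted gradient flows, written in level-set form (a standard formulation rather than necessarily the paper's exact model), is

    \frac{\partial \Phi}{\partial t} \;=\; \phi(x,y)\, \kappa\, \|\nabla \Phi\| \;+\; \nabla \phi \cdot \nabla \Phi,
    \qquad
    \phi(x,y) \;=\; \frac{1}{1 + \|\nabla (G_\sigma * I)(x,y)\|^{2}},

where \kappa is the curvature of the level sets of \Phi and the conformal factor \phi becomes small near strong image gradients, so the evolving curve is attracted to, and halted at, object edges.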

  106.   Denzler, J, and Niemann, H, "3D data driven prediction for active contour models based on geometric bounding volumes," PATTERN RECOGNITION LETTERS, vol. 17, pp. 1171-1178, 1996.

Abstract:   Active contour models have proven to be a promising approach for data driven object tracking without knowledge about the problem domain and the object. Problems arise in case of fast moving objects and in natural scenes with heterogeneous background. In these cases, a prediction step is an essential part of the tracking mechanism. In this paper we describe a new approach for modelling the contour of moving objects in the 3D world. The key point is the description of moving objects by a simplified geometric model, the so-called bounding volume. The contour of moving objects is predicted by estimating the movement and the shape of the bounding volume in the 3D world and by projecting its contour to the image plane. Stochastic optimization algorithms are used to estimate shape parameters of the bounding volume. The 2D contour of the bounding volume is used to initialize the active contour, which then extracts the contour of the moving object. Thus, an update of the motion and model parameters of the bounding volume is possible. No task specific knowledge and no a priori knowledge about the object is necessary. We will show that in the case of convex polyhedral bounding volumes, this method can be applied to real-time closed-loop object tracking on standard Unix workstations. Furthermore, we present experiments which prove that the robustness for tracking moving objects in front of a heterogeneous background can be improved.

  107.   Couvignou, PA, Papanikolopoulos, NP, Sullivan, M, and Khosla, PK, "The use of active deformable models in model-based robotic visual servoing," JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, vol. 17, pp. 195-221, 1996.

Abstract:   This paper presents a new approach for visual tracking and servoing in robotics. We introduce deformable active models as a powerful means for tracking a rigid or semi-rigid (possibly partially occluded) object in movement within the manipulator's workspace. Deformable models imitate, in real-time, the dynamic behavior of elastic structures. These computer-generated models are designed to capture the silhouette of objects with well-defined boundaries, in terms of image gradient. By means of an eye-in-hand robot arm configuration, the desired motion of the end-effector is computed with the objective of keeping the target's position and shape invariant with respect to the camera frame. Optimal estimation and control techniques (LQG regulator) have been successfully implemented in order to deal with noisy measurements provided by our vision sensor. Experimental results are presented for the tracking of a rigid or semi-rigid object. The experiments performed in a real-time environment show the effectiveness and robustness of the proposed method for servoing tasks based on visual feedback.

  108.   Yan, RH, Tokuda, N, and Miyamichi, J, "A model-based active landmarks tracking method," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E79D, pp. 1477-1482, 1996.

Abstract:   Unlike the time-consuming contour tracking method of snakes [5], which requires a considerable number of iterated computations before contours are successfully tracked down, we present a faster and more accurate model-based ''landmarks'' tracking method in which a single iteration of dynamic programming is sufficient to obtain a local minimum of an integral measure of the elastic and image energy functionals. The key lies in choosing a relatively small number of salient ''landmarks'', or features of objects, rather than their contours as the target of tracking within the image structure. The landmarks, comprising singular points along the model contours, are tracked down within the image structure inside restricted search areas of 41 x 41 pixels whose respective locations in the image are dictated by their locations in the model. A Manhattan distance and the template corner detection function of Singh and Shneier [7] are used as elastic energy and image energy, respectively, in the algorithm. A first approximation to the image contour is obtained in our method by applying the thin-plate spline transformation of Bookstein [2] using these landmarks as fixed points of the transformation, which is capable of preserving the global shape information of the model, including the relative configuration of landmarks and consequently the surrounding contours of the model, in the image structure. The actual image contours are further tracked down by applying an active edge tracker using simplified line search segments, so that individual differences persisting between the mapped model contour and the image are substantially eliminated. We have applied our method tentatively to portraits of a class album to demonstrate the effectiveness of the method. Our experiments convincingly show that, using only about 11 feature points, our method provides not only much improved computational complexity, requiring only 0.94 sec. of CPU time on an SGI Indigo2, but also more accurate shape representations than those obtained by the snakes methods. The method is powerful in problem domains where the model-based approach is applicable, possibly allowing real-time processing because the most time-consuming step, corner template evaluation, can easily be implemented in parallel processing firmware.

  109.   Huang, TS, Stroming, JW, Kang, Y, and Lopez, R, "Advances in very low bit rate video coding in North America," IEICE TRANSACTIONS ON COMMUNICATIONS, vol. E79B, pp. 1425-1433, 1996.

Abstract:   Research in very low bit-rate coding has made significant advancements in the past few years. Most recently, the introduction of the MPEG-4 proposal has motivated a wide variety of approaches aimed at achieving a new level of video compression. In this paper we review progress in VLBV categorized into three main areas: (1) waveform coding, (2) 2D content-based coding, and (3) model-based coding. Where appropriate, we also describe proposals to the MPEG-4 committee in each of these areas.

  110.   Germain, O, and Refregier, P, "Optimal snake-based segmentation of a random luminance target on a spatially disjoint background," OPTICS LETTERS, vol. 21, pp. 1845-1847, 1996.

Abstract:   We describe a segmentation processor that is optimal for tracking the shape of a target with random white Gaussian intensity appearing on a random white Gaussian spatially disjoint background. This algorithm, based on an active contour model (snakes), consists of correlations of binary references with preprocessed versions of the scene image. This result can provide a practical method to adapt the reference image to correlation techniques. (C) 1996 Optical Society of America

  111.   Wang, M, Evans, J, Hassebrook, L, and Knapp, C, "A multistage, optimal active contour model," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 5, pp. 1586-1591, 1996.

Abstract:   Energy-minimizing active contour models or snakes can be used in many applications such as edge detection, motion tracking, image matching, computer vision, and three-dimensional (3-D) reconstruction. We present a novel snake that is superior both in accuracy and convergence speed over previous snake algorithms. High performance is achieved by using spline representation and dividing the energy-minimization process into multiple stages. The first stage is designed to optimize the convergence speed in order to allow the snake to quickly approach the minimum-energy state. The second stage is devoted to snake refinement and to local minimization of energy, thereby driving the snake to a quasiminimum-energy state. The third stage uses the Bellman optimality principle to fine-tune the snake to the global minimum-energy state. This three-stage scheme is optimized for both accuracy and speed.

  112.   Malladi, R, and Sethian, JA, "A unified approach to noise removal, image enhancement, and shape recovery," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 5, pp. 1554-1568, 1996.

Abstract:   We present a unified approach to noise removal, image enhancement, and shape recovery in images. The underlying approach relies on the level set formulation of curve and surface motion, which leads to a class of PDE-based algorithms. Beginning with an image, the first stage of this approach removes noise and enhances the image by evolving the image under flow controlled by min/max curvature and by the mean curvature. This stage is applicable to both salt-and-pepper grey-scale noise and full-image continuous noise present in black and white images, grey-scale images, texture images, and color images. The noise removal/enhancement schemes applied in this stage contain only one enhancement parameter, which in most cases is automatically chosen. The other key advantage of our approach is that a stopping criterion is automatically picked from the image; continued application of the scheme produces no further change. The second stage of our approach is the shape recovery of a desired object; we again exploit the level set approach to evolve an initial curve/surface toward the desired boundary, driven by an image-dependent speed function that automatically stops at the desired boundary.
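
Both stages rest on viewing the evolving curves/surfaces as level sets of a function \psi moving under a speed law of the generic form (textbook notation, not the paper's specific speed functions):

    \frac{\partial \psi}{\partial t} + F\, \|\nabla \psi\| = 0,
    \qquad
    \kappa \;=\; \nabla \cdot \frac{\nabla \psi}{\|\nabla \psi\|},

where the speed F depends on the local curvature \kappa in the noise-removal/enhancement stage and on an image-dependent stopping term in the shape-recovery stage.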

  113.   Smith, CE, Richards, CA, Brandt, SA, and Papanikolopoulos, NP, "Visual tracking for intelligent vehicle-highway systems," IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, vol. 45, pp. 744-759, 1996.

Abstract:   The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we propose robust detection and tracking techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the Controlled Active Vision framework [15] can be utilized to provide a visual tracking modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given traffic situation.

  114.   Nastar, C, and Ayache, N, "Frequency-based nonrigid motion analysis: Application to four dimensional medical images," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 18, pp. 1067-1079, 1996.

Abstract:   We present a method for nonrigid motion analysis in time sequences of volume images (4D data). In this method, nonrigid motion of the deforming object contour is dynamically approximated by a physically-based deformable surface. In order to reduce the number of parameters describing the deformation, we make use of a modal analysis which provides a spatial smoothing of the surface. The deformation spectrum, which outlines the main excited modes, can be efficiently used for deformation comparison. Fourier analysis on time signals of the main deformation spectrum components provides a temporal smoothing of the data. Thus a complex nonrigid deformation is described by only a few parameters: the main excited modes and the main Fourier harmonics. Therefore, 4D data can be analyzed in a very concise manner. The power and robustness of the approach is illustrated by various results on medical data. We believe that our method has important applications in automatic diagnosis of heart diseases and in motion compression.
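
The modal reduction referred to here is the standard one for physically-based deformable models (generic notation; the particular matrices and truncation are those of the cited work and are not reproduced):

    M \ddot{u} + C \dot{u} + K u = f,
    \qquad
    K \phi_i = \omega_i^2 M \phi_i,
    \qquad
    u(t) \;\approx\; \sum_{i=1}^{m} a_i(t)\, \phi_i \quad (m \ll \text{number of nodes}),

so the deformation is summarized by the amplitudes a_i(t) of the lowest-frequency modes (the deformation spectrum), to which a temporal Fourier analysis can then be applied.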

  115.   Lee, JD, "Genetic approach to select wavelet features for contour extraction in medical ultrasonic imaging," ELECTRONICS LETTERS, vol. 32, pp. 2137-2138, 1996.

Abstract:   An efficient and robust approach, based on the wavelet transform (WT), is proposed for contour extraction in medical ultrasonic images having low signal-to-noise ratio. Furthermore, the best wavelet features for profile analysis are estimated by genetic algorithms (GAs) without manual operation. No image preprocessing is needed, so computation is fast. Experimental results confirming the proposed algorithm are also included.

  116.   Laprie, Y, and Berger, MO, "Cooperation of regularization and speech heuristics to control automatic formant tracking," SPEECH COMMUNICATION, vol. 19, pp. 255-269, 1996.

Abstract:   This paper describes an automatic formant tracking algorithm incorporating speech knowledge. It operates in two phases. The first detects and interprets spectrogram peak lines in terms of formants. The second uses an image contour extraction method to regularise the peak lines thus detected. Speech knowledge served as acoustic constraints to guide the interpretation of peak lines. The proposed algorithm has the advantage of providing formant trajectories which, in addition to being sufficiently close to the spectral peaks of the respective formants, are sufficiently smooth to allow an accurate evaluation of formant transitions. The results obtained highlight the interest of the proposed approach.

  117.   Maurer, CR, Aboutanos, GB, Dawant, BM, Maciunas, RJ, and Fitzpatrick, JM, "Registration of 3-D images using weighted geometrical features," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 15, pp. 836-849, 1996.

Abstract:   In this paper, we present a weighted geometrical feature (WGF) registration algorithm. Its efficacy is demonstrated by combining points and a surface. The technique is an extension of Besl and McKay's iterative closest point (ICP) algorithm. We use the WGF algorithm to register X-ray computed tomography (CT) and T2-weighted magnetic resonance (MR) volume head images acquired from eleven patients that underwent craniotomies in a neurosurgical clinical trial. Each patient had five external markers attached to transcutaneous posts screwed into the outer table of the skull. We define registration error as the distance between positions of corresponding markers that are not used for registration. The CT and MR images are registered using fiducial points (marker positions) only, a surface only, and various weighted combinations of points and a surface. The CT surface is derived from contours corresponding to the inner surface of the skull. The MR surface is derived from contours corresponding to the cerebrospinal fluid (CSF)-dura interface. Registration using points and a surface is found to be significantly more accurate than registration using only points or a surface.

  118.   Davatzikos, C, and Bryan, RN, "Using a deformable surface model to obtain a shape representation of the cortex," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 15, pp. 785-795, 1996.

Abstract:   This paper examines the problem of obtaining a mathematical representation of the outer cortex of the human brain, which is a key problem in several applications, including morphological analysis of the brain, and spatial normalization and registration of brain images. A parameterization of the outer cortex is first obtained using a deformable surface algorithm which, motivated by the structure of the cortex, is constructed to find the central layer of thick surfaces. Based on this parameterization, a hierarchical representation of the outer cortical structure is proposed through its depth map and its curvature maps at various scales. Various experiments on magnetic resonance data are presented.

  119.   Toklu, C, Erdem, AT, Sezan, MI, and Tekalp, AM, "Tracking motion and intensity variations using hierarchical 2-D mesh modeling for synthetic object transfiguration," GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 58, pp. 553-573, 1996.

Abstract:   We propose a method for tracking the motion and intensity variations of a 2-D mildly deformable image object using a hierarchical 2-D mesh model. The proposed method is applied to synthetic object transfiguration, namely, replacing an object in a real video clip with another synthetic or natural object via digital postprocessing. Successful transfiguration requires accurate tracking of both motion and intensity (contrast and brightness) variations of the object-to-be-replaced so that the replacement object can be rendered in exactly the same way from a single still picture. The proposed method is capable of tracking image regions corresponding to scene objects with nonplanar and/or mildly deforming surfaces, accounting for intensity variations, and is shown to be effective with real image sequences. (C) 1996 Academic Press, Inc.

  120.   Demongeot, J, and Leitner, F, "Compact set valued flows .1. Applications in medical imaging," COMPTES RENDUS DE L ACADEMIE DES SCIENCES SERIE II FASCICULE B-MECANIQUE PHYSIQUE CHIMIE ASTRONOMIE, vol. 323, pp. 747-754, 1996.

Abstract:   Compact set valued dynamical systems have a large field of applications in image processing and morphogenesis modelling. In section 2 of this paper, we will define the notion of compact set valued flow. In section 3, we will propose some examples of potential flows used in 3D-image contouring. In section 4, we will introduce the notion of mixed potential-Hamiltonian flows, which could be used in 4D-image contouring and which generalize the 2D potential-Hamiltonian contouring method. Finally, in section 5 we will give a simple example of compact set valued iterations, and in section 6 an example of distribution tube iterations.

  121.   Kita, Y, "Elastic-model driven analysis of several views of a deformable cylindrical object," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 18, pp. 1150-1162, 1996.

Abstract:   This paper proposes a method to extract regions of a deformable object from several views of it while finding the correspondence of the object among the views. The method has been developed to analyze X-ray images of a stomach. Owing to the physical (not physiological) deformation of the stomach and changes of the camera angle, the shapes of the stomach regions are fairly different among the images. In order to collectively analyze these images, we use an elastic stomach model. Firstly, our method builds an elastic stomach model based on the stomach shape in one image. Considering each photographing condition, the deformation of the stomach in each image is simulated with the elastic model. Referring to the predicted contour which is obtained by projecting the deformed model from the camera angle of each image, the contour is robustly extracted from noisy images in a model-driven way. Since the predicted contour registered in each image corresponds with the elastic model, the position of each stomach part in the image is simultaneously obtained; corresponding parts can be found among the images through the model. Experimental results of analyzing several types of stomach X-ray images are shown and discussed.

  122.   Lee, S, Wolberg, G, Chwa, KY, and Shin, SY, "Image metamorphosis with scattered feature constraints," IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, vol. 2, pp. 337-354, 1996.

Abstract:   This paper describes an image metamorphosis technique to handle scattered feature constraints specified with points, polylines, and splines. Solutions to the following three problems are presented: feature specification, warp generation, and transition control. We demonstrate the use of snakes to reduce the burden of feature specification. Next, we propose the use of multilevel free-form deformations (MFFD) to compute C-2-continuous and one-to-one mapping functions among the specified features. The resulting technique, based on B-spline approximation, is simpler and faster than previous warp generation methods. Furthermore, it produces smooth image transformations without undesirable ripples and foldovers. Finally, we simplify the MFFD algorithm to derive transition functions to control geometry and color blending. Implementation details are furnished and comparisons among various metamorphosis techniques are presented.

  123.   Irrgang, R, and Irrgang, H, "An intelligent snake growing algorithm for fuzzy shape detection," EXPERT SYSTEMS WITH APPLICATIONS, vol. 11, pp. 531-536, 1996.

Abstract:   A novel, robust algorithm for connectivity detection in the shape recognition process has been developed. The algorithm is a simplified version of the snake or active contour technique for object boundary detection. Access to a case base and constraint system is available at all stages of the process which increases the probability that semantically meaningful objects will be detected. The technique has proved valuable for a number of applications from the aerospace industry including shape recognition and fatigue crack detection. The algorithm has also been used to generate an improved version of the STIRS technique for shape recognition. Copyright (C) 1996 Elsevier Science Ltd

  124.   Sato, K, Sugawara, K, Narita, Y, and Namura, I, "Consideration of the method of image diagnosis with respect to frontal lobe atrophy," IEEE TRANSACTIONS ON NUCLEAR SCIENCE, vol. 43, pp. 3230-3239, 1996.

Abstract:   This paper proposes a segmentation method for quantitative image diagnosis as a means of realizing an objective diagnosis of frontal lobe atrophy. From the data obtained on the grade of membership, the fractal dimensions of the cerebral tissue [cerebrospinal fluid (CSF), gray matter, and white matter] and the contours are estimated. The mutual relationship between the degree of atrophy and the fractal dimension has been analyzed based on the estimated fractal dimensions. Using a sample of 42 male and female cases, ranging in age from their 50s to 70s, it has been concluded that frontal lobe atrophy can be quantified by regarding it as an expansion of the CSF region on magnetic resonance imaging (MRI) of the brain. Furthermore, when the process of frontal lobe atrophy is separated into early and advanced stages, the volumetric change of CSF and white matter in the frontal lobe displays meaningful differences between the two stages, demonstrating that the fractal dimension of CSF rises with the progress of atrophy. Moreover, an interpolation method for three-dimensional (3-D) shape reconstruction of the region of diagnostic interest is proposed, and 3-D shape visualization, with respect to the degree and form of atrophy, is performed on the basis of the estimated fractal dimension of the segmented cerebral tissue.

  125.   Atkins, MS, and Mackiewich, BT, "Automatic segmentation of the brain in MRI," VISUALIZATION IN BIOMEDICAL COMPUTING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1131, pp. 241-246, 1996.

Abstract:   This paper describes a robust fully automatic method for segmenting the brain from head MR images, which works even in the presence of RF inhomogeneities. It has been successful in segmenting the brain in every slice from head images acquired from three different MRI scanners, using different resolution images and different echo sequences. The three-stage integrated method employs image processing techniques based on anisotropic filters, ''snakes'' contouring techniques, and a-priori knowledge. First the background noise is removed leaving a head mask, then a rough outline of the brain is found, and finally the rough brain outline is refined to a final mask.

  126.   Kelemen, A, Szekely, G, Reist, HW, and Gerig, G, "Automatic segmentation of cell nuclei from confocal laser scanning microscopy images," VISUALIZATION IN BIOMEDICAL COMPUTING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1131, pp. 193-202, 1996.

Abstract:   In this paper we present a method for the fully automatic segmentation of cell nuclei from 3D confocal laser microscopy images. The method is based on the combination of previously proposed techniques which have been refined for the requirements of this task. A 3D extension of a wave propagation technique applied to gradient magnitude images allows us a precise initialization of elastically deformable Fourier models and therefore a fully automatic image analysis. The shape parameters are transformed into invariant descriptors and provide the basis of a statistical analysis of cell nucleus shapes. This analysis will be carried out in order to determine average intersection lengths between cell nuclei and single particle tracks of ionizing radiation. This allows a quantification of absorbed energy on living cells leading to a better understanding of the biological significance of exposure to radiation in low doses.

  127.   McAuliffe, MJ, Eberly, D, Fritsch, DS, Chaney, EL, and Pizer, SM, "Scale-space boundary evolution initialized by cores," VISUALIZATION IN BIOMEDICAL COMPUTING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1131, pp. 173-182, 1996.

Abstract:   A novel interactive segmentation method has been developed which uses estimated boundaries, generated from cores, to initialize a scale-space boundary evolution process in greyscale medical images. Presented is an important addition to core extraction methodology that improves core generation for objects that are in the presence of interfering objects. The boundary at the scale of the core (BASOC) and its associated width information, both derived from the core, are used to initialize the second stage of the segmentation process. In this automatic refinement stage, the BASOC is allowed to evolve in a spline-snake-like manner that makes use of object-relevant width information to make robust measurements of local edge positions.

  128.   Masutani, Y, Masamune, K, and Dohi, T, "Region-growing based feature extraction algorithm for tree-like objects," VISUALIZATION IN BIOMEDICAL COMPUTING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1131, pp. 161-171, 1996.

Abstract:   To overcome limitations of the conventional 'toward-axis' voxel-removal way of thinning operations, a new 'along-axis' style of algorithm was developed for topological information acquisition of tree-like objects such as vascular shapes, based on a region-growing technique. The theory of mathematical morphology is extended to closed space inside binary shapes, and the 'closed space dilation' operation is introduced as a generalized form of region growing. Using synthetic and clinical 3D images, its superior features, such as parametric controllability, were shown.

  129.   Rogowska, J, Batchelder, K, Gazelle, GS, Halpern, EF, Connor, W, and Wolf, GL, "Evaluation of selected two-dimensional segmentation techniques for computed tomography quantitation of lymph nodes," INVESTIGATIVE RADIOLOGY, vol. 31, pp. 138-145, 1996.

Abstract:   RATIONALE AND OBJECTIVES. As contrast agents that selectively target normal lymph nodes are undergoing development and evaluation, it has become important to accurately and reproducibly determine nodal boundaries in order to study the agents and determine such values as lymph node area or mean nodal contrast concentration. This study was performed to evaluate the accuracy of different two-dimensional computer segmentation methods, tested on acrylic phantoms constructed to imitate the appearance of lymph nodes surrounded by fat. METHODS. Five segmentation techniques (manual tracing, semiautomatic local criteria threshold selection, Sobel/watershed technique, interactive deformable contour algorithm, and thresholding) were evaluated using phantoms. Subsequently, the first three methods were applied to images of enhanced lymph nodes in rabbits. RESULTS. Minimum errors in phantom area measurement (<5%) and interoperator variation (<5%) were seen with the Sobel/watershed technique and the interactive deformable contour algorithm. These two techniques were significantly better than thresholding and semiautomated thresholding based on local properties. CONCLUSION. Methods based on Sobel edge detection offer more objective tools than thresholding methods for segmenting objects similar to lymph nodes in computed tomography images. Both methods, the Sobel/watershed and interactive deformable contour algorithms, are fast and have simple user interfaces.

  130.   Carlsson, S, "Projectively invariant decomposition and recognition of planar shapes," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 17, pp. 193-209, 1996.

Abstract:   An algorithm is presented for computing a decomposition of planar shapes into convex subparts represented by ellipses. The method is invariant to projective transformations of the shape, and thus the conic primitives can be used for matching and definition of invariants in the same way as points and lines. The method works for arbitrary planar shapes admitting at least four distinct tangents and it is based on finding ellipses with four points of contact to the given shape. The cross ratio computed from the four points on the ellipse can then be used as a projectively invariant index. It is demonstrated that a given shape has a unique parameter-free decomposition into a finite set of ellipses with unit cross ratio. For a given shape, each pair of ellipses can be used to compute two independent projective invariants. The set of invariants computed for each ellipse pair can be used as indexes to a hash table from which model hypotheses can be generated. Examples of shape decomposition and recognition are given for synthetic shapes and shapes extracted from grey level images of real objects using edge detection.
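
For reference, the projective invariant used for indexing is the cross ratio; for four collinear points A, B, C, D it is (standard definition, independent of this paper)

    (A, B;\, C, D) \;=\; \frac{\overline{AC}\cdot\overline{BD}}{\overline{AD}\cdot\overline{BC}},

with signed distances \overline{XY}; the cross ratio of four points on a conic is defined analogously through the pencil of lines from any fifth point of the conic, and is likewise preserved by projective transformations.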

  131.   Worring, M, Smeulders, AWM, Staib, LH, and Duncan, JS, "Parameterized feasible boundaries in gradient vector fields," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 63, pp. 135-144, 1996.

Abstract:   Segmentation of (noisy) images containing a complex ensemble of objects is difficult to achieve on the basis of local image information only. It is advantageous to attack the problem of object boundary extraction by a model-based segmentation procedure. Segmentation is achieved by tuning the parameters of the geometrical model in such a way that the boundary template locates and describes the object in the image in an optimal way. The optimality of the solution is based on an objective function taking into account image information as well as the shape of the template. Objective functions in the literature are mainly based on the gradient magnitude and a measure describing the smoothness of the template. In this contribution, we propose a new image objective function based on directional gradient information derived from Gaussian smoothed derivatives of the image data. The proposed method is designed to accurately locate an object boundary even in the case of a conflicting object positioned close to the object of interest. We further introduce a new smoothness objective to ensure the physical feasibility of the contour. The method is evaluated on artificial data. Results on real medical images show that the method is very effective in accurately locating object boundaries in very complex images. (C) 1996 Academic Press, Inc.

  132.   Alpert, NM, Berdichevsky, D, Levin, Z, Morris, ED, and Fischman, AJ, "Improved methods for image registration," NEUROIMAGE, vol. 3, pp. 10-18, 1996.

Abstract:   We report a system for PET-MRI registration that is improved or optimized in several areas: (1) Automatic scalp/brain segmentation replaces manual drawing operations, (2) a new fast and accurate method of image registration, (3) visual assessment of registration quality is enhanced by composite imaging methods (i.e., fusion) and (4) the entire procedure is embedded in a commercially available scientific visualization package, thereby providing a consistent graphical user interface. The segmentation algorithm was tested on 17 MRI data sets and was successful in all cases. Accuracy of image registration was equal to that of the Woods algorithm, but 10 times faster for PET-PET and 4 times faster for PET-MRI. The image fusion method allows detection of misalignments on the order of 2-3 mm. These results demonstrate an integrated system for intermodality image registration, which is important because the procedure can be performed by technicians with no anatomic knowledge and reduces the required time from hours to about 15 min on a modern computer workstation. (C) 1996 Academic Press, Inc.

  133.   Tai, YC, Lin, KP, Dahlbom, M, and Hoffman, EJ, "A hybrid attenuation correction technique to compensate for lung density in 3-D total body PET," IEEE TRANSACTIONS ON NUCLEAR SCIENCE, vol. 43, pp. 323-330, 1996.

Abstract:   A hybrid attenuation correction technique (ACT) is under development for F-18-FDG total body positron emission tomography (PET). With a short transmission scan of the thorax, acquired any time within a few days of the imaging session, this technique can correct for attenuation in the entire body. Segmentation, registration, and active contour finding techniques are applied to both emission and short transmission images to locate and map the major attenuating structures in the body. This technique eliminates the need for the patient to remain still from the start of the transmission scan to the end of the emission scan, without the added noise of simultaneous or post-emission transmission scan measurements. The results of volunteer studies are comparable to standard measured ACT, both visually and quantitatively. The efficient use of scanner time and improved patient comfort make this technique particularly suitable for clinical imaging.

  134.   Chuang, GCH, and Kuo, CCJ, "Wavelet descriptor of planar curves: Theory and applications," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 5, pp. 56-70, 1996.

Abstract:   By using the wavelet transform, we develop a hierarchical planar curve descriptor that decomposes a curve into components of different scales so that the coarsest scale components carry the global approximation information while the finer scale components contain the local detailed information. We show that the wavelet descriptor has many desirable properties such as multiresolution representation, invariance, uniqueness, stability, and spatial localization. A deformable wavelet descriptor is also proposed by interpreting the wavelet coefficients as random variables. The applications of the wavelet descriptor to character recognition and model-based contour extraction from low SNR images are examined. Numerical experiments are performed to illustrate the performance of the wavelet descriptor.
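
A minimal sketch of the underlying idea, i.e., a multiresolution wavelet decomposition of a closed contour's coordinate signals, is given below. It assumes the third-party PyWavelets package; the wavelet family, boundary mode and decomposition depth are arbitrary choices, and this is not the normalized, invariant descriptor defined in the paper.

    import numpy as np
    import pywt  # PyWavelets, assumed available

    def wavelet_contour_coefficients(contour_xy, wavelet="db2", level=3):
        """Decompose the x(t) and y(t) coordinate signals of a closed planar
        contour into multiresolution wavelet coefficients.  The coarse
        approximation coefficients capture the global shape, the detail
        coefficients the local structure."""
        pts = np.asarray(contour_xy, dtype=float)
        coeffs_x = pywt.wavedec(pts[:, 0], wavelet, mode="periodic", level=level)
        coeffs_y = pywt.wavedec(pts[:, 1], wavelet, mode="periodic", level=level)
        return coeffs_x, coeffs_y

    # Example on a made-up circular contour sampled at 128 points:
    t = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    contour = np.stack([np.cos(t), np.sin(t)], axis=1)
    cx, cy = wavelet_contour_coefficients(contour)
    print([c.shape for c in cx])  # coarse-to-fine coefficient lengths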

  135.   Krishnamachari, S, and Chellappa, R, "Delineating buildings by grouping lines with MRFs," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 5, pp. 164-168, 1996.

Abstract:   Traditionally, Markov random field (MRF) models have been used in low-level image analysis. This correspondence presents an MRF-based scheme to perform object delineation. The proposed edge-based approach involves extracting straight lines from the edge map of an image. Then, an MRF model is used to group these lines to delineate buildings in aerial images.

  136.   Davatzikos, C, Vaillant, M, Resnick, SM, Prince, JL, Letovsky, S, and Bryan, RN, "A computerized approach for morphological analysis of the corpus callosum," JOURNAL OF COMPUTER ASSISTED TOMOGRAPHY, vol. 20, pp. 88-97, 1996.

Abstract:   Objective: A new technique for analyzing the morphology of the corpus callosum is presented, and it is applied to a group of elderly subjects. Materials and Methods: The proposed approach normalizes subject data into the Talairach space using an elastic deformation transformation. The properties of this transformation are used as a quantitative description of the callosal shape with respect to the Talairach atlas, which is treated as a standard. In particular, a deformation function measures the enlargement/shrinkage associated with this elastic deformation. Intersubject comparisons are made by comparing deformation functions. Results: This technique was applied to eight male and eight female subjects. Based on the average deformation functions of each group, the posterior region of the female corpus callosum was found to be larger than its corresponding region in the males. The average callosal shape of each group was also found, demonstrating visually the callosal shape differences between the two groups in this sample. Conclusion: The proposed methodology utilizes the full resolution of the data, rather than relying on global descriptions such as area measurements. The application of this methodology to an elderly group indicated sex-related differences in the callosal shape and size.

  137.   Yang, QS, and Marchant, JA, "Accurate blemish detection with active contour models," COMPUTERS AND ELECTRONICS IN AGRICULTURE, vol. 14, pp. 77-89, 1996.

Abstract:   This paper presents a novel image analysis scheme for accurate detection of fruit blemishes. The detection procedure consists of two steps: initial segmentation and refinement. In the first step, blemishes are coarsely segmented out with a flooding algorithm and in the second step an active contour model, i.e. a snake algorithm, is applied to refine the segmentation so that the localization and size accuracy of detected blemishes is improved. The concept and the formulation of the snake algorithm are briefly introduced and then the refinement procedure is described. The initial tests for sample apple images have shown very promising results.

  138.   Fishman, EK, Kuszyk, BS, Heath, DG, Gao, LM, and Cabral, B, "Surgical planning for liver resection," COMPUTER, vol. 29, pp. 64-&, 1996.

Abstract:   Effective surgical planning requires 3D images that show tumor location relative to key blood vessels. This research uses volume rendering of CT data to meet these requirements.

  139.   Kent, JT, Mardia, KV, and Walder, AN, "Conditional cyclic Markov random fields," ADVANCES IN APPLIED PROBABILITY, vol. 28, pp. 1-12, 1996.

Abstract:   Grenander et al. (1991) proposed a conditional cyclic Gaussian Markov random field model for the edges of a closed outline in the plane. In this paper the model is recast as an improper cyclic Gaussian Markov random field for the vertices. The limiting behaviour of this model when the vertices become closely spaced is also described and in particular its relationship with the theory of 'snakes' (Kass et al. 1987) is established. Applications are given in Grenander et al. (1991), Mardia et al. (1991) and Kent et al. (1992).
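
The link to snakes noted above rests on the fact that the discretized internal energy of an active contour is quadratic in the vertex positions, so that exp(-E_int) is, up to normalization, a cyclic Gaussian Markov random field prior on the vertices (written here in the usual generic form, with indices taken modulo n for a closed outline):

    E_{\mathrm{int}}(v_1,\dots,v_n) \;=\; \sum_{i=1}^{n} \Bigl( \alpha\, \|v_i - v_{i-1}\|^2 \;+\; \beta\, \|v_{i+1} - 2 v_i + v_{i-1}\|^2 \Bigr).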

  140.   Taubin, G, and Ronfard, R, "Implicit simplicial models for adaptive curve reconstruction," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 18, pp. 321-325, 1996.

Abstract:   Parametric deformable models have been extensively and very successfully used for reconstructing free-form curves and surfaces, and for tracking nonrigid deformations, but they require previous knowledge of the topological type of the data, and good initial curve or surface estimates. With deformable models, it is also computationally expensive to check for and to prevent self-intersections while tracking deformations. The Implicit Simplicial Models that we introduce in this paper are implicit curves and surfaces defined by piece-wise linear functions. This representation allows for local deformations, control of the topological type, and prevention of self-intersections during deformations. As a first application, we also describe in this paper an algorithm for two-dimensional curve reconstruction from unorganized sets of data points. The topology, the number of connected components, and the geometry of the data are all estimated using an adaptive space subdivision approach. The main four components of the algorithm are topology estimation, curve fitting, adaptive space subdivision, and mesh relaxation.

  141.   Jolly, MPD, Lakshmanan, S, and Jain, AK, "Vehicle segmentation and classification using deformable templates," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 18, pp. 293-308, 1996.

Abstract:   This paper proposes a segmentation algorithm using deformable template models to segment a vehicle of interest both from the stationary complex background and other moving vehicles in an image sequence. We define a polygonal template to characterize a general model of a vehicle and derive a prior probability density function to constrain the template to be deformed within a set of allowed shapes. We propose a likelihood probability density function which combines motion information and edge directionality to ensure that the deformable template is contained within the moving areas in the image and its boundary coincides with strong edges with the same orientation in the image. The segmentation problem is reduced to a minimization problem and solved by the Metropolis algorithm. The system was successfully tested on 405 image sequences containing multiple moving vehicles on a highway.

  142.   Jain, AK, Zhong, Y, and Lakshmanan, S, "Object matching using deformable templates," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 18, pp. 267-278, 1996.

Abstract:   We propose a general object localization and retrieval scheme based on object shape using deformable templates. Prior knowledge of an object shape is described by a prototype template which consists of the representative contour/edges, and a set of probabilistic deformation transformations on the template. A Bayesian scheme, which is based on this prior knowledge and the edge information in the input image, is employed to find a match between the deformed template and objects in the image. Computational efficiency is achieved via a coarse-to-fine implementation of the matching algorithm. Our method has been applied to retrieve objects with a variety of shapes from images with complex background. The proposed scheme is invariant to location, rotation, and moderate scale changes of the template.
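
Schematically, the matching criterion amounts to a MAP estimate over the deformation and pose parameters (generic notation; the specific prior and likelihood are those developed in the paper):

    (\hat{\xi}, \hat{\theta}) \;=\; \arg\max_{\xi,\,\theta}\; p(\xi)\; p\bigl(I \mid T_{\theta}(\xi)\bigr),

where \xi parameterizes the probabilistic deformations applied to the prototype template, \theta the pose (location, rotation, scale), p(\xi) the deformation prior, and the likelihood measures agreement between the deformed template T_\theta(\xi) and edge information in the image I.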

  143.   Carstensen, JM, "An active lattice model in a Bayesian framework," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 63, pp. 380-387, 1996.

Abstract:   A Markov random field is used as a structural model of a deformable rectangular lattice. When used as a template prior in a Bayesian framework, this model is powerful for making inferences about lattice structures in images. The model assigns maximum probability to the perfect regular lattice by penalizing deviations in alignment and lattice node distance. The Markov random field represents prior knowledge about the lattice structure, and through an observation model that incorporates the visual appearance of the nodes, we can simulate realizations from the posterior distribution. A maximum a posteriori (MAP) estimate, found by simulated annealing, is used as the reconstructed lattice. The model was developed as a central part of an algorithm for automatic analysis of genetic experiments, positioned in a lattice structure by a robot. The algorithm has been successfully applied to many images, and it seems to be a fast, accurate, and robust solution to the problem. Several possible extensions of the model are described. (C) 1996 Academic Press, Inc.
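
A bare-bones sketch of MAP estimation by Metropolis-style simulated annealing, the optimization step mentioned above, is given below; the energy function, proposal move and cooling schedule are placeholders, not those of the cited lattice model.

    import math
    import random

    def anneal_map(initial, energy, propose,
                   t0=1.0, t_min=1e-3, cooling=0.99, steps_per_t=200):
        """Generic Metropolis annealing: `initial` is a starting configuration,
        `energy(x)` returns its posterior energy (lower = more probable) and
        `propose(x)` returns a slightly perturbed copy of x."""
        current, e_cur = initial, energy(initial)
        best, e_best = current, e_cur
        t = t0
        while t > t_min:
            for _ in range(steps_per_t):
                cand = propose(current)
                e_cand = energy(cand)
                # Always accept downhill moves; accept uphill moves with
                # the Boltzmann probability exp(-dE / t).
                if e_cand < e_cur or random.random() < math.exp(-(e_cand - e_cur) / t):
                    current, e_cur = cand, e_cand
                    if e_cur < e_best:
                        best, e_best = current, e_cur
            t *= cooling  # geometric cooling schedule
        return best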

  144.   Robert, L, "Camera calibration without feature extraction," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 63, pp. 314-325, 1996.

Abstract:   This paper presents an original approach to the problem of camera calibration using a calibration pattern. It consists of directly searching for the camera parameters that best project three-dimensional points of a calibration pattern onto intensity edges in an image of this pattern, without explicitly extracting the edges. Based on a characterization of image edges as maxima of the intensity gradient or zero-crossings of the Laplacian, we express the whole calibration process as a one-stage optimization problem. A classical iterative optimization technique is used in order to solve it. Our approach is different from the classical calibration techniques which involve two consecutive stages: extraction of image features and computation of the camera parameters. Thus, our approach is easier to implement and to use, less dependent on the type of calibration pattern that is used, and more robust. First, we describe the details of the approach. Then, we show some experiments in which two implementations of our approach and two classical two-stage approaches are compared. Tests on real and synthetic data allow us to characterize our approach in terms of convergence, sensitivity to the initial conditions, reliability, and accuracy. (C) 1996 Academic Press, Inc.
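The one-stage formulation can be illustrated with a small sketch under assumed conventions (a simple pinhole model with a centered principal point and small-angle rotations, which is not the paper's parameterization): the cost projects the known 3D pattern points and sums the image gradient magnitude at the projections, so minimizing it with any general-purpose optimizer performs calibration without explicit edge extraction.

    import numpy as np

    def bilinear_sample(img, x, y):
        """Bilinearly sample img (H x W) at float coordinates (x, y)."""
        h, w = img.shape
        x = np.clip(x, 0, w - 1.001)
        y = np.clip(y, 0, h - 1.001)
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        dx, dy = x - x0, y - y0
        return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
                + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

    def calibration_cost(params, pts3d, grad_mag):
        """Negative sum of gradient magnitude at the projections of pts3d.

        params = (f, tx, ty, tz, rx, ry, rz): focal length, translation and small
        rotation angles of a pinhole camera (an assumed parameterization).
        Points are assumed to lie in front of the camera (positive depth).
        """
        f, tx, ty, tz, rx, ry, rz = params
        # First-order (small-angle) rotation matrix, kept simple for the sketch.
        R = np.array([[1.0, -rz, ry], [rz, 1.0, -rx], [-ry, rx, 1.0]])
        cam = pts3d @ R.T + np.array([tx, ty, tz])
        u = f * cam[:, 0] / cam[:, 2] + grad_mag.shape[1] / 2
        v = f * cam[:, 1] / cam[:, 2] + grad_mag.shape[0] / 2
        return -np.sum(bilinear_sample(grad_mag, u, v))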

  145.   Trier, OD, Jain, AK, and Taxt, T, "Feature extraction methods for character recognition - A survey," PATTERN RECOGNITION, vol. 29, pp. 641-662, 1996.

Abstract:   This paper presents an overview of feature extraction methods for off-line recognition of segmented (isolated) characters. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representations of the characters, such as solid binary characters, character contours, skeletons (thinned characters) or gray-level subimages of each individual character. The feature extraction methods are discussed in terms of invariance properties, reconstructability and expected distortions and variability of the characters. The problem of choosing the appropriate feature extraction method for a given application is also discussed. When a few promising feature extraction methods have been identified, they need to be evaluated experimentally to find the best method for the given application.

  146.   Lam, KM, and Yan, H, "Locating and extracting the eye in human face images," PATTERN RECOGNITION, vol. 29, pp. 771-779, 1996.

Abstract:   Facial feature extraction is an important step in automated visual interpretation and human face recognition. Among the facial features, the eye plays the most important part in the recognition process. The deformable template can be used in extracting the eye boundaries. However, the weaknesses of the deformable template are that the processing time is lengthy and that its success relies on the initial position of the template. In this paper, the head boundary is first located in a head-and-shoulders image. The approximate positions of the eyes are estimated by means of average anthropometric measures. Corners, the salient features of the eyes, are detected and used to set the initial parameters of the eye templates. The corner detection scheme introduced in this paper can provide accurate information about the corners. Based on the corner positions, we can accurately locate the templates in relation to the eye images and greatly reduce the processing time for the templates. The performance of the deformable template is assessed with and without using the information on corner positions. Experiments show that a saving in execution time of about 40% on average and a better eye boundary representation can be achieved by using the corner information.

  147.   Kraitchman, DL, Wilke, N, Hexeberg, E, JeroschHerold, M, Wang, Y, Parrish, TB, Chang, CN, Zhang, Y, Bache, RJ, and Axel, L, "Myocardial perfusion and function in dogs with moderate coronary stenosis," MAGNETIC RESONANCE IN MEDICINE, vol. 35, pp. 771-780, 1996.

Abstract:   MRI studies of first-pass contrast enhancement with polylysine-Gd-DTPA and myocardial tagging using spatial modulation of magnetization (SPAMM) were performed to assess the feasibility of a combined regional myocardial blood flow and 2D deformation exam. Instrumented closed-chest dogs were imaged at a baseline control state (Cntl) followed by two interventions: moderate coronary stenosis (St) achieved by partial occlusion of the left anterior descending (LAD) and moderate coronary stenosis with dobutamine loading (StD). Hypoperfusion of the anterior region (ANT) of the myocardium (LAD distribution) relative to the posterior wall (POS) based on the upslope of the signal intensity time curve from the contrast-enhanced MR images was demonstrated only with dobutamine loading (ANT:POS Cntl = 1.077 +/- 0.15 versus ANT:POS StD = 0.477 +/- 0.11, P < 0.03) and was confirmed with radiolabeled microspheres measurements (ANT:POS Cntl = 1.18 +/- 0.2 ml/min/g versus ANT:POS StD = 0.44 +/- 0.1 ml/min/g; P < 0.002). Significant changes in regional myocardial shortening were only seen in the StD state (P < 0.02); the anterior region showed impaired myocardial shortening with dobutamine loading (P = NS), whereas the nonaffected POS region showed a marked increase in shortening when compared with Cntl (Cntl = 0.964 +/- 0.02 versus StD = 0.884 +/- 0.03; P < 0.001). These results demonstrate that an integrated quantitative assessment of regional myocardial function and semiquantitative assessment of myocardial blood flow can be performed noninvasively with ultrafast MRI.

  148.   Helterbrand, JD, "One-pixel-wide closed boundary identification," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 5, pp. 780-783, 1996.

Abstract:   An appropriate space of one-pixel-wide closed (OPWC) boundary configurations is explicitly defined and an automatic algorithm to obtain OPWC contour estimates from a segmented image is presented. The motivation is to obtain a reasonable starting estimate for a Markov chain Monte Carlo-based (McMC-based) boundary optimization algorithm.

  149.   Yuen, PC, Wong, YY, and Tong, CS, "Contour detection using enhanced snakes algorithm," ELECTRONICS LETTERS, vol. 32, pp. 202-204, 1996.

Abstract:   An enhanced snakes algorithm (ESA) for detecting object contours is designed and developed. In the ESA, a novel split and merge technique is added to the original snakes model to enhance the model to support the detection of concave object contours. A set of handtools is selected to evaluate the proposed algorithm, and the results are encouraging.

  150.   Chalana, V, Linker, DT, Haynor, DR, and Kim, YM, "A multiple active contour model for cardiac boundary detection on echocardiographic sequences," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 15, pp. 290-298, 1996.

Abstract:   Tracing of left-ventricular epicardial and endocardial borders on echocardiographic sequences is essential for quantification of cardiac function. We designed a method based on an extension of active contour models to detect both epicardial and endocardial borders on short-axis cardiac sequences spanning the entire cardiac cycle. We validated the results by comparing the computer-generated boundaries to the boundaries manually outlined by four expert observers on 44 clinical data sets. The mean boundary distance between the computer-generated boundaries and the manually outlined boundaries was 2.80 mm (sigma = 1.28 mm) for the epicardium and 3.61 mm (sigma = 1.68 mm) for the endocardium. These distances were comparable to interobserver distances, which had a mean of 3.79 mm (sigma = 1.53 mm) for epicardial borders and 2.67 mm (sigma = 0.88 mm) for endocardial borders. The correlation coefficient between the areas enclosed by the computer-generated boundaries and the average manually outlined boundaries was 0.95 for epicardium and 0.91 for endocardium. The algorithm is fairly insensitive to the choice of the initial curve. Thus, we have developed an effective and robust algorithm to extract left-ventricular boundaries from echocardiographic sequences.

  151.   Hoch, M, and Litwinowicz, PC, "A semi-automatic system for edge tracking with snakes," VISUAL COMPUTER, vol. 12, pp. 75-83, 1996.

Abstract:   Active contour models, or ''snakes,'' developed in (Kass et al. 1988), use a simple physical model to track edges in image sequences. Snakes as originally defined, however, tend to shrink, stretch and slide back and forth in unwanted ways along a tracked edge and are also confused by multiple edges, always grabbing the nearest one. In this paper a semi-automatic system is presented that combines motion estimation techniques with snakes to overcome these problems. An algorithm is presented that uses a block matching technique to guide the endpoints of the snake, optical flow to push the snake in the direction of the underlying motion, followed by the traditional snake edge-fitting minimization process. We use this technique for tracking facial features of an actor for driving computer animated characters.
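The block-matching component described above can be sketched as a plain SSD search over a small window (an illustrative stand-in, not the authors' system); the returned position could be used to reposition a snake endpoint before the usual edge-fitting minimization. The tracked point is assumed to lie well inside both frames.

    import numpy as np

    def block_match(prev, curr, point, patch=7, search=10):
        """Estimate where `point` (x, y) in frame `prev` moved to in frame `curr`
        by exhaustive sum-of-squared-differences matching in a search window.
        `prev` and `curr` are 2D grayscale arrays; `point` must be interior."""
        r = patch // 2
        x, y = int(point[0]), int(point[1])
        template = prev[y - r:y + r + 1, x - r:x + r + 1].astype(float)
        best, best_cost = (x, y), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cx, cy = x + dx, y + dy
                cand = curr[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
                if cand.shape != template.shape:
                    continue  # candidate window fell off the image
                cost = np.sum((cand - template) ** 2)
                if cost < best_cost:
                    best, best_cost = (cx, cy), cost
        return best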

 
1997

  152.   Zhou, P, and Pycock, D, "Robust statistical models for cell image interpretation," IMAGE AND VISION COMPUTING, vol. 15, pp. 307-316, 1997.

Abstract:   A robust and adaptable model-based scheme for cell image interpretation is presented that can accommodate the wide natural variation in the appearance of cells. This is achieved using multiple models and an interpretation process that permits a smooth transition between models. Boundaries are represented using trainable statistical models that are invariant to transformations of scaling, shift, rotation and contrast; a Gaussian and a circular autoregressive (CAR) model are investigated. The interpretation process optimises the match between models and data using a Bayesian distance measure. We demonstrate how objects that vary in both shape and grey-level pattern can reliably be segmented. The results presented show that overall performance is comparable with that for manual segmentation; the area within the automatically and manually selected cell boundaries that is not common to both is less than 5% in 96% of the cases tested. The results also show that the computationally simpler Gaussian boundary model is at least as effective as the CAR model.

  153.   Androutsos, D, Trahanias, PE, and Venetsanopoulos, AN, "Application of active contours for photochromic tracer flow extraction," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 16, pp. 284-293, 1997.

Abstract:   This paper addresses the implementation of image processing and computer vision techniques to automate tracer flow extraction in images obtained by the photochromic dye technique. This task is important in modeled arterial blood flow studies. Currently, it is performed via manual application of B-spline curve fitting. However, this is a tedious and error-prone procedure and its results are nonreproducible. In the proposed approach, active contours, snakes, are employed in a new curve-fitting method for tracer flow extraction in photochromic images. An algorithm implementing snakes is introduced to automate extraction. Utilizing correlation matching, the algorithm quickly locates and localizes all flow traces in the images. The feasibility of the method for tracer flow extraction is demonstrated. Moreover, results regarding the automation algorithm are presented showing its accuracy and effectiveness. The proposed approach for tracer flow extraction has potential for real-system application.

  154.   Boyer, E, and Berger, MO, "3D surface reconstruction using occluding contours," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 22, pp. 219-233, 1997.

Abstract:   This paper addresses the problem of 3D surface reconstruction using image sequences. It has been shown that shape recovery from three or more occluding contours of the surface is possible given a known camera motion. Several algorithms, which have been recently proposed, allow such a reconstruction under the assumption of a linear camera motion. A new approach is presented which deals with the reconstruction problem directly from a discrete point of view. First, a theoretical study of the epipolar correspondence between occluding contours is achieved. A correct depth formulation is then derived from a local approximation of the surface up to order two. This allows the local shape to be estimated, given three consecutive contours, without any constraints on the camera motion. Experimental results are presented for both synthetic and real data.

  155.   Szeliski, R, and Coughlan, J, "Spline-based image registration," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 22, pp. 199-218, 1997.

Abstract:   The problem of image registration subsumes a number of problems and techniques in multiframe image analysis, including the computation of optic flow (general pixel-based motion), stereo correspondence, structure from motion, and feature tracking. We present a new registration algorithm based on spline representations of the displacement field which can be specialized to solve all of the above mentioned problems. In particular, we show how to compute local flow, global (parametric) flow, rigid flow resulting from camera egomotion, and multiframe versions of the above problems. Using a spline-based description of the flow removes the need for overlapping correlation windows, and produces an explicit measure of the correlation between adjacent flow estimates. We demonstrate our algorithm on multiframe image registration and the recovery of 3D projective scene geometry. We also provide results on a number of standard motion sequences.
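The core representational idea, a dense displacement field controlled by a coarse grid of parameters, can be sketched as below; bilinear interpolation of the control displacements is used here as a simple stand-in for the spline basis of the paper.

    import numpy as np

    def dense_flow_from_controls(ctrl_u, ctrl_v, shape):
        """Expand displacements given on a coarse control grid to a dense field.

        ctrl_u, ctrl_v : (m, n) arrays of x- and y-displacements at control points
        shape          : (H, W) of the target dense field
        Bilinear interpolation is used here in place of a true spline basis.
        """
        m, n = ctrl_u.shape
        H, W = shape
        # Position of every pixel expressed in control-grid coordinates.
        gy = np.linspace(0, m - 1, H)[:, None] * np.ones((1, W))
        gx = np.ones((H, 1)) * np.linspace(0, n - 1, W)[None, :]
        y0 = np.clip(np.floor(gy).astype(int), 0, m - 2)
        x0 = np.clip(np.floor(gx).astype(int), 0, n - 2)
        dy, dx = gy - y0, gx - x0

        def interp(c):
            return ((1 - dx) * (1 - dy) * c[y0, x0] + dx * (1 - dy) * c[y0, x0 + 1]
                    + (1 - dx) * dy * c[y0 + 1, x0] + dx * dy * c[y0 + 1, x0 + 1])

        return interp(ctrl_u), interp(ctrl_v)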

  156.   Lai, SH, and Vemuri, BC, "Physically based adaptive preconditioning for early vision," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 19, pp. 594-607, 1997.

Abstract:   Several problems in early vision have been formulated in the past in a regularization framework. These problems, when discretized, lead to large sparse linear systems. In this paper, we present a novel physically based adaptive preconditioning technique which can be used in conjunction with a conjugate gradient algorithm to dramatically improve the speed of convergence for solving the aforementioned linear systems. A preconditioner, based on the membrane spline, or the thin plate spline, or a convex combination of the two, is termed a physically based preconditioner for obvious reasons. The adaptation of the preconditioner to an early vision problem is achieved via the explicit use of the spectral characteristics of the regularization filter in conjunction with the data. This spectral function is used to modulate the frequency characteristics of a chosen wavelet basis, and these modulated values are then used in the construction of our preconditioner. We present the preconditioner construction for three different early vision problems, namely, the surface reconstruction, the shape from shading, and the optical flow computation problems. Performance of the preconditioning scheme is demonstrated via experiments on synthetic and real data sets. We note that our preconditioner outperforms other methods of preconditioning for these early vision problems described in the computer vision literature.
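The preconditioned conjugate gradient machinery itself is standard and is sketched below with a plain Jacobi (diagonal) preconditioner on a small synthetic system; the paper's contribution is the construction of a physically based, wavelet-domain preconditioner, which would take the place of the M_inv callable here.

    import numpy as np

    def preconditioned_cg(A, b, M_inv, x0=None, tol=1e-8, max_iter=1000):
        """Preconditioned conjugate gradient for a symmetric positive definite A.

        M_inv : callable applying the inverse preconditioner to a vector.
        """
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        Q = rng.standard_normal((50, 50))
        A = Q @ Q.T + 50 * np.eye(50)        # SPD test matrix
        b = rng.standard_normal(50)
        jacobi = lambda v: v / np.diag(A)     # simple diagonal preconditioner
        x = preconditioned_cg(A, b, jacobi)
        print(np.linalg.norm(A @ x - b))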

  157.   Richards, CA, and Papanikolopoulos, NP, "Detection and tracking for robotic visual servoing systems," ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, vol. 13, pp. 101-120, 1997.

Abstract:   Robot manipulators require knowledge about their environment in order to perform their desired actions. In several robotic tasks, vision sensors play a critical role by providing the necessary quantity and quality of information regarding the robot's environment. For example, ''visual servoing'' algorithms may control a robot manipulator in order to track moving objects that are being imaged by a camera. Current visual servoing systems often lack the ability to detect automatically objects that appear within the camera's field of view. In this research, we present a robust ''figure/ground'' framework for visually detecting objects of interest. An important contribution of this research is a collection of optimization schemes that allow the detection framework to operate within the real-time limits of visual servoing systems. The most significant of these schemes involves the use of ''spontaneous'' and ''continuous'' domains. The number and location of continuous domains are allowed to change over time, adjusting to the dynamic conditions of the detection process. We have developed actual servoing systems in order to test the framework's feasibility and to demonstrate its usefulness for visually controlling a robot manipulator. (C) 1997 Elsevier Science Ltd.

  158.   Hamza, R, Zhang, XDD, Macosko, CW, Steve, R, and Listemann, M, "Imaging open-cell polyurethane foam via confocal microscopy," POLYMERIC FOAMS, ACS SYMPOSIUM SERIES, vol. 669, pp. 165-177, 1997.

Abstract:   Flexible polyurethane foam is based on a 3-dimensional cellular network. The mechanical properties of foam material depend upon cell structure and cell size distribution. In this work, we use laser confocal microscopy to image the foam cells and recover its 3-dimensional cellular network. Based on this technique we provide a statistical analysis and compare several foam samples. Confocal microscopic images are also used to visualize foam compression. Images for foam network structure under different mechanical compressions are also obtained. Limitations of the confocal microscope are discussed, and a new method, nuclear magnetic resonance imaging, is proposed.

  159.   Smyth, PP, Taylor, CJ, and Adams, JE, "Automatic measurement of vertebral shape using active shape models," IMAGE AND VISION COMPUTING, vol. 15, pp. 575-581, 1997.

Abstract:   In this paper, we describe how Active Shape Models (ASMs) have been used to accurately and robustly locate vertebrae in lateral Dual Energy X-ray Absorptiometry (DXA) images of the spine. DXA images are of low spatial resolution, and contain significant random and structural noise, providing a difficult challenge for object location methods. All vertebrae in the image were searched for simultaneously, improving robustness in location of individual vertebrae by making use of constraints on shape provided by the position of other vertebrae. We show that the use of ASMs with minimal user interaction allows accuracy to be obtained which is as good as that achievable by human operators, as well as high precision. Having located each vertebra, it is desirable to evaluate whether it has been located sufficiently accurately for shape measurements to be useful. We determined this on the basis of grey-level model fit, which was shown to usefully detect poorly located vertebrae, which should enable accuracy to be improved by rejecting proposed search solutions whose grey-level fit was poorer than a threshold. (C) 1997 Elsevier Science B.V.

  160.   Pathak, SD, Chalana, V, and Kim, YM, "Interactive automatic fetal head measurements from ultrasound images using multimedia computer technology," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 23, pp. 665-673, 1997.

Abstract:   We have developed a tool to automatically detect inner and outer skull boundaries of a fetal head in ultrasound images. These boundaries are used to measure biparietal diameter (BPD) and head circumference (HC). The algorithm is based on active contour models and takes 32 s on a Sun SparcStation 20/71. A high-performance desktop multimedia system called MediaStation 5000 (MS5000) is used as a model for our future ultrasound subsystem. On the MS5000, the optimized implementation of this algorithm takes 248 ms. The difference (between the computer-measured values on MS5000 and the gold standard) for BPD and HC was 1.43% (sigma = 1.00%) and 1.96% (sigma = 1.96%), respectively. According to our data analysis, no significant differences exist in the BPD and HC measurements made on the MS5000 and those measurements made on the Sun SparcStation 20/71. Reduction in the overall execution time from 32 s to 248 ms will help make this algorithm a practical ultrasound tool for sonographers. (C) 1997 World Federation for Ultrasound in Medicine and Biology.

  161.   Chiou, GI, and Hwang, JN, "Lipreading from color video," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 6, pp. 1192-1195, 1997.

Abstract:   We have designed and implemented a lipreading system that recognizes isolated words using only color video of human lips (without acoustic data). The system performs video recognition using ''snakes'' to extract visual features of geometric space, Karhunen-Loeve transform (KLT) to extract principal components in the color eigenspace, and hidden Markov models (HMM's) to recognize the combined visual features sequences. With the visual information alone, we were able to achieve 94% accuracy for ten isolated words.

  162.   Moghaddam, B, and Pentland, A, "Probabilistic visual learning for object representation," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 19, pp. 696-710, 1997.

Abstract:   We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a Mixture-of-Gaussians model (for multimodal distributions). These probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands.
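The unimodal case can be sketched as a Gaussian density estimated in a PCA subspace with a residual term for the discarded directions (normalization constants omitted); the class and method names below are illustrative only and do not reproduce the authors' code.

    import numpy as np

    class EigenspaceGaussian:
        """Unimodal Gaussian density estimated in a PCA subspace.

        A simplified version of the 'distance in / from feature space' idea:
        the score combines a Mahalanobis term in the leading subspace with a
        residual term weighted by the average discarded eigenvalue.
        Assumes k is smaller than the number of available components.
        """
        def fit(self, X, k):
            self.mean = X.mean(axis=0)
            Xc = X - self.mean
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            eigvals = s ** 2 / (len(X) - 1)
            self.V = Vt[:k].T                      # principal directions (d x k)
            self.lam = eigvals[:k]                 # retained eigenvalues
            self.rho = max(eigvals[k:].mean(), 1e-12) if len(eigvals) > k else 1e-12
            return self

        def neg_log_likelihood(self, x):
            d = x - self.mean
            y = self.V.T @ d                       # coordinates in the subspace
            resid = d - self.V @ y                 # distance from feature space
            difs = np.sum(y ** 2 / self.lam)
            dffs = np.sum(resid ** 2) / self.rho
            return 0.5 * (difs + dffs)             # up to an additive constant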

  163.   Gruen, A, and Li, HH, "Semi-automatic linear feature extraction by dynamic programming and LSB-Snakes," PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, vol. 63, pp. 985-995, 1997.

Abstract:   This paper deals with semi-automatic linear feature extraction from digital images for GIS data capture, where the identification task is performed manually on a single image, while a special automatic digital module performs the high precision feature tracking in two-dimensional (2-D) image space or even three-dimensional (3-D) object space. A human operator identifies the object from an on-screen display of a digital image, selects the particular class this object belongs to, and provides a few coarsely distributed seed points. Subsequently, with these seed points as an approximation of the position and shape, the linear feature will be extracted automatically by either a dynamic programming approach or by LSB-Snakes (Least-Squares B-spline Snakes). With dynamic programming, the optimization problem is set up as a discrete multistage decision process and is solved by a ''time-delayed'' algorithm. It ensures global optimality, is numerically stable, and allows for hard constraints to be enforced on the solution. In the least-squares approach, we combine three types of observation equations, one radiometric, formulating the matching of a generic object model with image data, and two that express the internal geometric constraints of a curve and the location of operator-given seed points. The solution is obtained by solving a pair of independent normal equations to estimate the parameters of the spline curve. Both techniques can be used in a monoplotting mode, which combines one image with its underlying DTM. The LSB-Snakes approach is also implemented in a multi-image mode, which uses multiple images simultaneously and provides for a robust and mathematically sound full 3D approach. These techniques are not restricted to aerial images. They can be applied to satellite and close-range images as well. The issues related to the mathematical modeling of the proposed methods are discussed, and experimental results are presented.
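The dynamic-programming side of the approach can be illustrated with a stripped-down example (a sketch, not LSB-Snakes or the paper's exact cost model): each image column is one decision stage, and the tracked row may move by at most one pixel between stages.

    import numpy as np

    def min_cost_path(cost):
        """Track a curve across the columns of a cost image by dynamic programming.

        Each column is one stage; the row may change by at most one pixel per
        column (a crude stand-in for the smoothness constraints of the method).
        Returns the row index of the optimal path in every column.
        """
        H, W = cost.shape
        acc = cost.astype(float).copy()
        back = np.zeros((H, W), dtype=int)
        for j in range(1, W):
            for i in range(H):
                lo, hi = max(i - 1, 0), min(i + 2, H)
                k = lo + int(np.argmin(acc[lo:hi, j - 1]))
                back[i, j] = k
                acc[i, j] += acc[k, j - 1]
        path = np.empty(W, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for j in range(W - 1, 0, -1):
            path[j - 1] = back[path[j], j]
        return path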

  164.   Kollnig, H, and Nagel, HM, "3D pose estimation by directly matching polyhedral models to gray value gradients," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 23, pp. 283-302, 1997.

Abstract:   This contribution addresses the problem of pose estimation and tracking of vehicles in image sequences from traffic scenes recorded by a stationary camera. In a new algorithm, the vehicle pose is estimated by directly matching polyhedral vehicle models to image gradients without an edge segment extraction process. The new approach is significantly more robust than approaches that rely on feature extraction since the new approach exploits more information from the image data. We successfully tracked vehicles that were partially occluded by textured objects, e.g., foliage, where a previous approach based on edge segment extraction failed. Moreover, the new pose estimation approach is also used to determine the orientation and position of the road relative to the camera by matching an intersection model directly to image gradients. Results from various experiments with real world traffic scenes are presented.

  165.   Ashton, EA, Parker, KJ, Berg, MJ, and Chen, CW, "A novel volumetric feature extraction technique with applications to MR images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 16, pp. 365-371, 1997.

Abstract:   A semiautomated feature extraction algorithm is presented for the extraction and measurement of the hippocampus from volumetric magnetic resonance imaging (MRI) head scans. This algorithm makes use of elements of both deformable model and region growing techniques and allows incorporation of a priori operator knowledge of hippocampal location and shape. Experimental results indicate that the algorithm is able to estimate hippocampal volume and asymmetry with an accuracy which approaches that of laborious manual outlining techniques.

  166.   Wolberg, WH, Street, WN, and Mangasarian, OL, "Computer-derived nuclear features compared with axillary lymph node status for breast carcinoma prognosis," CANCER CYTOPATHOLOGY, vol. 81, pp. 172-179, 1997.

Abstract:   BACKGROUND. Both axillary lymph node involvement and tumor anaplasia, as expressed by visually assessed grade, have been shown to be prognostically important in breast carcinoma outcome. In this study, axillary lymph node involvement was used as the standard against which prognostic estimations based on computer-derived nuclear features were gauged. METHODS. The prognostic significance of nuclear morphometric features determined by computer-based image analysis was analyzed in 198 consecutive preoperative samples obtained by fine-needle aspiration (FNA) from patients with invasive breast carcinoma. A novel multivariate prediction method was used to model the time of distant recurrence as a function of the nuclear features. Prognostic predictions based on the nuclear feature data were cross-validated to avoid overly optimistic conclusions. The estimated accuracy of these prognostic determinations was compared with determinations based on the extent of axillary lymph node involvement. RESULTS. The predicted outcomes based on nuclear features were divided into three groups representing best, intermediate, and worst prognosis, and compared with the traditional TNM lymph node stratification. Nuclear feature stratification better separated the prognostically best from the intermediate group whereas lymph node stratification better separated the prognostically intermediate from the worst group. Prognostic accuracy was not increased by adding lymph node status or tumor size to the nuclear features. CONCLUSIONS. Computer analysis of a preoperative FNA more accurately identified prognostically favorable patients than did pathologic examination of axillary lymph nodes and may obviate the need for routine axillary lymph node dissection. (C) 1997 American Cancer Society.

  167.   March, R, and Dozio, M, "A variational method for the recovery of smooth boundaries," IMAGE AND VISION COMPUTING, vol. 15, pp. 705-712, 1997.

Abstract:   Variational methods for image segmentation try to recover a piecewise smooth function together with a discontinuity set which represents the boundaries of the segmentation. This paper deals with a variational method that constrains the formation of discontinuities along smooth contours. The functional to be minimized, which involves the computation of the geometrical properties of the boundaries, is approximated by a sequence of functionals which can be discretized in a straightforward way. Computer examples of real images are presented to illustrate the feasibility of the method. (C) 1997 Elsevier Science B.V.

  168.   Whaite, P, and Ferrie, FP, "On the sequential determination of model misfit," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 19, pp. 899-905, 1997.

Abstract:   Many strategies in computer vision assume the existence of general purpose models that can be used to characterize a scene or environment at various levels of abstraction. The usual assumptions are that a selected model is competent to describe a particular attribute and that the parameters of this model can be estimated by interpreting the input data in an appropriate manner (e.g., location of lines and edges, segmentation into parts or regions, etc.). This paper considers the problem of how to determine when those assumptions break down. The traditional approach is to use statistical misfit measures based on an assumed sensor noise model. The problem is that correct operation often depends critically on the correctness of the noise model. Instead, we show how this can be accomplished with a minimum of a priori knowledge and within the framework of an active approach which builds a description of environment structure and noise over several viewpoints.

  169.   Taylor, CJ, Cootes, TF, Lanitis, A, Edwards, G, Smyth, P, and Kotcheff, ACW, "Model-based interpretation of complex and variable images," PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY OF LONDON SERIES B-BIOLOGICAL SCIENCES, vol. 352, pp. 1267-1274, 1997.

Abstract:   The ultimate goal of machine vision is image understanding-the ability not only to recover image structure but also to know what it represents. By definition, this involves the use of models which describe and label the expected structure of the world. Over the past decade, model-based vision has been applied successfully to images of man-made objects. It has proved much more difficult to develop model-based approaches to the interpretation of images of complex and variable structures such as faces or the internal organs of the human body (as visualized in medical images). In such cases it has been problematic even to recover image structure reliably without a model to organize the often noisy and incomplete image evidence. The key problem is that of variability. To be useful, a model needs to be specific-that is, to be capable of representing only 'legal' examples of the modelled object(s). It has proved difficult to achieve this whilst allowing for natural variability. Recent developments have overcome this problem; it has been shown that specific patterns of variability in shape and grey-level appearance can be captured by statistical models that can be used directly in image interpretation. The details of the approach are outlined and practical examples from medical image interpretation and face recognition are used to illustrate how previously intractable problems can now be tackled successfully. It is also interesting to ask whether these results provide any possible insights into natural vision; for example, we show that the apparent changes in shape which result from viewing three-dimensional objects from different viewpoints can be modelled quite well in two dimensions; this may lend some support to the 'characteristic views' model of natural vision.

  170.   Cohen, LD, and Kimmel, R, "Global minimum for active contour models: A minimal path approach," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 24, pp. 57-78, 1997.

Abstract:   A new boundary detection approach for shape modeling is presented. It detects the global minimum of an active contour model's energy between two end points. Initialization is made easier and the curve is not trapped at a local minimum by spurious edges. We modify the ''snake'' energy by including the internal regularization term in the external potential term. Our method is based on finding a path of minimal length in a Riemannian metric. We then make use of a new efficient numerical method to find this shortest path. It is shown that the proposed energy, though based only on a potential integrated along the curve, imposes a regularization effect like snakes. We explore the relation between the maximum curvature along the resulting contour and the potential generated from the image. The method is capable of closing contours, given only one point on the object's boundary, by using a topology-based saddle search routine. We show examples of our method applied to real aerial and medical images.
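A discrete stand-in for the minimal-path computation is sketched below: Dijkstra's algorithm on a 4-connected grid, with edge weights taken from the image-derived potential, approximates the continuous shortest path that the paper obtains with a more efficient Eikonal-type solver.

    import heapq
    import numpy as np

    def minimal_path(potential, start, end):
        """Shortest path between two pixels on a 4-connected grid graph.

        potential : 2D array, low along the boundary to be extracted
        start, end: (row, col) tuples; edge weight = mean potential of the
        two endpoints (a simple discretization chosen for this sketch).
        """
        H, W = potential.shape
        dist = np.full((H, W), np.inf)
        prev = {}
        dist[start] = 0.0
        heap = [(0.0, start)]
        while heap:
            d, (y, x) = heapq.heappop(heap)
            if (y, x) == end:
                break
            if d > dist[y, x]:
                continue
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < H and 0 <= nx < W:
                    nd = d + 0.5 * (potential[y, x] + potential[ny, nx])
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
        # Walk back from the end point to recover the path.
        path, node = [end], end
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]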

  171.   Poon, CS, and Braun, M, "Image segmentation by a deformable contour model incorporating region analysis," PHYSICS IN MEDICINE AND BIOLOGY, vol. 42, pp. 1833-1841, 1997.

Abstract:   Deformable contour models are useful tools for image segmentation. However, many models depend mainly on local edge-based image features to guide the convergence of the contour. This makes the models sensitive to noise and the initial estimate. Our model incorporates region-based image features to improve its convergence and to reduce its dependence on initial estimation. Computational efficiency is achieved by an optimization strategy, modified from the greedy algorithm of Williams and Shah. The model allows a simultaneous optimization of multiple contours, making it useful for a large variety of segmentation problems.
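A single pass of a greedy contour update in the spirit of Williams and Shah can be sketched as follows (simplified, without the region term that this paper adds): each vertex moves to the neighbouring position that minimizes a weighted sum of continuity, curvature, and external energy.

    import numpy as np

    def greedy_snake_step(points, ext_energy, alpha=1.0, beta=1.0, win=1):
        """One greedy pass over a closed snake.

        points     : (N, 2) array of (x, y) snake vertices
        ext_energy : callable (x, y) -> external energy (e.g. negative edge strength)
        Each vertex moves to the position in its (2*win+1)^2 neighbourhood that
        minimizes continuity + curvature + external energy.
        """
        N = len(points)
        new_pts = points.astype(float).copy()
        mean_spacing = np.mean(np.linalg.norm(np.roll(points, -1, 0) - points, axis=1))
        for i in range(N):
            prev_p, next_p = new_pts[i - 1], points[(i + 1) % N]
            best, best_e = points[i].astype(float), np.inf
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    cand = points[i] + np.array([dx, dy], float)
                    e_cont = (np.linalg.norm(cand - prev_p) - mean_spacing) ** 2
                    e_curv = np.sum((prev_p - 2 * cand + next_p) ** 2)
                    e = alpha * e_cont + beta * e_curv + ext_energy(*cand)
                    if e < best_e:
                        best, best_e = cand, e
            new_pts[i] = best
        return new_pts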

  172.   Shih, WSV, Lin, WC, and Chen, CT, "Morphologic field morphing: Contour model-guided image interpolation," INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, vol. 8, pp. 480-490, 1997.

Abstract:   An interpolation method using contours of organs as the control parameters is proposed to recover the intensity information in the physical gaps of serial cross-sectional images. In our method, contour models are used to generate the control lines required for the warping algorithm. Contour information derived from this contour model-based segmentation process is processed and used as the control parameters to warp the corresponding regions in both input images into compatible shapes. In this way, the reliability of establishing the correspondence among different segments of the same organs is improved and the intensity information for the interpolated intermediate slices can be derived more faithfully. To improve the efficiency for calculating the image warp in the field morphing process, a hierarchic decomposition process is proposed to localize the influence of each control line segment. In comparison with the existing intensity interpolation algorithms that only search for corresponding points in a small physical neighborhood, this method provides more meaningful correspondence relationships by warping regions in images into similar shapes before resampling to account for significant shape differences. Several sets of experimental results are presented to show that this method generates more realistic and less blurred interpolated images, especially when the shape difference of corresponding contours is significant. (C) 1997 John Wiley & Sons, Inc.

  173.   Dickinson, SJ, Christensen, HI, Tsotsos, JK, and Olofsson, G, "Active object recognition integrating attention and viewpoint control," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 67, pp. 239-260, 1997.

Abstract:   We present an active object recognition strategy which combines the use of an attention mechanism for focusing the search for a 3D object in a 2D image, with a viewpoint control strategy for disambiguating recovered object features. The attention mechanism consists of a probabilistic search through a hierarchy of predicted feature observations, taking objects into a set of regions classified according to the shapes of their bounding contours. We motivate the use of image regions as a focus-feature and compare their uncertainty in inferring objects with the uncertainty of more commonly used features such as lines or corners. If the features recovered during the attention phase do not provide a unique mapping to the 3D object being searched, the probabilistic feature hierarchy can be used to guide the camera to a new viewpoint from where the object can be disambiguated. The power of the underlying representation is its ability to unify these object recognition behaviors within a single framework. We present the approach in detail and evaluate its performance in the context of a project providing robotic aids for the disabled. (C) 1997 Academic Press.

  174.   Juhan, V, Nazarian, B, Malkani, K, Bulot, R, Bartoli, JM, and Sequeira, J, "Geometrical modelling of abdominal aortic aneurysms," CVRMED-MRCAS'97, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1205, pp. 243-252, 1997.

Abstract:   Stent graft combination devices have been developed as a new alternative for treating abdominal aortic aneurysms. The major risk using this new technique with standard devices is the perigraft leak. In order to choose a suitable graft for each patient, and thus avoid such a risk, we have developed a program which provides three-dimensional representations of such aneurysms. Images of abdominal regions are obtained by spiral CT. These images are then transferred to a graphics workstation and processed to provide sets of contours which represent the shape of the aorta and other vessels. Then, a surface joining all these contours is computed; we obtain a tree-like structure represented as a set of generalized cylinders which are joined by means of free-form surfaces. Such geometrical models provide an efficient mathematical support for further developments involving diagnosis, surgery and endoprostheses design.

  175.   Delingette, H, "Decimation of isosurfaces with deformable models," CVRMED-MRCAS'97, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1205, pp. 83-92, 1997.

Abstract:   For many medical applications including computer-assisted surgery, it is necessary to perform scientific computations, such as mechanical deformation, on anatomical structure models. Such patient-based anatomical models are often extracted from volumetric medical images as isosurfaces. In this paper, we introduce a new algorithm for the decimation of isosurfaces based on deformable models. The method emphasizes the creation of a mesh with geometric and topological properties well suited for performing scientific computation. It allows close control of the distance of the mesh to the isosurface as well as the overall smoothness of the mesh. The isosurface is stored in a data-structure that enables the fast computation of the distance to the isosurface. Finally, our method can handle very large datasets by merging pieces of isosurfaces.

  176.   Jones, TN, and Metaxas, DN, "Segmentation using deformable models with affinity-based localization," CVRMED-MRCAS'97, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1205, pp. 53-62, 1997.

Abstract:   We have developed an algorithm for segmenting objects with simple closed curves, such as the heart and the lungs, that is independent of the imaging modality used (e.g., MRI, CT, echocardiography). Our method is automatic and requires as initialization a single pixel within the boundaries of the object. Existing segmentation techniques either require much more information during initialization, such as an approximation to the object's boundary, or are not robust to the types of noisy data encountered in the medical domain. By integrating region-based and physics-based modeling techniques we have devised a hybrid design that overcomes these limitations. In our experiments we demonstrate that this integration automates and significantly improves the object boundary detection results, independent of the imaging modality used.

  177.   McInerney, T, and Terzopoulos, D, "Medical image segmentation using topologically adaptable surfaces," CVRMED-MRCAS'97, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1205, pp. 23-32, 1997.

Abstract:   Efficient and powerful topologically adaptable deformable surfaces can be created by embedding and defining discrete deformable surface models in terms of an Affine Cell Decomposition (ACD) framework. The ACD framework, combined with a novel and original reparameterization algorithm, creates a simple but elegant mechanism for multiresolution deformable curve, surface, and solid models to ''flow'' or ''grow'' into objects with complex geometries and topologies, and adapt their shape to recover the object boundaries. ACD-based models maintain the traditional parametric physics-based formulation of deformable models, allowing them to incorporate a priori knowledge in the form of energy and force-based constraints, and provide intuitive interactive capabilities. This paper describes ACD-based deformable surfaces and demonstrates their potential for extracting and reconstructing some of the most complex biological structures from medical image volumes.

  178.   Montagnat, J, and Delingette, H, "Volumetric medical images segmentation using shape constrained deformable models," CVRMED-MRCAS'97, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1205, pp. 13-22, 1997.

Abstract:   In this paper we address the problem of extracting geometric models from low contrast volumetric images, given a template or reference shape of that model. We proceed by deforming a reference model in a volumetric image. This reference deformable model is represented as a simplex mesh submitted to a regularizing shape constraint. Furthermore, we introduce an original approach that combines the deformable model framework with the elastic registration (based on the iterative closest point algorithm) method. This new method increases the robustness of segmentation while allowing very complex deformations of the original template. Examples of segmentation of the liver and brain ventricles are provided.

  179.   Carmona, RA, Hwang, WL, and Torresani, B, "Characterization of signals by the ridges of their wavelet transforms," IEEE TRANSACTIONS ON SIGNAL PROCESSING, vol. 45, pp. 2586-2590, 1997.

Abstract:   We present a couple of new algorithmic procedures for the detection of ridges in the modulus of the (continuous) wavelet transform of one-dimensional (1-D) signals. These detection procedures are shown to be robust to additive white noise. We also derive and test a new reconstruction procedure. The latter uses only information from the restriction of the wavelet transform to a sample of points from the ridge. This provides a very efficient way to code the information contained in the signal.

  180.   Caselles, V, Kimmel, R, Sapiro, G, and Sbert, C, "Minimal surfaces: a geometric three dimensional segmentation approach," NUMERISCHE MATHEMATIK, vol. 77, pp. 423-451, 1997.

Abstract:   A novel geometric approach for three dimensional object segmentation is presented. The scheme is based on geometric deformable surfaces moving towards the objects to be detected. We show that this model is related to the computation of surfaces of minimal area (local minimal surfaces). The space where these surfaces are computed is induced from the three dimensional image in which the objects are to be detected. The general approach also shows the relation between classical deformable surfaces obtained via energy minimization and geometric ones derived from curvature flows in the surface evolution framework. The scheme is stable, robust, and automatically handles changes in the surface topology during the deformation. Results related to existence, uniqueness, stability, and correctness of the solution to this geometric deformable model are presented as well. Based on an efficient numerical algorithm for surface evolution, we present a number of examples of object detection in real and synthetic images.

  181.   Axel, L, "Noninvasive measurement of cardiac strain with MRI," ANALYTICAL AND QUANTITATIVE CARDIOLOGY, ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY, vol. 430, pp. 249-256, 1997.

Abstract:   The motion sensitivity of cardiac magnetic resonance imaging (MRI) can be exploited to measure the motion patterns within the heart wall and thus to noninvasively calculate the intramyocardial strain. The resulting large data sets pose a challenge for visualization, but offer the potential of a greatly improved picture of cardiac dynamics. This may have both basic research and clinical applications.

  182.   Grzeszczuk, RP, and Levin, DN, "''Brownian strings'': Segmenting images with stochastically deformable contours," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 19, pp. 1100-1114, 1997.

Abstract:   This paper describes an image segmentation technique in which an arbitrarily shaped contour was deformed stochastically until it fitted around an object of interest. The evolution of the contour was controlled by a simulated annealing process which caused the contour to settle into the global minimum of an image-derived ''energy'' function. The nonparametric energy function was derived from the statistical properties of previously segmented images, thereby incorporating prior experience. Since the method was based on a state space search for the contour with the best global properties, it was stable in the presence of image errors which confound segmentation techniques based on local criteria, such as connectivity. Unlike ''snakes'' and other active contour approaches, the new method could handle arbitrarily irregular contours in which each interpixel crack represented an independent degree of freedom. Furthermore, since the contour evolved toward the global minimum of the energy, the method was more suitable for fully automatic applications than the snake algorithm, which frequently has to be reinitialized when the contour becomes trapped in local energy minima. High computational complexity was avoided by efficiently introducing a random local perturbation in a time independent of contour length, providing control over the size of the perturbation, and assuring that resulting shape changes were unbiased. The method was illustrated by using it to find the brain surface in magnetic resonance head images and to track blood vessels in angiograms.

  183.   Park, JS, and Han, JH, "Estimating optical flow by tracking contours," PATTERN RECOGNITION LETTERS, vol. 18, pp. 641-648, 1997.

Abstract:   We present a novel method of velocity field estimation for the points on moving contours in a 2-D image sequence. The method determines the corresponding point in a next image frame by minimizing the curvature change of a given contour point. As a first step, snakes are used to locate smooth curves in 2-D imagery. Thereafter, the extracted curves are tracked by continuously computing the corresponding point for each contour point. (C) 1997 Published by Elsevier Science B.V.

  184.   Hozumi, T, Yoshida, K, Yoshioka, H, Yagi, T, Akasaka, T, Takagi, T, Nishiura, M, Watanabe, M, and Yoshikawa, J, "Echocardiographic estimation of left ventricular cavity area with a newly developed automated contour tracking method," JOURNAL OF THE AMERICAN SOCIETY OF ECHOCARDIOGRAPHY, vol. 10, pp. 822-829, 1997.

Abstract:   Development of an automated contour tracking method provides detection and tracking of the endocardial boundary using the energy minimization method without tracing a region of interest. The purpose of this study was to compare the automated contour tracking method and manually drawn methods for the measurement of left ventricular cavity areas and fractional area change. The apical four-chamber view was visualized and recorded for off-line analysis in 11 patients by means of two-dimensional echocardiography. The automated contour tracking method automatically traces the endocardial border from the recorded images and calculates left ventricular cavity areas (end-diastole and end-systole) and fractional area change. In the same images selected as end-diastole and end-systole in the automated contour tracking method, the left ventricular endocardial border was manually traced to calculate left ventricular cavity areas and fractional area change. Both methods were compared by linear regression analysis for the measurement of cavity areas and fractional area change. Left ventricular areas measured by the automated contour tracking method showed an excellent correlation with those by the manual method (end-diastole: r = 0.99, y = 0.83x + 2.6, standard error of the estimate = 1.5 cm(2); end-systole: r = 0.99, y = 0.96x - 0.8, standard error of the estimate = 1.2 cm(2)). The mean differences between the automated contour tracking and manual methods were -3.1 +/- 5.1 cm(2) and -1.6 +/- 2.4 cm(2) at end-diastole and end-systole, respectively. Fractional area change determined by the automated contour tracking method correlated well with that by the manual method (r = 0.95, y = 1.17x - 6.5, standard error of the estimate = 3.4%). The mean difference between the automated contour tracking and manual methods was -0.8% +/- 7.1%. In conclusion, a newly developed automated contour tracking method correlates highly with the manual method for the estimation of left ventricular cavity areas and fractional area change in high-quality images. This suggests that this new technique may be useful in the automated quantitation of left ventricular function in patients with high-quality images with no dropout and no intercavity artifact or structure.

  185.   Fasel, JHD, Gingins, P, Kalra, P, MagnenatThalmann, N, Baur, C, Cuttat, JF, Muster, M, and Gailloud, P, "Liver of the ''visible man''," CLINICAL ANATOMY, vol. 10, pp. 389-393, 1997.

Abstract:   Endoscopic surgery, also called minimally invasive surgery, is presumed drastically to reduce postoperative morbidity and thus to offer both human and economic benefits. For the surgeon, however, this approach leads to a number of gestural challenges that require extensive training to be mastered. In order to replace experimentation on animals and patients, we developed a simulator for endoscopic surgery. To achieve this goal, a first step was to develop a working prototype, a ''standard patient,'' on which the informatic and microengineering tools could be validated. We used the Visible Man dataset for this purpose. The external shape of the Visible Man's liver, his biliary passages, and his extrahepatic portal system turned out to be fully within the standard pattern of normal anatomy. Anatomic variations were observed in the intrahepatic right portal vein, the hepatic veins, and the arterial blood supply to the liver. Thus, the Visible Man dataset reveals itself to be well suited for the simulation of minimally invasive surgical operations such as endoscopic cholecystectomy. (C) 1997 Wiley-Liss, Inc.

  186.   Breen, DE, "Cost minimization for animated geometric models in computer graphics," JOURNAL OF VISUALIZATION AND COMPUTER ANIMATION, vol. 8, pp. 201-220, 1997.

Abstract:   This paper describes how the concept of imposing geometric constraints by minimizing cost functions may be used and extended to accomplish a variety of animated modelling tasks for computer graphics. In this approach a complex 3-D geometric problem is mapped into a scalar minimization formulation. The mapping provides a straightforward method for converting abstract geometric concepts into a construct that is easily computed. The minimization approach is demonstrated in three application areas: computer animation, visualization, and physically-based modelling. In the computer animation application, cost minimization may be used to generate motion paths and joint parameters for animated actors. The approach may also be used to generate deformable models that extract closed 3-D geometric models from volume data for visualization. In the final application, the approach provides the fundamental structure to a physically-based model of woven cloth. (C) 1997 John Wiley & Sons, Ltd.

  187.   Hanson, KM, Cunningham, GS, and McKee, RJ, "Uncertainty assessment for reconstructions based on deformable geometry," INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, vol. 8, pp. 506-512, 1997.

Abstract:   Deformable geometric models can be used in the context of Bayesian analysis to solve ill-posed tomographic reconstruction problems. The uncertainties associated with a Bayesian analysis may be assessed by generating a set of random samples from the posterior, which may be accomplished using a Markov Chain Monte Carlo (MCMC) technique. We demonstrate the combination of these techniques for a reconstruction of a two-dimensional object from two orthogonal noisy projections. The reconstructed object is modeled in terms of a deformable geometrically defined boundary with a uniform interior density yielding a nonlinear reconstruction problem. We show how an MCMC sequence can be used to estimate uncertainties in the location of the edge of the reconstructed object. (C) 1997 John Wiley & Sons, Inc.

  188.   Guy, G, and Medioni, G, "Inference of surfaces, 3D curves, and junctions from sparse, noisy, 3D data," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 19, pp. 1265-1277, 1997.

Abstract:   We address the problem of obtaining dense surface information from a sparse set of 3D data in the presence of spurious noise samples. The input can be in the form of points, or points with an associated tangent or normal, allowing both position and direction to be corrupted by noise. Most approaches treat the problem as an interpolation problem, which is solved by fitting a surface such as a membrane or thin plate to minimize some function. We argue that these physical constraints are not sufficient, and propose to impose additional perceptual constraints such as good continuity and ''cosurfacity.'' These constraints allow us to not only infer surfaces, but also to detect surface orientation discontinuities, as well as junctions, all at the same time. The approach imposes no restriction on genus, number of discontinuities, number of objects, and is noniterative. The result is in the form of three dense saliency maps for surfaces, intersections between surfaces (i.e., 3D curves), and 3D junctions, respectively. These saliency maps are then used to guide a ''marching'' process to generate a description (e.g., a triangulated mesh) making information about surfaces, space curves, and 3D junctions explicit. The traditional marching process needs to be refined as the polarity of the surface orientation is not necessarily locally consistent. These three maps are currently not integrated, and this is the topic of our ongoing research. We present results on a variety of computer-generated and real data, having varying curvature, of different genus, and multiple objects.

  189.   Sapiro, G, "Color snakes," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 68, pp. 247-253, 1997.

Abstract:   A framework for object segmentation in vector-valued images is presented in this paper. The first scheme proposed is based on geometric active contours moving toward the objects to be detected in the vector-valued image. Object boundaries are obtained as geodesics or minimal weighted-distance curves, where the metric is given by a definition of edges in vector-valued data. The curve flow corresponding to the proposed active contours holds formal existence, uniqueness, stability, and correctness results. The scheme automatically handles changes in the deforming curve topology. The technique is applicable, for example, to color and texture images as well as multiscale representations. We then present an extension of these vector active contours, proposing a possible image flow for vector-valued image segmentation. The algorithm is based on moving each one of the image level sets according to the proposed vector active contours. This extension also shows the relation between active contours and a number of partial-differential-equations-based image processing algorithms as anisotropic diffusion and shock filters. (C) 1997 Academic Press.
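The vector-valued edge definition underlying such color snakes can be sketched via the first fundamental form of the image (a Di Zenzo-style structure tensor summed over channels); the square root of its largest eigenvalue serves as the per-pixel edge strength that would enter the metric. This is an illustrative computation, not the paper's code.

    import numpy as np

    def vector_edge_strength(img):
        """Edge strength for a vector-valued (e.g. RGB) image.

        img has shape (H, W, C), float valued. The 2x2 first fundamental form
        is accumulated over channels and its largest eigenvalue gives the
        squared rate of maximal change at each pixel.
        """
        gy, gx = np.gradient(img.astype(float), axis=(0, 1))
        gxx = np.sum(gx * gx, axis=2)
        gyy = np.sum(gy * gy, axis=2)
        gxy = np.sum(gx * gy, axis=2)
        # Largest eigenvalue of [[gxx, gxy], [gxy, gyy]].
        tmp = np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2)
        lam_plus = 0.5 * (gxx + gyy + tmp)
        return np.sqrt(lam_plus)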

  190.   Ip, HHS, and Wong, WH, "Detecting perceptually parallel curves: Criteria and force-driven optimization," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 68, pp. 190-208, 1997.

Abstract:   We have developed several theorems for the detection of parallel curves in the continuous space. In this paper, we studied issues in carrying the continuous algorithm to the discrete case and also the perceptual characteristics leading to human recognition of parallelism. By formulating these properties in terms of several distinctive forces, we developed a force-driven model as a new optimization strategy to perform correspondence establishment between points in the matching curves. This force-driven mechanism provides a good coupling (or correspondence matching) result, which is the prerequisite for the correct detection of parallelism between curves. Convergence of the algorithm and implementation efficiency are also investigated and discussed. Experimental results on the relative weightings of these forces also shed light on the perceptual priority imposed by the human vision system. (C) 1997 Academic Press.

  191.   Siddiqi, K, Kimia, BB, and Shu, CW, "Geometric shock-capturing ENO schemes for subpixel interpolation, computation and curve evolution," GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 59, pp. 278-301, 1997.

Abstract:   Subpixel methods that locate curves and their singularities, and that accurately measure geometric quantities, such as orientation and curvature, are of significant importance in computer vision and graphics. Such methods often use local surface fits or structural models for a local neighborhood of the curve to obtain the interpolated curve. Whereas their performance is good in smooth regions of the curve, it is typically poor in the vicinity of singularities. Similarly, the computation of geometric quantities is often regularized to deal with noise present in discrete data. However, in the process, discontinuities are blurred over, leading to poor estimates at them and in their vicinity. In this paper we propose a geometric interpolation technique to overcome these limitations by locating curves and obtaining geometric estimates while (1) not blurring across discontinuities and (2) explicitly and accurately placing them. The essential idea is to avoid the propagation of information across singularities. This is accomplished by a one-sided smoothing technique, where information is propagated from the direction of the side with the ''smoother'' neighborhood. When both sides are nonsmooth, the two existing discontinuities are relieved by placing a single discontinuity, or shock. The placement of shocks is guided by geometric continuity constraints, resulting in subpixel interpolation with accurate geometric estimates. Since the technique was originally motivated by curve evolution applications, we demonstrate its usefulness in capturing not only smooth evolving curves, but also ones with orientation discontinuities. In particular, the technique is shown to be far better than traditional methods when multiple or entire curves are present in a very small neighborhood. (C) 1997 Academic Press.

  192.   Goudail, F, and Refregier, P, "Optimal target tracking on image sequences with a deterministic background," JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, vol. 14, pp. 3197-3207, 1997.

Abstract:   Until now, most optical pattern recognition filters have been designed to process one image at a time. However, in image sequences, successive frames are highly correlated, so that it is useful to take this correlation into account while designing the filter. We develop a target tracking processor following this method. The images are assumed to consist of a moving object appearing against a moving background. A model that takes into account two successive frames is designed. From this model we determine the maximum-likelihood processor for tracking the object from one frame to the next. Since this processor is based on correlation operations, it could be implemented on a hybrid optoelectronic system that makes use of the rapidity of optical correlation. (C) 1997 Optical Society of America.

  193.   Noll, D, and vonSeelen, W, "Object recognition by deterministic annealing," IMAGE AND VISION COMPUTING, vol. 15, pp. 855-860, 1997.

Abstract:   In this paper we describe a feature-based approach to object recognition. The correspondence problem is solved by optimization of an energy function. While similar approaches suffer from local minima, we derive an energy function suitable for minimization by deterministic annealing, so that global optimization can be achieved. Algorithms matching model features to image features in a coarse-to-fine manner are described. (C) 1997 Elsevier Science B.V.

  194.   Seitz, SM, and Dyer, CR, "View-invariant analysis of cyclic motion," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 25, pp. 231-251, 1997.

Abstract:   This paper presents a general framework for image-based analysis of 3D repeating motions that addresses two limitations in the state of the art. First, the assumption that a motion be perfectly even from one cycle to the next is relaxed. Real repeating motions tend not to be perfectly even, i.e., the length of a cycle varies through time because of physically important changes in the scene. A generalization of period is defined for repeating motions that makes this temporal variation explicit. This representation, called the period trace, is compact and purely temporal, describing the evolution of an object or scene without reference to spatial quantities such as position or velocity. Second, the requirement that the observer be stationary is removed. Observer motion complicates image analysis because an object that undergoes a 3D repeating motion will generally not produce a repeating sequence of images. Using principles of affine invariance, we derive necessary and sufficient conditions for an image sequence to be the projection of a 3D repeating motion, accounting for changes in viewpoint and other camera parameters. Unlike previous work in visual invariance, however, our approach is applicable to objects and scenes whose motion is highly non-rigid. Experiments on real image sequences demonstrate how the approach may be used to detect several types of purely temporal motion features, relating to motion trends and irregularities. Applications to athletic and medical motion analysis are discussed.

  195.   Neuenschwander, WM, Fua, P, Iverson, L, Szekely, G, and Kubler, O, "Ziplock snakes," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 25, pp. 191-201, 1997.

Abstract:   We propose a snake-based approach that allows a user to specify only the distant end points of the curve he wishes to delineate without having to supply an almost complete polygonal approximation. This greatly simplifies the initialization process and yields excellent convergence properties. This is achieved by using the image information around the end points to provide boundary conditions and by introducing an optimization schedule that allows a snake to take image information into account first only near its extremities and then, progressively, toward its center. In effect, the snakes are clamped onto the image contour in a manner reminiscent of a ziplock being closed. These snakes can be used to alleviate the often repetitive task practitioners face when segmenting images by eliminating the need to sketch a feature of interest in its entirety, that is, to perform a painstaking, almost complete, manual segmentation.

  196.   Dryden, IL, Mardia, KV, and Walder, AN, "Review of the use of context in statistical image analysis," JOURNAL OF APPLIED STATISTICS, vol. 24, pp. 513-538, 1997.

Abstract:   This paper is a review of the use of contextual information in statistical image analysis. After defining what we mean by 'context', we describe the Bayesian approach to high-level image analysis using deformable templates. We describe important aspects of work on character recognition and syntactic pattern recognition; in particular, aspects of the work which are relevant to scene understanding. We conclude with a review of some work on knowledge-based systems which use context to aid object recognition.

  197.   Kervrann, C, Davoine, F, Perez, P, Forchheimer, R, and Labit, C, "Generalized likelihood ratio-based face detection and extraction of mouth features," PATTERN RECOGNITION LETTERS, vol. 18, pp. 899-912, 1997.

Abstract:   We describe a system to detect the speaker's face and mouth in videophone sequences. A statistical scheme based on a subspace method is described for detecting and tracking faces under varying poses. A matching criterion based on a Generalized Likelihood Ratio is optimized efficiently with respect to a perspective transformation using a coarse-to-fine search strategy combined with a simulated annealing algorithm. Moreover, we analyze the amplitude projections around the speaker's mouth to describe the shape of the lips. All computations are performed on lossy H263-coded images. The proposed algorithms are well suited to a further real-time implementation. (C) 1997 Elsevier Science B.V.

  198.   Proesmans, M, and Van Gool, L, "One-shot 3D-shape and texture acquisition of facial data," AUDIO- AND VIDEO-BASED BIOMETRIC PERSON AUTHENTICATION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1206, pp. 411-418, 1997.

Abstract:   In this paper we present new methods to simultaneously extract and exploit the three-dimensional shape of a face and its surface texture. The approach is based on an active technique, i.e., special illumination, but in contrast to traditional active sensing it does not require scanning or sequential projection of multiple patterns. The one-shot nature of the device makes it possible to capture moving objects, e.g., to make a 3D reconstruction of a face even while the person is talking. The use of the system is illustrated using simple methods to extract both textural and geometrical features from faces that can be used for authentication purposes. The advantage of using 3D data is that both types of features can be made more invariant under changes in head pose or illumination conditions.

  199.   Guttman, MA, Zerhouni, EA, and McVeigh, ER, "Analysis of cardiac function from MR images," IEEE COMPUTER GRAPHICS AND APPLICATIONS, vol. 17, pp. 30-38, 1997.

Abstract:   This paper describes an image metamorphosis technique to handle scattered feature constraints specified with points, polylines, and splines. Solutions to the following three problems are presented: feature specification, warp generation, and transition control. We demonstrate the use of snakes to reduce the burden of feature specification. Next, we propose the use of multilevel free-form deformations (MFFD) to compute C-2-continuous and one-to-one mapping functions among the specified features. The resulting technique, based on B-spline approximation, is simpler and faster than previous warp generation methods. Furthermore, it produces smooth image transformations without undesirable ripples and foldovers. Finally, we simplify the MFFD algorithm to derive transition functions to control geometry and color blending. Implementation details are furnished and comparisons among Various metamorphosis techniques are presented.

  200.   Zhao, WY, Nandhakumar, N, and Smith, PW, "Model-based interpretation of stereo imagery of textured surfaces," MACHINE VISION AND APPLICATIONS, vol. 10, pp. 201-213, 1997.

Abstract:   We present a scheme for reliable and accurate surface reconstruction from stereoscopic images containing only fine texture and no stable high-level features. Partial shape information is used to improve surface computation: first by fitting an approximate, global, parametric model, and then by refining this model via local correspondence processes. This scheme eliminates the window size selection problem in existing area-based stereo correspondence schemes. These ideas are integrated in a practical vision system that is being used by environmental scientists to study wind erosion of bulk material such as coal ore being transported in open rail cars.

  201.   Hinshaw, KP, and Brinkley, JF, "Using 3-D shape models to guide segmentation of MR brain images," JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, vol. 10, pp. 469-473, 1997.

Abstract:   Accurate segmentation of medical images poses one of the major challenges in computer vision. Approaches that rely solely on intensity information frequently fail because similar intensity values appear in multiple structures. This paper presents a method for using shape knowledge to guide the segmentation process, applying it to the task of finding the surface of the brain. A 3-D model that includes local shape constraints is fitted to an MR volume dataset. The resulting low-resolution surface is used to mask out regions far from the cortical surface, enabling an isosurface extraction algorithm to isolate a more detailed surface boundary. The surfaces generated by this technique are comparable to those achieved by other methods, without requiring user adjustment of a large number of ad hoc parameters.

  202.   Le Goualher, G, Barillot, C, and Bizais, Y, "Modeling cortical sulci with active ribbons," INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 11, pp. 1295-1315, 1997.

Abstract:   We propose a method for the 3D segmentation and representation of cortical folds with a special emphasis on the cortical sulci. These cortical structures are represented using "active ribbons". Active ribbons are built from active surfaces, which represent the median surface of a particular sulcus filled by CSF. Sulci modeling is obtained from MRI acquisitions (usually T1 images). The segmentation is performed using an automatic labeling procedure to separate gyri from sulci based on curvature analysis of the different iso-intensity surfaces of the original MRI volume. The outer parts of the sulci are used to initialize the convergence of the active ribbon from the outer parts of the brain to the interior. This procedure has two advantages: first, it permits the labeling of voxels belonging to sulci on the external part of the brain as well as on the inside (which is often the hardest point) and secondly, this segmentation allows 3D visualization of the sulci in the MRI volumetric environment as well as showing the sophisticated shapes of the cortical structures by means of isolated surfaces. Active ribbons can be used to study the complicated shape of the cortical anatomy, to model the variability of these structures in shape and position, to assist nonlinear registrations of human brains by locally controlling the warping procedure, to map brain neurophysiological functions into morphology or even to select the trajectory of an intra-sulci (virtual) endoscope.

  203.   Marescaux, J, Clement, JM, Nord, M, Russier, Y, Tassetti, V, Mutter, D, Cotin, S, and Ayache, N, "A new concept in digestive surgery: the computer assisted surgical procedure, from virtual reality to telemanipulation," BULLETIN DE L ACADEMIE NATIONALE DE MEDECINE, vol. 181, pp. 1609-1623, 1997.

Abstract:   Surgical simulation increasingly appears to be an essential aspect of tomorrow's surgery. The development of a hepatic surgery simulator is an advanced concept calling for a new writing system which will transform the medical world: virtual reality. Virtual reality extends the perception of our five senses by representing more than the real state of things by means of computer science and robotics. It consists of three concepts: immersion, navigation and interaction. Three reasons have led us to develop this simulator: the first is to provide the surgeon with a comprehensive visualisation of the organ. The second reason is to allow for planning and surgical simulation that could be compared with the detailed flight-plan for a commercial jet pilot. The third lies in the fact that virtual reality is an integral part of the concept of the computer assisted surgical procedure. The project consists of a sophisticated simulator which has to meet five requirements: visual fidelity, interactivity, physical properties, physiological properties, and sensory input and output. In this report we will describe how to obtain a realistic 3D model of the liver from bi-dimensional (2D) medical images for anatomical and surgical training. The introduction of a tumor and the consequent planning and virtual resection are also described, as are force feedback and real-time interaction.

  204.   Held, K, Kops, ER, Krause, BJ, Wells, WM, Kikinis, R, and Muller-Gartner, HW, "Markov random field segmentation of brain MR images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 16, pp. 878-886, 1997.

Abstract:   We describe a fully automatic three-dimensional (3-D) segmentation technique for brain magnetic resonance (MR) images. By means of Markov random fields (MRF's) the segmentation algorithm captures three features that are of special importance for MR images, i.e., nonparametric distributions of tissue intensities, neighborhood correlations, and signal inhomogeneities. Detailed simulations and real MR images demonstrate the performance of the segmentation algorithm. In particular, the impact of noise, inhomogeneity, smoothing, and structure thickness is analyzed quantitatively. Even single-echo MR images are well classified into gray matter, white matter, cerebrospinal fluid, scalp-bone, and background. A simulated annealing and an iterated conditional modes implementation are presented.
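
A toy illustration of the iterated-conditional-modes (ICM) step that such MRF segmentations commonly rely on is sketched below, assuming Gaussian class likelihoods and a Potts smoothness prior; the cited method's nonparametric intensity model and inhomogeneity handling are omitted, and all names and parameters here are illustrative assumptions.

    import numpy as np

    def icm_segment(image, means, sigmas, beta=1.0, n_iter=5):
        """Toy ICM labeling: Gaussian class likelihoods plus a Potts prior.

        image        : (H, W) float array
        means, sigmas: per-class Gaussian parameters, length K
        beta         : weight of the neighborhood smoothness term
        """
        K = len(means)
        # Unary term: negative log-likelihood of each class at each pixel.
        unary = np.stack([0.5 * ((image - m) / s) ** 2 + np.log(s)
                          for m, s in zip(means, sigmas)], axis=-1)   # (H, W, K)
        labels = unary.argmin(axis=-1)                                # ML initialization
        H, W = image.shape
        for _ in range(n_iter):
            for y in range(H):
                for x in range(W):
                    # Labels of the in-bounds 4-neighbours of pixel (y, x).
                    neigh = [labels[ny, nx]
                             for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                             if 0 <= ny < H and 0 <= nx < W]
                    # Potts penalty: count of disagreeing neighbours per candidate label.
                    cost = unary[y, x] + beta * np.array(
                        [sum(n != k for n in neigh) for k in range(K)])
                    labels[y, x] = cost.argmin()
        return labels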

  205.   Gilson, SJ, and Damper, RI, "An empirical comparison of neural techniques for edge linking of images," NEURAL COMPUTING & APPLICATIONS, vol. 6, pp. 64-78, 1997.

Abstract:   Edge linking is a fundamental computer vision task, yet presents difficulties arising from the lack of information in the image. Viewed as a constrained optimisation problem, it is NP hard, being isomorphic to the classical travelling salesman problem. Self-learning neural techniques boast the ability to solve hard, ill-defined problems, and hence offer promise for such an application. This paper examines the suitability of four well-known unsupervised techniques for the task of edge linking, by applying them to a test bed of edge point images and then evaluating their performance both quantitatively and qualitatively. Techniques studied are the elastic net, active contours, Kohonen map and Burr's modified elastic net. Of these, only the elastic net and the Kohonen map are realistic contenders for general edge-linking tasks. However, the other two exhibit behaviour which may make them particularly suited to some specific image-processing and computer vision applications.
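
For intuition about the self-organizing techniques compared here, the sketch below fits a closed one-dimensional Kohonen map (a ring of nodes) to a set of 2-D edge points; the learning-rate and neighbourhood schedules are arbitrary illustrative choices, not the settings evaluated in the paper.

    import numpy as np

    def kohonen_ring(edge_points, n_nodes=100, n_iter=5000,
                     lr0=0.5, sigma0=10.0, seed=0):
        """Fit a closed 1-D Kohonen map (ring of nodes) to 2-D edge points."""
        rng = np.random.default_rng(seed)
        pts = np.asarray(edge_points, dtype=float)
        # Initialise the nodes on a circle around the data centroid.
        t = np.linspace(0.0, 2.0 * np.pi, n_nodes, endpoint=False)
        radius = pts.std()
        nodes = pts.mean(axis=0) + radius * np.stack([np.cos(t), np.sin(t)], axis=1)
        for it in range(n_iter):
            frac = it / n_iter
            lr = lr0 * (1.0 - frac)                 # decaying learning rate
            sigma = max(sigma0 * (1.0 - frac), 1.0) # shrinking neighbourhood
            p = pts[rng.integers(len(pts))]         # randomly sampled edge point
            winner = np.argmin(((nodes - p) ** 2).sum(axis=1))
            # Circular distance of every node to the winner along the ring.
            d = np.abs(np.arange(n_nodes) - winner)
            d = np.minimum(d, n_nodes - d)
            h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
            nodes += lr * h[:, None] * (p - nodes)
        return nodes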

  206.   Delibasis, K, Undrill, PE, and Cameron, GG, "Designing texture filters with genetic algorithms: An application to medical images," SIGNAL PROCESSING, vol. 57, pp. 19-33, 1997.

Abstract:   The problem of texture recognition is addressed by studying appropriate descriptors in the spatial frequency domain. During a training phase a filter is configured to determine different classes of texture by the response of its correlation with the Fourier spectrum of training-image templates. This is achieved by genetic algorithm-based optimisation. The technique is tested on standard texture patterns and then applied to magnetic resonance images of the brain to segment the cerebellum from the surrounding white and grey matter. Comparisons with established texture recognition techniques are presented, which show that the proposed method performs as well as, or better than, traditional techniques for the chosen instances of standard and anatomical texture and has the advantage of not having to decide which texture measure to use for a specific image structure. (C) 1997 Elsevier Science B.V.

  207.   Matsuyama, T, and Wada, T, "Cooperative spatial reasoning for image understanding," INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 11, pp. 205-227, 1997.

Abstract:   Spatial Reasoning, reasoning about spatial information (i.e. shape and spatial relations), is a crucial function of image understanding and computer vision systems. This paper proposes a novel spatial reasoning scheme for image understanding and demonstrates its utility and effectiveness in two different systems: region segmentation and aerial image understanding systems. The scheme is designed based on a so-called Multi-Agent/Cooperative Distributed Problem Solving Paradigm, where a group of intelligent agents cooperate with each other to fulfill a complicated task. The first part of the paper describes a cooperative distributed region segmentation system, where each region in an image is regarded as an agent. Starting from seed regions given at the initial stage, region agents deform their shapes dynamically so that the image is partitioned into mutually disjoint regions. The deformation of each individual region agent is realized by the snake algorithm(14) and neighboring region agents cooperate with each other to find common region boundaries between them. In the latter part of the paper, we first give a brief description of the cooperative spatial reasoning method used in our aerial image understanding system SIGMA. In SIGMA, each recognized object such as a house and a road is regarded as an agent. Each agent generates hypotheses about its neighboring objects to establish spatial relations and to detect missing objects. Then, we compare its reasoning method with that used in the region segmentation system. We conclude the paper by showing further utilities of the Multi-Agent/Cooperative Distributed Problem Solving Paradigm for image understanding.

  208.   Dickinson, SJ, and Metaxas, D, "Using aspect graphs to control the recovery and tracking of deformable models," INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 11, pp. 115-141, 1997.

Abstract:   Active or deformable models have emerged as a popular modelling paradigm in computer vision. These models have the flexibility to adapt themselves to the image data, offering the potential for both generic object recognition and non-rigid object tracking. Because these active models are underconstrained, however, deformable shape recovery often requires manual segmentation or good model initialization, while active contour trackers have been able to track only an object's translation in the image. In this paper, we report our current progress in using a part-based aspect graph representation of an object(14) to provide the missing constraints on data-driven deformable model recovery and tracking processes.

  209.   Delibasis, K, Undrill, PE, and Cameron, GG, "Designing Fourier descriptor-based geometric models for object interpretation in medical images using genetic algorithms," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 66, pp. 286-300, 1997.

Abstract:   In previous work we have modeled simple 3D anatomical objects using deformed superquadrics and established their optimal position with the aid of genetic algorithms (GAs). Here we extend the complexity of the search object using 3D Fourier descriptor (FD) representations and allow GAs once again to optimize the object's shape and position. Using magnetic resonance image data, we perform an approximate segmentation on one lateral ventricle in the brain and use the FDs from this as seeding values for the GAs to search for the left and right lateral ventricles in seven 3D data sets. We show that the method is capable of coping with normal biological variation. Finally, we compare FD/GA-guided segmentation with a manually guided interactive region growing method and find an agreement of 78 +/- 10% in voxel classification with a corresponding average edge placement error of 2.2 +/- 0.4 mm. (C) 1997 Academic Press.

  210.   Zhong, Y, and Jain, AK, "Object localization using color, texture and shape," ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1223, pp. 279-294, 1997.

Abstract:   We address the problem of localizing objects using color, texture and shape. Given a hand-drawn sketch for querying an object shape, and its color and texture, the algorithm automatically searches the database images for objects which meet the query attributes. The database images do not need to be presegmented or annotated. The proposed algorithm operates in two stages. In the first stage, we use local texture and color features to find a small number of candidate images, and identify regions in the candidate images which share similar texture and color as the query example. To speed up the processing, the texture and color features are directly extracted from the Discrete Cosine Transform (DCT) compressed domain. In the second stage, we use a deformable template matching method to match the query shape to the image edges at the locations which possess the desired texture and color attributes. This algorithm is different from the other content-based image retrieval algorithms in that: (i) no presegmentation of the database images is needed, and (ii) the color and texture features are directly extracted from the compressed images. Experimental results show that substantial computational savings can be achieved utilizing multiple image cues.

  211.   Fua, P, "Consistent modeling of terrain and drainage using deformable models," ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1223, pp. 459-474, 1997.

Abstract:   We propose an automated approach to modeling drainage channels (and, more generally, linear features that lie on the terrain) from multiple images, which results not only in high-resolution, accurate and consistent models of the features, but also of the surrounding terrain. In our specific case, we have chosen to exploit the fact that rivers flow downhill and lie at the bottom of local depressions in the terrain, valley floors tend to be "U" shaped, and the drainage pattern appears as a network of linear features that can be visually detected in single gray-level images. Different approaches have explored individual facets of this problem. Ours unifies these elements in a common framework. We accurately model terrain and features as 3-dimensional objects from several information sources that may be in error and inconsistent with one another. This approach allows us to generate models that are faithful to sensor data, internally consistent and consistent with physical constraints. We have proposed generic models that have been applied to the specific task at hand, river delineation and digital elevation model (DEM) refinement, and show that the constraints can be expressed in a computationally effective way and, therefore, enforced while initializing the models and then fitting them to the data. We will also argue that the same techniques are robust enough to work on other features that are constrained by predictable forces.

  212.   Glasbey, CA, "SAR image registration and segmentation using an estimated DEM," ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1223, pp. 507-520, 1997.

Abstract:   Synthetic aperture radar (SAR) images are notoriously difficult to interpret. Segmentation is simplified if a digital map is available, to which the image can be registered. Also, registration is simplified if a digital elevation model (DEM) is available. In this paper it is shown that, if a DEM is unavailable, it can be estimated by minimising an energy functional consisting of a measure of agreement between the SAR image and a digital map together with a thin-plate bending-energy term. A computationally-efficient, finite-element algorithm is proposed to solve the optimisation problem. The method is applied to automatically align an airborne SAR image with a digital map of field boundaries, producing an image which is simultaneously registered and segmented.

  213.   Luettin, J, and Thacker, NA, "Speechreading using probabilistic models," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 65, pp. 163-178, 1997.

Abstract:   We describe a robust method for locating and tracking lips in gray-level image sequences. Our approach learns patterns of shape variability from a training set which constrains the model during image search to only deform in ways similar to the training examples. Image search is guided by a learned gray-level model which is used to describe the large appearance variability of lips. Such variability might be due to different individuals, illumination, mouth opening, specularity, or visibility of teeth and tongue. Visual speech features are recovered from the tracking results and represent both shape and intensity information. We describe a speechreading (lip-reading) system, where the extracted features are modeled by Gaussian distributions and their temporal dependencies by hidden Markov models. Experimental results are presented for locating lips, tracking lips, and speechreading. The database used consists of a broad variety of speakers and was recorded in a natural environment with no special lighting or lip markers used. For a speaker independent digit recognition task using visual information only, the system achieved an accuracy about equivalent to that of untrained humans. (C) 1997 Academic Press.

  214.   Fua, P, and Brechbuhler, C, "Imposing hard constraints on deformable models through optimization in orthogonal subspaces," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 65, pp. 148-162, 1997.

Abstract:   An approach is presented for imposing generic hard constraints on deformable models at a low computational cost, while preserving the good convergence properties of snake-like models. We believe this capability to be essential not only for the accurate modeling of individual objects that obey known geometric and semantic constraints but also for the consistent modeling of sets of objects. Many of the approaches to this problem that have appeared in the vision literature rely on adding penalty terms to the objective functions. They rapidly become intractable when the number of constraints increases. Applied mathematicians have developed powerful constrained optimization algorithms that, in theory, can address this problem. However, these algorithms typically do not take advantage of the specific properties of snakes. We have therefore designed a new algorithm that is closely related to Lagrangian methods but is tailored to accommodate the particular brand of deformable models used in the image understanding community. We demonstrate the validity of our approach first in two dimensions using synthetic images and then in three dimensions using real aerial images to simultaneously model terrain, roads, and ridgelines under consistency constraints. (C) 1997 Academic Press.
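
The general flavour of constrained optimization that this line of work builds on can be illustrated with a generic projected-gradient step: the objective gradient is projected onto the null space of the constraint Jacobian, and the iterate is then pulled back onto the constraint surface. This is a textbook-style sketch under those assumptions, not the authors' snake-specific algorithm; all names here are illustrative.

    import numpy as np

    def constrained_step(x, grad_f, constraint, jac_c, step=0.1, newton_iters=3):
        """One projected-gradient step for minimizing f(x) subject to c(x) = 0.

        grad_f(x)     : gradient of the objective, shape (n,)
        constraint(x) : constraint values c(x), shape (m,)
        jac_c(x)      : constraint Jacobian, shape (m, n); assumed full row rank
        """
        g = grad_f(x)
        J = jac_c(x)
        # Remove the gradient component that would violate the constraints:
        # project g onto the null space of J.
        g_tangent = g - J.T @ np.linalg.solve(J @ J.T, J @ g)
        x = x - step * g_tangent
        # Pull the iterate back onto the constraint surface c(x) = 0
        # with a few minimum-norm Gauss-Newton corrections.
        for _ in range(newton_iters):
            c = constraint(x)
            J = jac_c(x)
            x = x - J.T @ np.linalg.solve(J @ J.T, c)
        return x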

  215.   Undrill, PE, Delibasis, K, and Cameron, GG, "An application of genetic algorithms to geometric model-guided interpretation of brain anatomy," PATTERN RECOGNITION, vol. 30, pp. 217-227, 1997.

Abstract:   This work applies 3D Fourier Descriptors (FDs) and Genetic Algorithms (GAs) to the optimisation of the shape and position of models of anatomical objects within the human brain. Using magnetic resonance image data, we perform an approximate segmentation on one lateral ventricle and use the FDs from this as seeding values for the GAs to search for the left and right lateral ventricles in subsequent 3D image data sets, showing that the method is capable of coping with normal biological variation within and between individuals. Finally, we compare the GA-guided segmentation with a manual region growing method and find an agreement of 79.9+/-5.8% in voxel classification with a corresponding mean edge placement error of 2.1+/-0.4 mm. Copyright (C) 1997 Pattern Recognition Society.

  216.   Jain, AK, and Dorai, C, "Practicing vision: Integration, evaluation and applications," PATTERN RECOGNITION, vol. 30, pp. 183-196, 1997.

Abstract:   Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the ''Grand Challenges'' and also from its prominent role in the National Information Infrastructure. While the design of a general purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. We discuss and demonstrate advantages of (i) multi-sensor fusion, (ii) combination of features and classifiers, (iii) integration of visual modules, and (iv) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms. Copyright (C) 1997 Pattern Recognition Society.

  217.   Yezzi, A, Kichenassamy, S, Kumar, A, Olver, P, and Tannenbaum, A, "A geometric snake model for segmentation of medical imagery," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 16, pp. 199-209, 1997.

Abstract:   In this note, we employ the new geometric active contour models formulated in [25] and [26] for edge detection and segmentation of magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound medical imagery. Our method is based on defining feature-based metrics on a given image which in turn leads to a novel snake paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus, the snake is attracted very quickly and efficiently to the desired feature.

  218.   Caselles, V, Kimmel, R, and Sapiro, G, "Geodesic active contours," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 22, pp. 61-79, 1997.

Abstract:   A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach for object segmentation allows one to connect classical ''snakes'' based on energy minimization with geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, allowing stable boundary detection when their gradients suffer from large variations, including gaps. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. The scheme was implemented using an efficient algorithm for curve evolution. Experimental results of applying the scheme to real images including objects with holes and medical data imagery demonstrate its power. The results may be extended to 3D object segmentation as well.
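
In level-set form, the geodesic active contour evolution is usually written as phi_t = g(I) |grad phi| (kappa + c) + grad g . grad phi, where g is an edge-stopping function and kappa the curvature of the level sets. The sketch below implements one explicit finite-difference update of that equation; it is an illustration only and does not reproduce the numerical scheme used in the paper.

    import numpy as np

    def gac_step(phi, g, c=1.0, dt=0.1, eps=1e-8):
        """One explicit time step of the geodesic active contour evolution
        phi_t = g * |grad phi| * (curvature + c) + grad g . grad phi,
        using central finite differences on a level-set function phi."""
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx ** 2 + gy ** 2) + eps
        # Mean curvature of the level sets: div(grad phi / |grad phi|).
        nyy, _ = np.gradient(gy / norm)
        _, nxx = np.gradient(gx / norm)
        curvature = nxx + nyy
        # Advection term pulling the contour toward minima of g.
        ggy, ggx = np.gradient(g)
        advection = ggx * gx + ggy * gy
        return phi + dt * (g * norm * (curvature + c) + advection)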

  219.   Dachman, AH, Lieberman, J, Osnis, RB, Chen, SYJ, Hoffmann, KR, Chen, CT, Newmark, GM, and McGill, J, "Small simulated polyps in pig colon: Sensitivity of CT virtual colography," RADIOLOGY, vol. 203, pp. 427-430, 1997.

Abstract:   PURPOSE: The authors evaluated computed tomographic (CT) virtual colography for the detection of simulated polyps under ideal conditions, as well as the effects on lesion conspicuity of (a) collimation, (b) table pitch, and (c) orientation of the colon lumen with respect to the gantry. MATERIALS AND METHODS: Pig colon was resected and cleansed, and polyps with diameters of 3, 7, and 10 mm were created. Each specimen was scanned with collimation of 5 and 7 mm and table pitch of 1.0, 1.6, and 2.0 at angles of 0 degrees, 45 degrees, and 90 degrees to the gantry. The initial two-dimensional (2D) images were reconstructed at 1-mm intervals (2D reconstructions), from which three-dimensional (3D) virtual colography images were generated. Polyp conspicuity on the initial and reconstructed 2D images and the 3D reconstructions was evaluated on a three-point scale: 0 = polyp not depicted, 1 = polyp faintly depicted, and 2 = polyp clearly depicted. RESULTS: The 10-mm-diameter polyp was clearly depicted (grade 2 conspicuity) on every initial and reconstructed 2D image and 3D reconstruction without regard to collimation, table pitch, or angle to the gantry. The 7-mm-diameter polyp was clearly depicted (grade 2 conspicuity) on every initial and reconstructed 2D image, but conspicuity on 3D reconstructions varied as the imaging parameters varied. The 3-mm-diameter polyp was faintly depicted (grade 1 conspicuity) on the initial and reconstructed 2D images and 3D reconstructions, but conspicuity varied on the 3D reconstructions as the imaging parameters varied. CONCLUSION: CT virtual colography helped in the detection of small mucosal polyps regardless of the angle of the colon lumen to the gantry at which the images were obtained.

  220.   Friedland, NS, and Rosenfeld, A, "An integrated approach to 2D object recognition," PATTERN RECOGNITION, vol. 30, pp. 525-535, 1997.

Abstract:   A multilevel Markov Random Field (MRF) energy environment has been developed that simultaneously performs delineation, representation and classification of two-dimensional objects by using a global optimization technique. This environment supports a multipolar shape representation which establishes a dynamic MRF structure. This structure is initialized as a single-center polar representation, and uses minimum description length tests to determine whether to establish new polar centers. The polar representations at these centers are compared with a database of such representations in order to identify pieces of objects, and the results of these comparisons are used to compile evidence for global object identifications. This method is potentially more robust than conventional multistaged approaches to object recognition because it incorporates all the information about the objects into a single adaptive decision process, and its use of a multipolar representation allows it to handle partially occluded objects.

  221.   Deng, JY, and Lai, FP, "Region-based template deformation and masking for eye-feature extraction and description," PATTERN RECOGNITION, vol. 30, pp. 403-419, 1997.

Abstract:   We propose an improved method for eye-feature extraction, description, and tracking using deformable templates. Some existing algorithms are exploited to locate the initial position of eye features and then deformable templates are used for extracting and describing the eye features. Rather than using the original energy minimization for matching the templates, a region-based approach is proposed for template deformation. Based on the region properties, the new strategy avoids problems such as template shrinking, adjustment of the weights of energy terms, and failure of orientation adjustment in some exceptional cases. Our strategies are also coupled with the Canny edge operator to give a new back-end processing stage. By integrating the local edge information from the edge detection and the global collector from our region-based template deformation, this processing stage can generate accurate eye-feature descriptions. Finally, the template deformation process is applied to tracking eye features. (C) 1997 Pattern Recognition Society.

  222.   Katkere, A, Moezzi, S, Kuramura, DY, Kelly, P, and Jain, R, "Towards video-based immersive environments," MULTIMEDIA SYSTEMS, vol. 5, pp. 69-85, 1997.

Abstract:   Video provides a comprehensive visual record of environment activity over time. Thus, video data is an attractive source of information for the creation of virtual worlds which require some real-world fidelity. This paper describes the use of multiple streams of video data for the creation of immersive virtual environments. We outline our multiple perspective interactive video (MPI-Video) architecture which provides the infrastructure for the processing and analysis of multiple streams of video data. Our MPI-Video system performs automated analysis of the raw video and constructs a model of the environment and object activity within this environment. This model provides a comprehensive representation of the world monitored by the cameras which, in turn, can be used in the construction of a virtual world. In addition, using the information produced and maintained by the MPI-Video system, our immersive video system generates virtual video sequences. These are sequences of the dynamic environment from an arbitrary viewpoint generated using the real camera data. Such sequences allow a user to navigate through the environment and provide a sense of immersion in the scene. We discuss results from our MPI-Video prototype, outline algorithms for the construction of virtual views and provide examples of a variety of such immersive video sequences.

  223.   Bloomgarden, DC, Fayad, ZA, Ferrari, VA, Chin, B, Sutton, MGSJ, and Axel, L, "Global cardiac function using fast breath-hold MRI: Validation of new acquisition and analysis techniques," MAGNETIC RESONANCE IN MEDICINE, vol. 37, pp. 683-692, 1997.

Abstract:   Calculation of global cardiac function parameters has been validated using fast, segmented k-space, breath-hold, gradient-echo, magnetic resonance images. Images of phantoms, experimental animals, normal volunteers, and patients were acquired with a 1.5 T clinical scanner. Humans were imaged using two phased-array surface coils in multicoil mode. Myocardial contours were extracted using a new interactive, semi-automated method based on the active contour model method. Images were acquired in the short-axis orientation, and, using a new imaging and analysis strategy, in rotating plane long-axis orientations, to provide better definition of the valve planes and the apex, and also to reduce the number of slices (compared with the short-axis method) required to sample the whole heart. Validation was accomplished through calculation of the volumes of phantoms and left and right ventricular masses of animal hearts. Functional parameters from MRI were compared with those from echocardiograms and radionuclide angiograms in normal volunteers and patients, respectively.

  224.   Refregier, P, Germain, O, and Gaidon, T, "Optimal snake segmentation of target and background with independent Gamma density probabilities, application to speckled and preprocessed images," OPTICS COMMUNICATIONS, vol. 137, pp. 382-388, 1997.

Abstract:   We propose in this paper a snake-based segmentation processor to track the shape of a target with random white intensity appearing on a random white spatially disjoint background. We study the optimal solution for Gamma laws and we discuss the relevance of such statistics for realistic situations. This algorithm, based on an active contour model (snakes), consists of correlations of a binary reference with the scene image or with a pre-processed version of the scene image. This method is a generalization of correlation techniques and thus opens new applications for digital and optical correlators.

  225.   Caselles, V, Kimmel, R, and Sapiro, G, "Minimal surfaces based object segmentation," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 19, pp. 394-398, 1997.

Abstract:   A geometric approach for 3D object segmentation and representation is presented. The segmentation is obtained by deformable surfaces moving towards the objects to be detected in the 3D image. The model is based on curvature motion and the computation of surfaces with minimal areas, better known as minimal surfaces. The space where the surfaces are computed is induced from the 3D image (volumetric data) in which the objects are to be detected. The model links classical deformable surfaces obtained via energy minimization with intrinsic ones derived from curvature-based flows. The new approach is stable, robust, and automatically handles changes in the surface topology during the deformation.

  226.   Delp, SL, Loan, P, Basdogan, C, and Rosen, JM, "Surgical simulation: An emerging technology for training in emergency medicine," PRESENCE-TELEOPERATORS AND VIRTUAL ENVIRONMENTS, vol. 6, pp. 147-159, 1997.

Abstract:   The current methods of training medical personnel to provide emergency medical care have several important shortcomings. For example, in the training of wound debridement techniques, animal models are used to gain experience treating traumatic injuries. We propose an alternative approach by creating a three-dimensional, interactive computer model of the human body that can be used within a virtual environment to learn and practice wound debridement techniques and Advanced Trauma Life Support (ATLS) procedures. As a first step, we have developed a computer model that represents the anatomy and physiology of a normal and injured lower limb. When visualized and manipulated in a virtual environment, this computer model will reduce the need for animals in the training of trauma management and potentially provide a superior training experience. This article describes the development choices that were made in implementing the preliminary system and the challenges that must be met to create an effective medical training environment.

  227.   Sclaroff, S, "Deformable prototypes for encoding shape categories in image databases," PATTERN RECOGNITION, vol. 30, pp. 627-641, 1997.

Abstract:   An image database search method is described that uses strain energy from prototypes to represent shape categories. Rather than directly comparing a candidate shape with all entries in a database, shapes are ordered in terms of non-rigid deformations that relate them to a small subset of representative prototypes. Shape correspondences are obtained via modal matching, a decomposition for matching, describing, and comparing shapes despite sensor variations and non-rigid deformations. Deformation is decomposed into an ordered basis of orthogonal principal components. This allows selective invariance to in-plane rotation, translation, and scaling, and quasi-invariance to affine deformations. Retrieval accuracy and stability are evaluated in experiments with 2-D image databases. (C) 1997 Pattern Recognition Society.

  228.   Fejes, S, and Rosenfeld, A, "Discrete active models and applications," PATTERN RECOGNITION, vol. 30, pp. 817-835, 1997.

Abstract:   Optimization processes based on ''active models'' play central roles in many areas of computational vision as well as computational geometry. Unfortunately, current models usually require highly complex and sophisticated mathematical machinery and at the same time they suffer from a number of limitations which impose restrictions on their applicability. In this paper a simple class of discrete active models, called migration processes (MPs), is presented. The processes are based on iterated averaging over neighborhoods defined by constant geodesic distance. It is demonstrated that the MP model, a system of self-organizing active particles, has a number of advantages over previous models, both parametric active models (''snakes'') and implicit (contour evolution) models. Due to the generality of the MP model, the process can be applied to derive natural solutions to a variety of optimization problems, including defining (minimal) surface patches given their boundary curves; finding shortest paths joining sets of points; and decomposing objects into ''primitive'' parts. (C) 1997 Pattern Recognition Society.
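
A toy version of "iterated averaging over neighborhoods" for a closed contour is sketched below: each point is repeatedly replaced by the mean of its neighbours within a fixed index radius along the contour. The full migration-process model uses geodesic-distance neighbourhoods and additional terms, which are not reproduced here; parameter names are illustrative.

    import numpy as np

    def migrate(points, radius=2, n_iter=50):
        """Iteratively replace each point of a closed polyline by the mean of
        its neighbours within `radius` positions along the contour."""
        pts = np.asarray(points, dtype=float).copy()
        n = len(pts)
        offsets = [k for k in range(-radius, radius + 1) if k != 0]
        for _ in range(n_iter):
            new = pts.copy()
            for i in range(n):
                # Wrap around the closed contour with modular indexing.
                neigh = [pts[(i + k) % n] for k in offsets]
                new[i] = np.mean(neigh, axis=0)
            pts = new
        return pts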

  229.   Liang, KH, Tjahjadi, T, and Yang, YH, "Roof edge detection using regularized cubic B-spline fitting," PATTERN RECOGNITION, vol. 30, pp. 719-728, 1997.

Abstract:   A scheme employing one-dimensional (1-D) Regularized Cubic B-Spline (RCBS) fitting [G. Chen and Y. H. Yang, IEEE Trans. Systems Man Cybernet. 25, 636-693 (1995)] has been used successfully in the task of step edge detection. The regularized fitting is transformed into a quadratic energy equation to simplify the computation. This scheme, however, has three major limitations: it is non-linear, has a limited accuracy and is computationally expensive. This paper presents a modified scheme which overcomes these limitations. The modified scheme employs the 1-D RCBS fitting on the horizontal and vertical orientations of a window of an image to generate two 1-D signals, which provide sufficient information about the local property of the sub-image for roof edge detection. Experimental results show that the scheme of roof edge detection is very sensitive to small signals. (C) 1997 Pattern Recognition Society.

  230.   Chin, TM, and Mariano, AJ, "Space-time interpolation of oceanic fronts," IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 35, pp. 734-746, 1997.

Abstract:   Oceanic temperature fronts observed through composite infrared images from the AVHRR satellite data are fragmented due mostly to cloud occlusion. The sampling frequency of such frontal position observations tends to be insufficiently high to resolve dynamics of the meandering features associated with the frontal contour, so that contour reconstruction using a standard space-time smoothing often leads to introduction of spurious features. Augmenting space-time smoothing with a simple point-feature detection/matching scheme, however, can dramatically improve the reconstruction product. This paper presents such a motion-compensated interpolation algorithm for the reconstruction of open contours evolving in time, given fragmented position data. The reconstruction task is formulated as an optimization problem, and a time-sequential solution which adaptively estimates feature motion is provided. The resulting algorithm reliably interpolates position measurements of the surface temperature fronts associated with the highly convoluted portions of strong ocean currents such as the Gulf Stream and Kuroshio.

  231.   Sebbahi, A, Herment, A, deCesare, A, and Mousseaux, E, "Multimodality cardiovascular image segmentation using a deformable contour model," COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 21, pp. 79-89, 1997.

Abstract:   An automatic segmentation method has been developed for cardiovascular multimodality imaging. A ''snake'' model based on curve shaping and an energy-minimizing process is used to detect blood-wall interfaces on Cine-CT, MRI and ultrasound images. Deformation of a reduced set of contour points was performed according to a discretized global, regional and local minimum energy criterion. A continuous regional optimization process was also integrated into the deformation model; it takes into account cubic spline interpolation and adaptive regularity constraints. The constraints provided rapid convergence toward a final contour position by successively stopping spline segments. (C) 1997 Elsevier Science Ltd.

  232.   Olabarriaga, SD, and Smeulders, AWM, "Setting the mind for intelligent interactive segmentation: Overview, requirements, and framework," INFORMATION PROCESSING IN MEDICAL IMAGING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1230, pp. 417-422, 1997.

Abstract:   It is widely recognized that automatic segmentation is hard, leading to the state where user intervention cannot be avoided. In this paper we review existing literature and propose a systematic approach for the integration of automatic and interactive segmentation methods into one unified process. A framework and requirements for intelligent interactive segmentation are formulated, and an example is presented.

  233.   Fritsch, D, Pizer, S, Yu, LY, Johnson, V, and Chaney, E, "Segmentation of medical image objects using deformable shape loci," INFORMATION PROCESSING IN MEDICAL IMAGING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1230, pp. 127-140, 1997.

Abstract:   Robust segmentation of normal anatomical objects in medical images requires (1) methods for creating object models that adequately capture object shape and expected shape variation across a population, and (2) methods for combining such shape models with unclassified image data to extract modeled objects. Described in this paper is such an approach to model-based image segmentation, called deformable shape loci (DSL), that has been successfully applied to 2D MR slices of the brain ventricle and CT slices of abdominal organs. The method combines a model and image data by warping the model to optimize an objective function measuring both the conformation of the warped model to the image data and the preservation of local neighbor relationships in the model. Methods for forming the model and for optimizing the objective function are described.

  234.   Todd-Pokropek, A, "How to find the surface when you are drowning in data: boundary conditions and constraints in medical image processing," PHYSICA MEDICA, vol. 13, pp. 197-202, 1997.

Abstract:   A major problem with many current techniques in medical imaging is the sheer volume of data of the results; examples are in spiral CT, MRI (especially functional imaging), SPECT and PET. In general the data are n-D, often 3-D plus time. Such data are hard to visualise without compression, specifically some kind of multi-dimensional projection to reduce dimensionality, for example reducing the n-D data to a 2-D image. Both linear and non-linear operations can be considered, and two classes of method are important: data driven and hypothesis driven. Illustrative of data driven methods is principal component analysis (and factor analysis), where from the statistical aim of reducing correlation, axes in the multi-dimensional space can be defined for the projection operation. Unfortunately, in practice, a pure statistical method does not generally map well on to expected physiological functions (or models), and some kind of oblique rotation is required, based on the choice of appropriate constraints such as that of positivity. Hypothesis driven methods are all implicitly or explicitly based on models. Thus associating data driven and hypothesis driven approaches leads to constrained statistical data (image) processing. Examples are shown as used in nuclear medicine and MRI. Another important problem considered is that of multi-modality image registration and fusion. Although many methods exist, all based on the minimization of an appropriate distance function between two image data sets, additional constraints are required when the images are not too similar. This leads to the idea of using mutual information as a distance measure, and imposing constraints by means of cluster analysis of the n-dimensional feature space. Finally, in the analysis of such data, tests against reference data sets (atlases) are required, normally requiring warping the data sets in space, for example by the use of optic flow, or some kind of diffusion equation. Again, the boundary values for the method need to be defined with respect to medical knowledge, a further good example of data driven algorithms supervised using clinical constraints or models.

  235.   Gunn, SR, and Nixon, MS, "Robust snake implementation; A dual active contour," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 19, pp. 63-68, 1997.

Abstract:   A conventional active contour formulation suffers from difficulty in the appropriate choice of an initial contour and parameter values. Recent approaches have aimed to resolve these problems but can compromise other performance aspects. To relieve the initialization problem, we use a dual active contour, which is combined with a local shape model to improve the parameterization. One contour expands from inside the target feature, the other contracts from the outside. The two contours are interlinked to provide a balanced technique with an ability to reject ''weak'' local energy minima.
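
Both contours of such a dual snake ultimately minimize a conventional active contour energy; as background, the sketch below performs one greedy minimization pass for a single closed snake with continuity, curvature and edge terms. The weights and the 3x3 search window are illustrative assumptions, and the inter-contour coupling and local shape model of the cited method are not shown.

    import numpy as np

    def greedy_snake_step(contour, edge_map, alpha=0.2, beta=0.2):
        """One pass of greedy energy minimization for a closed snake.

        contour  : (N, 2) integer array of (row, col) control points
        edge_map : 2-D array, large where image edges are strong
        alpha    : weight of the continuity (spacing) term
        beta     : weight of the curvature term
        Each point moves to the position in its 3x3 neighbourhood that
        minimizes internal energy minus local edge strength.
        """
        snake = contour.copy()
        n = len(snake)
        mean_spacing = np.mean(np.linalg.norm(
            np.diff(snake, axis=0, append=snake[:1]), axis=1))
        H, W = edge_map.shape
        for i in range(n):
            prev_pt, next_pt = snake[i - 1], snake[(i + 1) % n]
            best, best_e = snake[i], np.inf
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    cand = snake[i] + np.array([dr, dc])
                    r, c = int(cand[0]), int(cand[1])
                    if not (0 <= r < H and 0 <= c < W):
                        continue
                    e_cont = (np.linalg.norm(cand - prev_pt) - mean_spacing) ** 2
                    e_curv = np.linalg.norm(prev_pt - 2 * cand + next_pt) ** 2
                    e = alpha * e_cont + beta * e_curv - edge_map[r, c]
                    if e < best_e:
                        best_e, best = e, cand
            snake[i] = best
        return snake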

  236.   Tagare, HD, "Deformable 2-D template matching using orthogonal curves," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 16, pp. 108-117, 1997.

Abstract:   In this paper a new formulation of the two-dimensional (2-D) deformable template matching problem is proposed. It uses a lower-dimensional search space than conventional methods by precomputing extensions of the deformable template along orthogonal curves. The reduction in search space allows the use of dynamic programming to obtain globally optimal solutions and reduces the sensitivity of the algorithm to initial placement of the template. Further, the technique guarantees that the result is a curve which does not collapse to a point in the absence of strong image gradients and is always non-self-intersecting. Examples of the use of the technique on real-world images and in simulations at low signal-to-noise ratios (SNRs) are also provided.
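
The dynamic programming idea can be illustrated with a generic Viterbi-style sweep: each template point chooses one candidate position along its precomputed orthogonal (normal) curve, paying a data cost plus a quadratic smoothness penalty between neighbouring points. The cost model below is a simplified assumption, not the paper's exact formulation.

    import numpy as np

    def dp_template_fit(data_cost, smooth_weight=1.0):
        """Choose one displacement per template point by dynamic programming.

        data_cost : (N, M) array; data_cost[i, j] is the cost of moving
                    template point i to its j-th candidate position along
                    the curve orthogonal to the template at that point.
        Returns the chosen candidate index for each of the N points.
        Neighbouring points pay smooth_weight * (j_i - j_{i+1})**2.
        """
        N, M = data_cost.shape
        idx = np.arange(M)
        pairwise = smooth_weight * (idx[:, None] - idx[None, :]) ** 2   # (M, M)
        cost = data_cost[0].copy()
        back = np.zeros((N, M), dtype=int)
        for i in range(1, N):
            total = cost[:, None] + pairwise          # rows: previous, cols: current
            back[i] = total.argmin(axis=0)
            cost = total.min(axis=0) + data_cost[i]
        # Backtrack the globally optimal sequence of candidate indices.
        best = np.empty(N, dtype=int)
        best[-1] = int(cost.argmin())
        for i in range(N - 1, 0, -1):
            best[i - 1] = back[i, best[i]]
        return best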

  237.   Zhu, Y, and Yan, H, "Computerized tumor boundary detection using a Hopfield neural network," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 16, pp. 55-67, 1997.

Abstract:   In this paper, we present a new approach for detection of brain tumor boundaries in medical images using a Hopfield neural network. The boundary detection problem is formulated as an optimization process that seeks the boundary points to minimize an energy functional based on an active contour model. A modified Hopfield network is constructed to solve the optimization problem. Taking advantage of the collective computational ability and energy convergence capability of the Hopfield network, our method produces results comparable to those of standard ''snakes''-based algorithms, but requires less computing time. With the parallel processing potential of the Hopfield network, the proposed boundary detection can be implemented for real-time processing. Experiments on different magnetic resonance imaging (MRI) data sets show the effectiveness of our approach.

  238.   Sandor, S, and Leahy, R, "Surface-based labeling of cortical anatomy using a deformable atlas," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 16, pp. 41-54, 1997.

Abstract:   We describe a computerized method to automatically find and label the cortical surface in three-dimensional (3-D) magnetic resonance (MR) brain images. The approach we take is to model a prelabeled brain atlas as a physical object and give it elastic properties, allowing it to warp itself onto regions in a preprocessed image. Preprocessing consists of boundary-finding and a morphological procedure which automatically extracts the brain and sulci from an MR image and provides a smoothed representation of the brain surface to which the deformable model can rapidly converge. Our deformable models are energy-minimizing elastic surfaces that can accurately locate image features. The models are parameterized with 3-D bicubic B-spline surfaces. We design the energy function such that cortical fissure (sulci) points on the model are attracted to fissure points on the image and the remaining model points are attracted to the brain surface. A conjugate gradient method minimizes the energy function, allowing the model to automatically converge to the smoothed brain surface. Finally, labels are propagated from the deformed atlas onto the high-resolution brain surface.

  239.   Sato, Y, Chen, J, Zoroofi, RA, Harada, N, Tamura, S, and Shiga, T, "Automatic extraction and measurement of leukocyte motion in microvessels using spatiotemporal image analysis," IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, vol. 44, pp. 225-236, 1997.

Abstract:   This paper describes a computer vision system for the automatic extraction and velocity measurement of moving leukocytes that adhere to microvessel walls from a sequence of images. The motion of these leukocytes can be visualized as motion along the wall contours. We use the constraint that the leukocytes move along the vessel wall contours to generate a spatiotemporal image, and the leukocyte motion is then extracted using the methods of spatiotemporal image analysis. The generated spatiotemporal image is processed by a special-purpose orientation-selective filter and a subsequent grouping process newly developed for this application. The orientation-selective filter is designed by considering the particular properties of the spatiotemporal image in this application in order to enhance only the traces of leukocytes. In the subsequent grouping process, leukocyte trace segments are selected and grouped among all the segments obtained by simple thresholding and skeletonizing operations. We show experimentally that the proposed method can stably extract leukocyte motion.

  240.   Matsumoto, S, Asato, R, Okada, T, and Konishi, J, "Intracranial contour extraction with active contour models," JMRI-JOURNAL OF MAGNETIC RESONANCE IMAGING, vol. 7, pp. 353-360, 1997.

Abstract:   A novel image processing scheme for extracting the intracranial contours in axial magnetic resonance data sets is proposed. The scheme incorporates the method of active contour models, a recently introduced paradigm for contour extraction. Its performance is nearly ideal for T2-weighted images. Although the performance for proton-density-weighted images and T1-weighted images drops slightly, qualitatively satisfactory extraction can still be obtained for T1-weighted images. Due to its high degree of automation, the scheme should help speed up some image processing applications that require presegmentation of the intracranial cavity.

  241.   Tek, H, and Kimia, BB, "Volumetric segmentation of medical images by three-dimensional bubbles," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 65, pp. 246-258, 1997.

Abstract:   The segmentation of structure from images is an inherently difficult problem in computer vision and a bottleneck to its widespread application, e.g., in medical imaging. This paper presents an approach for integrating local evidence such as regional homogeneity and edge response to form global structure for figure-ground segmentation. This approach is motivated by a shock-based morphogenetic language, where the growth of four types of shocks results in a complete description of shape. Specifically, objects are randomly hypothesized in the form of fourth-order shocks (seeds) which then grow, merge, split, shrink, and, in general, deform under physically motivated ''forces,'' but slow down and come to a halt near differential structures. Two major issues arise in the segmentation of 3D images using this approach. First, it is shown that the segmentation of 3D images by 3D bubbles is superior to a slice-by-slice segmentation by 2D bubbles or by ''2 1/2-D bubbles'' which are inherently 2D but use 3D information for their deformation. Specifically, the advantages lie in an intrinsic treatment of the underlying geometry and accuracy of reconstruction. Second, gaps and weak edges, which frequently present a significant problem for 2D and 3D segmentation, are regularized by curvature-dependent curve and surface deformations which constitute diffusion processes. The 3D bubbles evolving in the 3D reaction-diffusion space are a powerful tool in the segmentation of medical and other images, as illustrated for several realistic examples. (C) 1997 Academic Press.

  242.   Neuenschwander, W, Fua, P, Szekely, G, and Kubler, O, "Velcro surfaces: Fast initialization of deformable models," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 65, pp. 237-245, 1997.

Abstract:   Even though methods based on the use of deformable models have become prevalent, the quality of their output depends critically on the model's initial state. The issue of initializing such models, however, has not received much attention even though it is often key to the implementation of a truly useful system. We therefore present a new approach to segmentation of three-dimensional (3-D) shapes that initializes and then optimizes a 3-D surface model given only the data and a very small number of 3-D seed points and corresponding surface normals. This is a valuable capability for medical, robotic, and cartographic applications where such seed points can be naturally supplied. In effect, the surface model is clamped onto the object boundary in a manner reminiscent of Velcro being closed. Applications of the developed method to stereo imagery and to volumetric medical data are demonstrated. (C) 1997 Academic Press.

  243.   Brand, M, "Physics-based visual understanding," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 65, pp. 192-205, 1997.

Abstract:   An understanding of a scene's causal physics (how scene elements interact and respond to forces) is a precondition to reasoning about how the scene came to be, how it may evolve in time, and how it will respond to manipulation. We propose a computationally inexpensive method for recovering causal structure from images, in which a scene model is built incrementally through interleaved sensing and analysis. Reasoning uses generic qualitative knowledge about rigid-body interactions, reusable between domains and similar to concepts thought to be acquired or activated during child development. Causal constraint propagation reveals anomalous degrees of freedom in the scene model; prediction yields sensory plans to resolve them. Sensing operations are highly directed and local in scope, e.g., visual routines and proprioception. Inference depth and the number of pixels ''touched'' are bounded by the complexity of the scene. We present algorithms and semantics that have been successfully reused in several domains of highly structured scenes; in particular we detail a vision system that reverse-engineers machines. (C) 1997 Academic Press.

  244.   Nastar, C, Moghaddam, B, and Pentland, A, "Flexible images: Matching and recognition using learned deformations," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 65, pp. 179-191, 1997.

Abstract:   We describe a novel technique for matching and recognition based on deformable intensity surfaces which incorporates both the shape (x, y) and the texture (I(x, y)) components of a 2D image. Specifically, the intensity surface is modeled as a deformable 3D mesh in (x, y, I(x, y)) space which obeys Lagrangian dynamics. Using an efficient technique for matching two surfaces (in terms of the analytic modes of vibration), we can obtain a dense correspondence field (or 3D warp) between two images. Furthermore, we use explicit statistical learning of the class of valid deformations in order to provide a priori knowledge about object-specific deformations. The resulting formulation leads to a compact representation based on the physically-based modes of deformation as well as the statistical modes of variation observed in actual training data. We demonstrate the power of this approach with experiments utilizing image matching, interpolation of missing data, and image retrieval in a large face database. (C) 1997 Academic Press.

  245.   Faugeras, O, and Keriven, R, "Level set methods and the stereo problem," SCALE-SPACE THEORY IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1252, pp. 272-283, 1997.

Abstract:   We present a novel geometric approach for solving the stereo problem for an arbitrary number of images (greater than or equal to 2). It is based upon the definition of a variational principle that must be satisfied by the surfaces of the objects in the scene and their images. The Euler-Lagrange equations which are deduced from the variational principle provide a set of PDE's which are used to deform an initial set of surfaces which then move towards the objects to be detected. The level set implementation of these PDE's potentially provides an efficient and robust way of achieving the surface evolution and to deal automatically with changes in the surface topology during the deformation, i.e. to deal with multiple objects. Results of a two dimensional implementation of our theory are presented on synthetic and real images.

  246.   Pardo, JM, Cabello, D, and Heras, J, "A snake for model-based segmentation of biomedical images," PATTERN RECOGNITION LETTERS, vol. 18, pp. 1529-1538, 1997.

Abstract:   In this work we present a snake-based approach for the segmentation of images from computerized tomography (CT) scans. We introduce a new term for the internal energy and another for the external energy, which solve common problems associated with classical snakes in this type of image. A simplified minimizing method is also presented. (C) 1997 Elsevier Science B.V.

  247.   Jones, TN, and Metaxas, DN, "Automated 3D segmentation using deformable models and fuzzy affinity," INFORMATION PROCESSING IN MEDICAL IMAGING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1230, pp. 113-126, 1997.

Abstract:   We have developed an algorithm for segmenting objects with closed, non-intersecting boundaries, such as the heart and the lungs, that is independent of the imaging modality used (e.g., MRI, CT, echocardiography). Our method is automatic and requires as initialization a single pixel/voxel within the boundaries of the object. Existing segmentation techniques either require much more information during initialization, such as an approximation to the object's boundary, or are not robust to the types of noisy data encountered in the medical domain. By integrating region-based and physics-based modeling techniques we have devised a hybrid design that overcomes these limitations. In our experiments we demonstrate across imaging modalities, that this integration automates and significantly improves the object boundary detection results. This paper focuses on the application of our method to 3D datasets.

  248.   Vaillant, M, and Davatzikos, C, "Mapping the cerebral sulci: Application to morphological analysis of the cortex and to non-rigid registration," INFORMATION PROCESSING IN MEDICAL IMAGING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1230, pp. 141-154, 1997.

Abstract:   We propose a methodology for extracting parametric representations of the cerebral sulci from magnetic resonance images, and we consider its application to two medical imaging problems: quantitative morphological analysis and spatial normalization and registration of brain images. Our methodology is based on deformable models utilizing characteristics of the cortical shape. Specifically, a parametric representation of a sulcus is determined by the motion of an active contour along the medial surface of the corresponding cortical fold. The active contour is initialized along the outer boundary of the brain, and deforms toward the deep edge of a sulcus under the influence of an external force field restricting it to lie along the medial surface of the particular cortical fold. A parametric representation of the surface is obtained as the active contour traverses the sulcus. In this paper we present results of this methodology and its applications.

  249.   Sjogreen, K, Ljungberg, M, Erlandsson, K, Floreby, L, and Strand, SE, "Registration of abdominal CT and SPECT images using Compton scatter data," INFORMATION PROCESSING IN MEDICAL IMAGING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1230, pp. 232-244, 1997.

Abstract:   The present study investigates the possibility of utilizing Compton scatter data for the registration of abdominal SPECT images. A method for registration to CT is presented, based on principal component analysis and cross-correlation of binary images representing the interior of the patient. Segmentation of scatter images is performed with two methods, thresholding and a deformable contour method. To achieve similarity of organ positions between scans, a positioning device is applied to the patient. Evaluation of the registration accuracy is performed with a) an I-131 phantom study, b) a Monte Carlo simulation study of an anthropomorphic phantom, and c) an I-123 patient trial. For a), r.m.s. distances between positions that should be equal in CT and SPECT are found to be 1.0+/-0.7 mm, which for a rigid object is at sub-pixel level. For b), results show that r.m.s. distances depend on the slice activity distribution; with a symmetrical distribution, deviations are on the order of 5 mm. In c), distances between markers on the patient boundary are at most 16 mm and on average 10 mm. It is concluded that by utilizing the available Compton scatter data, valuable positioning information is obtained that can be used for registration of SPECT images.
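A bare-bones version of the principal-component step (orienting binary masks of the patient interior by centroid and principal axes before cross-correlation) might look like the sketch below. It is an illustration under simple assumptions, not the authors' registration pipeline:

import numpy as np

def principal_axes(mask):
    # Centroid and principal axes (columns, sorted by decreasing variance)
    # of the nonzero pixels of a 2-D binary mask.
    pts = np.argwhere(mask)
    centroid = pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov((pts - centroid).T))
    return centroid, evecs[:, np.argsort(evals)[::-1]]

def in_plane_rotation(mask_a, mask_b):
    # Angle (radians) rotating the major axis of mask_a onto that of mask_b;
    # the 180-degree sign ambiguity of eigenvectors is ignored in this sketch.
    (_, ax_a), (_, ax_b) = principal_axes(mask_a), principal_axes(mask_b)
    ang = lambda v: np.arctan2(v[0], v[1])
    return ang(ax_b[:, 0]) - ang(ax_a[:, 0])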

  250.   Vilarino, DL, Cabello, D, Mosquera, A, and Pardo, JM, "Application of a multilayer discrete-time CNN to deformable models," BIOLOGICAL AND ARTIFICIAL COMPUTATION: FROM NEUROSCIENCE TO TECHNOLOGY, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1240, pp. 1193-1202, 1997.

Abstract:   In this work Cellular Neural Networks (CNNs) are applied to image analysis as deformable models. To this end the problem is formulated on a discrete-time CNN with cyclic templates and time-variant external inputs. The suitability of CNNs for VLSI implementation and massively parallel computing will permit a considerable improvement in processing speed with respect to classical active contour approaches.

 
1998

  251.   Finet, G, Maurincomme, E, Reiber, JHC, Savalle, L, Magnin, I, and Beaune, J, "Evaluation of an automatic intraluminal edge detection technique for intravascular ultrasound images," JAPANESE CIRCULATION JOURNAL-ENGLISH EDITION, vol. 62, pp. 115-121, 1998.

Abstract:   Intravascular ultrasound (IVUS) imaging enables detailed analysis and precise measurements of vascular cross-sections. However, to achieve a reduction in the existing level of observer variability requires the development of quantitative IVUS. We have developed a fully automatic intraluminal edge detection technique, based on adaptive active contour models and called ADDER (adaptive damping dependent on echographic regions) that allows the quantitation of the intraluminal cross-sectional area (ICSA). Using a 30-MHz mechanically rotated transducer mounted at the tip of a 3.5-F catheter, 58 normal and pathologic arterial segments (from coronary, renal, splenic, iliac, and carotid arteries) were imaged in vitro. These images were analyzed by 2 experts, E1 and E2, who manually traced the intraluminal contour twice for each image, as well as with ADDER. Intra-observer variabilities for ICSAs were found to be excellent (-1.454+/-3.51% for E1, 0.96+/-5.4% for E2). The inter-observer variability was 2.1+/-4.3%. The success factor for ADDER was 89%. Its intra-observer variability was null, as the method always finds a unique contour. The correlation between the automatically detected ICSA and the manual ICSA was: r=0.99 (y=1.03x+0.89 mm(2)). Morphometric variations between manually and automatically traced contours, analyzed by the centerline method, were 100+/-140 mm on average. In conclusion, the ADDER automatic contour detection applied to IVUS images is robust and characterized by small systematic and random errors; therefore, quantitative IVUS is a useful tool in clinical research trials.

  252.   Faugeras, O, and Keriven, R, "Variational principles, surface evolution, PDE's, level set methods, and the stereo problem," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 7, pp. 336-344, 1998.

Abstract:   We present a novel geometric approach for solving the stereo problem for an arbitrary number of images (greater than or equal to 2). It is based upon the definition of a variational principle that must be satisfied by the surfaces of the objects in the scene and their images. The Euler-Lagrange equations that are deduced from the variational principle provide a set of partial differential equations (PDE's) that are used to deform an initial set of surfaces which then move toward the objects to be detected. The level set implementation of these PDE's potentially provides an efficient and robust way of achieving the surface evolution and of dealing automatically with changes in the surface topology during the deformations, i.e., of dealing with multiple objects. Results of an implementation of our theory, also dealing with occlusion and visibility, are presented on synthetic and real images.

  253.   Caselles, V, Morel, JM, Sapiro, G, and Tannenbaum, A, "Introduction to the special issue on partial differential equations and geometry-driven diffusion in image processing and analysis," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 7, pp. 269-273, 1998.

Abstract:   We propose a method for the 3D segmentation and representation of cortical folds with a special emphasis on the cortical sulci. These cortical structures are represented using "active ribbons". Active ribbons are built from active surfaces, which represent the median surface of a particular sulcus filled by CSF. Sulci modeling is obtained from MRI acquisitions (usually T1 images). The segmentation is performed using an automatic labeling procedure to separate gyri from sulci based on curvature analysis of the different iso-intensity surfaces of the original MRI volume. The outer parts of the sulci are used to initialize the convergence of the active ribbon from the outer parts of the brain to the interior. This procedure has two advantages: first, it permits the labeling of voxels belonging to sulci on the external part of the brain as well as on the inside (which is often the hardest point) and secondly, this segmentation allows 3D visualization of the sulci in the MRI volumetric environment as well as showing the sophisticated shapes of the cortical structures by means of isolated surfaces. Active ribbons can be used to study the complicated shape of the cortical anatomy, to model the variability of these structures in shape and position, to assist nonlinear registrations of human brains by locally controlling the warping procedure, to map brain neurophysiological functions into morphology or even to select the trajectory of an intra-sulci (virtual) endoscope.

  254.   Trinder, JC, and Wang, YD, "Automatic road extraction from aerial images," DIGITAL SIGNAL PROCESSING, vol. 8, pp. 215-224, 1998.

Abstract:   The paper presents a knowledge-based method for automatic road extraction from aerial photography and high-resolution remotely sensed images. The method is based on Marr's theory of vision, which consists of low-level image processing for edge detection and linking, mid-level processing for the formation of road structure, and high-level processing for the recognition of roads. It uses a combined control strategy in which hypotheses are generated in a bottom-up mode and a top-down process is applied to predict the missing road segments. To describe road structures a generalized antiparallel pair is proposed. The hypotheses of road segments are generated based on the knowledge of their geometric and radiometric properties, which are expressed as rules in Prolog. They are verified using part-whole relationships between roads in high-resolution images and roads in low-resolution images and spatial relationships between verified road segments. Some results are presented in this paper. (C) 1998 Academic Press.

  255.   Xu, CY, Pham, DL, Prince, JL, Etemad, ME, and Yu, DN, "Reconstruction of the central layer of the human cerebral cortex from MR images," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 481-488, 1998.

Abstract:   Reconstruction of the human cerebral cortex from MR images is a fundamental step in human brain mapping and in applications such as surgical path planning. In a previous paper, we described a method for obtaining a surface representation of the central layer of the human cerebral cortex using fuzzy segmentation and a deformable surface model. This method, however, suffers from several problems. In this paper, we significantly improve upon the previous method by using a fuzzy segmentation algorithm robust to intensity inhomogeneities, and using a deformable surface model specifically designed for capturing convoluted sulci or gyri. We demonstrate the improvement over the previous method both qualitatively and quantitatively, and show the result of its application to six subjects. We also experimentally validate the convergence of the deformable surface initialization algorithm.

  256.   Garrido, A, and De la Blanca, NP, "Physically-based active shape models: Initialization and optimization," PATTERN RECOGNITION, vol. 31, pp. 1003-1017, 1998.

Abstract:   In this paper we describe a new approach for 2-D object segmentation using an automatic method applied to images with problems such as partial information, overlapping objects, many objects in a single scene, severe noise conditions, and locating objects with a very high degree of deformation. We use a physically-based shape model to obtain a deformable template, which is defined on a canonical orthogonal coordinate system. The proposed methodology works starting from the output of an edge detector, which is processed to automatically obtain an approximation of the shape. The final estimation of the shapes is obtained by fitting a deformable template model, which is defined on a learned surface of deformation. Results from biological images are presented. (C) 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  257.   Falcao, AX, Udupa, JK, Samarasekera, S, Sharma, S, Hirsch, BE, and Lotufo, RDA, "User-steered image segmentation paradigms: Live wire and live lane," GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 60, pp. 233-260, 1998.

Abstract:   In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide effective control to the user on the segmentation process while it is being executed, and (ii) to minimize the total user's time required in the process. With these goals in mind, we present in this paper two paradigms, referred to as live wire and live lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its "boundariness," and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (live-wire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes as a function of the speed and acceleration of cursor motion. Live-wire segments are generated and displayed in real time between successive points. The users get the feeling that the curve snaps onto the boundary as and while they roughly mark in the vicinity of the boundary. We describe formal evaluation studies to compare the utility of the new methods with that of manual tracing based on speed and repeatability of tracing and on data taken from a large ongoing application. The studies indicate that the new methods are statistically significantly more repeatable and 1.5-2.5 times faster than manual tracing. (C) 1998 Academic Press.
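A stripped-down version of the live-wire graph search (just Dijkstra's algorithm over an 8-connected pixel grid with a generic per-pixel cost; the paper's trained feature-to-cost mapping and the live-lane mechanics are not reproduced) could look like the sketch below. The function name and cost convention are assumptions:

import heapq
import numpy as np

def live_wire_path(cost, start, end):
    # cost: 2-D array of per-pixel 'boundariness' costs (low on boundaries).
    # start, end: (row, col) tuples.  Returns the optimal pixel path.
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                      # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc] * np.hypot(dr, dc)
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:                  # walk back along the predecessor map
        node = prev[node]
        path.append(node)
    return path[::-1]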

  258.   Bardinet, E, Cohen, LD, and Ayache, N, "A parametric deformable model to fit unstructured 3D data," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 71, pp. 39-54, 1998.

Abstract:   In many computer vision and image understanding problems, it is important to find a smooth surface that fits a set of given unstructured 3D data. Although approaches based on general deformable models give satisfactory results, in particular a local description of the surface, they involve large linear systems to solve when dealing with high resolution 3D images. The advantage of parametric deformable templates like superquadrics is their small number of parameters to describe a shape. However, the set of shapes described by superquadrics is too limited to approximate precisely complex surfaces. This is why hybrid models have been introduced to refine the initial approximation. This article introduces a deformable superquadric model based on a superquadric fit followed by a free-form deformation (FFD) to fit unstructured 3D points. At the expense of a reasonable number of additional parameters, free-form deformations provide a much closer fit and a volumetric deformation field. We first present the mathematical and algorithmic details of the method. Then, since we are mainly concerned with applications for medical images, we present a medical application consisting in the reconstruction of the left ventricle of the heart from a number of various 3D cardiac images. The extension of the method to track anatomical structures in spatio-temporal images (4D data) is presented in a companion article [9]. (C) 1998 Academic Press.

  259.   Snel, JG, Venema, HW, and Grimbergen, CA, "Detection of the carpal bone contours from 3-D MR images of the wrist using a planar radial scale-space snake," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 17, pp. 1063-1072, 1998.

Abstract:   In this paper we consider the problems encountered when applying snake models to detect the contours of the carpal bones in 3-D MR images of the wrist. In order to improve the performance of the original snake model introduced by Kass [1], we propose a new image force based on one-dimensional (1-D) second-order Gaussian filtering and contrast equalization. The improved snake is less sensitive to model initialization and has no tendency to cut off contour sections of high curvature, because 1-D radial scale-space relaxation is used. Contour orientation is used to minimize the influence of neighboring image structures. Due to 1-D contrast equalization an intensity insensitive measure of external energy is obtained. As a consequence a good balance between internal and external energetic contributions of the snake is established, which also improves convergence. By incorporating this new image force into the snake model, we succeed in accurate contour detection, even when relatively high noise levels are present and when the contrast varies along the contours of the bones.
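To give a flavour of 1-D second-order Gaussian filtering along radial directions (only the general idea; the paper's scale-space relaxation, contrast equalization, and orientation handling are not reproduced here), a sketch might sample rays from a seed point inside the bone and filter each radial profile. All names and parameter values are illustrative:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def radial_edge_response(image, center, n_rays=180, n_samples=100, sigma=2.0):
    # Sample intensity along radial rays from 'center', then apply a 1-D
    # second-derivative-of-Gaussian filter (order=2) to every profile.
    h, w = image.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    radii = np.linspace(0.0, min(h, w) / 2.0 - 1.0, n_samples)
    rows = np.clip((center[0] + radii[None, :] * np.sin(angles)[:, None]).astype(int), 0, h - 1)
    cols = np.clip((center[1] + radii[None, :] * np.cos(angles)[:, None]).astype(int), 0, w - 1)
    profiles = image[rows, cols].astype(float)          # shape: (n_rays, n_samples)
    return gaussian_filter1d(profiles, sigma, axis=1, order=2)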

  260.   Lam, KM, and Yan, H, "An analytic-to-holistic approach for face recognition based on a single frontal view," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 20, pp. 673-686, 1998.

Abstract:   In this paper, we propose an analytic-to-holistic approach which can identify faces at different perspective variations. The database for the test consists of 40 frontal-view faces. The first step is to locate 15 feature points on a face. A head model is proposed, and the rotation of the face can be estimated using geometrical measurements. The positions of the feature points are adjusted so that their corresponding positions for the frontal view are approximated. These feature points are then compared with the feature points of the faces in a database using a similarity transform. In the second step, we set up windows for the eyes, nose, and mouth. These feature windows are compared with those in the database by correlation. Results show that this approach can achieve a similar level of performance from different viewing directions of a face. Under different perspective variations, the overall recognition rates are over 84 percent and 96 percent for the first and the first three likely matched faces, respectively.

  261.   Cheng, D, Mercer, RE, Barron, JL, and Joe, P, "Tracking severe weather storms in Doppler radar images," INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, vol. 9, pp. 201-213, 1998.

Abstract:   We describe an automatic storm-tracking system to help with the forecasting of severe storms. In this article, we present the concepts of fuzzy point, fuzzy vector, fuzzy length of a fuzzy vector, and fuzzy angle between two nonzero fuzzy vectors, that are used in our tracking algorithm. These concepts are used to overcome some of the limitations of our previous work, where fixed center-of-mass storm centers did not provide smooth tracks over time, while at the same time, their detection was very threshold sensitive. Our algorithm uses region splitting with dynamic thresholding to determine storm masses in Doppler radar intensity images. We represent the center of a hypothesized storm using a fuzzy point. These fuzzy storm centers are then tracked over time using an incremental relaxation algorithm. We have developed a visualization program using the X11 Athena toolkit for our storm visualization tool. The algorithm was tested on seven real radar image sequences obtained from the Atmospheric Environment Service radar station at King City, Ontario, Canada. We can obtain storm tracks that are long and smooth and which closely match an expert meteorologist's perception. (C) 1998 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 9, 201-213, 1998.

  262.   Grenander, U, and Miller, MI, "Computational anatomy: An emerging discipline," QUARTERLY OF APPLIED MATHEMATICS, vol. 56, pp. 617-694, 1998.

Abstract:   This paper studies mathematical methods in the emerging new discipline of Computational Anatomy. Herein we formalize the Brown/Washington University model of anatomy following the global pattern theory introduced in [1, 2], in which anatomies are represented as deformable templates, collections of 0, 1, 2, 3-dimensional manifolds. Typical structure is carried by the template, with the variabilities accommodated via the application of random transformations to the background manifolds. The anatomical model is a quadruple (Omega, H, I, P): the background space Omega = union over alpha of M_alpha of 0, 1, 2, 3-dimensional manifolds; the set of diffeomorphic transformations on the background space, H : Omega <-> Omega; the space of idealized medical imagery I; and P, the family of probability measures on H. The group of diffeomorphic transformations H is chosen to be rich enough so that a large family of shapes may be generated with the topologies of the template maintained. For normal anatomy one deformable template is studied, with (Omega, H, I) corresponding to a homogeneous space [3], in that it can be completely generated from one of its elements, I = H I_temp, with I_temp an element of I. For disease, a family of templates, the union over alpha of I_temp^alpha, is introduced, of perhaps varying dimensional transformation classes. The complete anatomy is a collection of homogeneous spaces I_total = union over alpha of (I^alpha, H^alpha). There are three principal components to the computational anatomy studied herein. (1) Computation of large deformation maps: given any two elements I, I' of I in the same homogeneous anatomy (Omega, H, I), compute diffeomorphisms h carrying one anatomy to the other, I <-> I' under h and h^-1. This is the principal method by which anatomical structures are understood, transferring the emphasis from the images I in I to the structural transformations h in H that generate them. (2) Computation of empirical probability laws: given populations of anatomical imagery and the diffeomorphisms between them, I <-> I_n under h_n and h_n^-1, n = 1, ..., N, generate probability laws P in P on H that represent the anatomical variation reflected by the observed population of diffeomorphisms h_n, n = 1, ..., N. (3) Inference and disease testing: within the anatomy (Omega, H, I, P), perform Bayesian classification and testing for disease and anomaly.

  263.   Long, Q, Xu, XY, Collins, MW, Bourne, M, and Griffith, TM, "Magnetic resonance image processing and structured grid generation of a human abdominal bifurcation," COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, vol. 56, pp. 249-259, 1998.

Abstract:   Magnetic resonance angiography (MRA) offers a non-invasive approach to the acquisition of anatomically accurate human arterial structure. Combining the latest computational fluid dynamics (CFD) techniques with clinical data from MRA, detailed haemodynamic information on the human circulation system can be obtained. In this paper, a novel computer method is presented which automatically generates a computational grid for a human abdominal bifurcation from a set of conventional MRA images. The method covers the complete sequence from MR image segmentation, 3-D model construction, and grid generation to grid quality evaluation. Results demonstrate that the computer program developed is capable of generating a good quality grid for human arterial bifurcations from MRA images with minimum user input. The resultant grid can be used directly for further computer simulation of the flow. (C) 1998 Elsevier Science Ireland Ltd. All rights reserved.

  264.   Morrow-Tesch, J, Dailey, JW, and Jiang, H, "A video data base system for studying animal behavior," JOURNAL OF ANIMAL SCIENCE, vol. 76, pp. 2605-2608, 1998.

Abstract:   Classification of farm animal behavior is based on oral or written descriptions of the activity in which the animal is engaged. The quantification of animal behavior for research requires that individuals recognize and code the behavior of the animal under study. The classification of these behaviors can be subjective and may differ among observers. Illustrated guides to animal behavior do not convey the motion associated with most behaviors. Video-based guides offer a method of quantifying behaviors with real-time demonstrations of the components that make up a behavior. An animal behavior encyclopedia has been developed to allow searching and viewing of defined (video-recorded) behaviors on the Internet. This video data base is being developed to initiate a system that automatically extracts animal motion information from an input animal activity video clip using a multiobject tracking and reasoning system. Eventually, the extracted information will be analyzed and described using standard animal behavior definitions (the behavior encyclopedia). The intended applications of the behavior encyclopedia and video tracking system are 1) an accessible data base for defining and illustrating behaviors for both research and teaching and 2) to further automate the collection of animal behavior data.

  265.   Hu, YL, Rogers, WJ, Coast, DA, Kramer, CM, and Reichek, N, "Vessel boundary extraction based on a global and local deformable physical model with variable stiffness," MAGNETIC RESONANCE IMAGING, vol. 16, pp. 943-951, 1998.

Abstract:   Reliable and efficient vessel cross-sectional boundary extraction is very important for many medical magnetic resonance (MR) image studies. General purpose edge detection algorithms often fail in medical MR image processing due to fuzzy boundaries, inconsistent image contrast, missing edge features, and the complicated background of MR images. In this regard, we present a vessel cross-sectional boundary extraction algorithm based on a global and local deformable model with variable stiffness. With the global model, the algorithm can handle relatively large vessel position shifts and size changes. The local deformation with variable stiffness parameters enables the model to stay right on edge points at locations where edge features are strong and, at the same time, fit a smooth contour at locations where edge features are missing. Directional gradient information is used to help the model pick correct edge segments. The algorithm was used to process MR cine phase-contrast images of the aorta from 20 volunteers (over 500 images) with excellent results. (C) 1998 Elsevier Science Inc.

  266.   Wolberg, G, "Image morphing: a survey," VISUAL COMPUTER, vol. 14, pp. 360-372, 1998.

Abstract:   Image morphing has received much attention in recent years. It has proven to be a powerful tool for visual effects in film and television, enabling the fluid transformation of one digital image into another. This paper surveys the growth of this field and describes recent advances in image morphing in terms of feature specification, warp generation methods, and transition control. These areas relate to the ease of use and quality of results. We describe the role of radial basis functions, thin plate splines, energy minimization, and multilevel free-form deformations in advancing the state-of-the-art in image morphing. Recent work on a generalized framework for morphing among multiple images is described.

  267.   Gao, PS, and Sederberg, TW, "A work minimization approach to image morphing," VISUAL COMPUTER, vol. 14, pp. 390-400, 1998.

Abstract:   We present an algorithm for morphing two images, often with little or no user interaction. For two similar images (such as different faces against a neutral background), the algorithm generally can create a pleasing morph completely automatically. The algorithm seeks to minimize the work needed to deform one image into the other. Work is defined as a function of the amount of warping and recoloration. We invoke a hierarchical method for finding a minimal work solution. Anchor point constraints are satisfied by penalties imposed on deformations that disobey these constraints. Good results can be obtained in less than 10 s for 256x256 images.

  268.   Tang, CK, Medioni, G, and Duret, F, "Automatic, accurate surface model inference for dental CAD/CAM," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 732-742, 1998.

Abstract:   Dental CAD/CAM offers the prospects of drastically reducing the time to provide service to patients, with no compromise on quality. Given the state-of-the-art in sensing, design, and machining, an attractive approach is to have a technician generate a restorative design in wax, which can then be milled by a machine in porcelain or titanium. The difficulty stems from the inherent outlier noise in the measurement phase. Traditional techniques remove noise at the cost of smoothing, degrading discontinuities such as anatomical lines which require accuracy up to 5 to 10 mu m to avoid artifacts. This paper presents an efficient method for the automatic and accurate data validation and 3-D shape inference from noisy digital dental measurements. The input consists of 3-D points with spurious samples, as obtained from a variety of sources such as a laser scanner or a stylus probe. The system produces faithful smooth surface approximations while preserving critical curve features such as grooves and preparation lines. To this end, we introduce the Tensor Voting technique, which efficiently ignores noise, infers smooth structures, and preserves underlying discontinuities. This method is non-iterative, does not require initial guess, and degrades gracefully with spurious noise, missing and erroneous data. We show results on real and complex data.

  269.   Jiang, HT, and Elmagarmid, AK, "Spatial and temporal content-based access to hypervideo databases," VLDB JOURNAL, vol. 7, pp. 226-238, 1998.

Abstract:   Providing content-based video query, retrieval and browsing is the most important goal of a video database management system (VDBMS). Video data is unique not only in terms of its spatial and temporal characteristics, but also in the semantic associations manifested by the entities present in the video. This paper introduces a novel video data model called the Logical Hypervideo Data Model. In addition to multilevel video abstractions, the model is capable of representing video entities that users are interested in (defined as hot objects) and their semantic associations with other logical video abstractions, including hot objects themselves. The semantic associations are modeled as video hyperlinks, and video data with such properties are called hypervideo. Video hyperlinks provide a flexible and effective way of browsing video data. Based on the proposed model, video queries can be specified with both temporal and spatial constraints, as well as with semantic descriptions of the video data. The characteristics of hot objects' spatial and temporal relations and efficient evaluation of them are also discussed. Some query examples are given to demonstrate the expressiveness of the video data model and query language. Finally, we describe a modular video database system architecture that our web-based prototype is based on.

  270.   MacDonald, D, Avis, D, and Evans, AC, "Proximity constraints in deformable models for cortical surface identification," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 650-659, 1998.

Abstract:   Automatic computer processing of large multi-dimensional images such as those produced by magnetic resonance imaging (MRI) is greatly aided by deformable models. A general method of deforming polyhedra is presented here, with two novel features. Firstly, explicit prevention of non-simple (self-intersecting) surface geometries is provided, unlike conventional deformable models which merely discourage such behaviour. Secondly, simultaneous deformation of multiple surfaces with inter-surface proximity constraints provides a greater facility for incorporating model-based constraints into the process of image recognition. These two features are used advantageously to automatically identify the total surface of the cerebral cortical gray matter from normal human MR images, accurately locating the depths of the sulci even where under-sampling in the image obscures the visibility of sulci. A large number of individual surfaces (N=151) are created and a spatial map of the mean and standard deviation of the cerebral cortex and the thickness of cortical gray matter are generated. Ideas for further work are outlined.

  271.   Wink, O, Niessen, WJ, and Viergever, MA, "Fast quantification of abdominal aortic aneurysms from CTA volumes," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 138-145, 1998.

Abstract:   A method is presented which aids the clinician in obtaining quantitative measurements of an abdominal aortic aneurysm from a CTA volume. These measurements are needed in the preoperative evaluation of candidates for minimally invasive aneurysmal repair. The user initializes starting points in the iliac artery. Subsequently, an iterative tracking procedure outlines the central lumen line in the aorta and the iliac arteries. Quantitative measurements on vessel morphology are performed in the planes perpendicular to the vessel axis. The entire process is performed in less than one minute on a standard workstation. In addition to the presentation of the calculated measures, a 3D view of the vessels is generated. This allows for interactive inspection of the vasculature and the tortuosity of the vessels.

  272.   Positano, V, Santarelli, MF, Landini, L, and Benassi, A, "Fast and quantitative analysis of 4D cardiac images using a SMP architecture," APPLIED PARALLEL COMPUTING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1541, pp. 447-451, 1998.

Abstract:   In the present research a parallel algorithm for medical image processing is proposed which allows 3D quantitative analysis of left ventricular cardiac wall motion in real time. Such analysis is a fundamental task in evaluating many indexes useful for the diagnosis of important diseases. However, it involves computationally expensive tasks: three-dimensional segmentation and accurate cavity contour detection over the entire cardiac cycle. In this paper an implementation of a dynamic quantitative analysis algorithm on a low-cost Shared Memory Processor machine is described. In order to test the developed system in an actual environment, a dynamic sequence of 3D data volumes, derived from Magnetic Resonance (MR) cardiac images, has been processed.

  273.   Lotjonen, J, Magnin, IE, Reissman, PJ, Nenonen, J, and Katila, T, "Segmentation of magnetic resonance images using 3D deformable models," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 1213-1221, 1998.

Abstract:   A new method to segment MR volumes has been developed. The method elastically matches a 3D deformable prior model, describing the structures of interest, to the MR volume of a patient. The deformation is done using a deformation grid. Oriented distance maps are utilized to guide the deformation process. Two alternative restrictions are used to preserve the geometrical prior knowledge of the model. The method is applied to extract the body, the lungs and the heart. The segmentation is needed to build individualized boundary element models for the bioelectromagnetic inverse problem. The method is fast, automatic and accurate. Good results have been achieved for the four MR volumes tested so far.

  274.   Lorigo, LM, Faugeras, O, Grimson, WEL, Keriven, R, and Kikinis, R, "Segmentation of bone in clinical knee MRI using texture-based geodesic active contours," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 1195-1204, 1998.

Abstract:   This paper presents a method for automatic segmentation of the tibia and femur in clinical magnetic resonance images of knees. Texture information is incorporated into an active contours framework through the use of vector-valued geodesic snakes with local variance as a second value at each pixel, in addition to intensity. This additional information enables the system to better handle noise and the non-uniform intensities found within the structures to be segmented. It currently operates independently on 2D images (slices of a volumetric image) where the initial contour must be within the structure but not necessarily near the boundary. These separate segmentations are stacked to display the performance on the entire 3D structure.
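Computing the local-variance channel that accompanies intensity in such vector-valued snakes is straightforward; the short sketch below (the window size and function name are illustrative choices, not taken from the paper) derives it from two box filters:

import numpy as np
from scipy import ndimage

def local_variance(image, size=5):
    # Local variance in a size x size window: E[x^2] - (E[x])^2.
    img = image.astype(float)
    mean = ndimage.uniform_filter(img, size)
    mean_sq = ndimage.uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)   # clip tiny negative rounding errors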

  275.   Sebastian, TB, Tek, H, Crisco, JJ, Wolfe, SW, and Kimia, BB, "Segmentation of carpal bones from 3D CT images using skeletally coupled deformable models," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 1184-1194, 1998.

Abstract:   The in vivo investigation of joint kinematics in the normal and injured wrist requires the segmentation of carpal bones from 3D CT images and their registration over time. The non-uniformity of bone tissue, ranging from dense cortical bone to textured spongy bone; the irregular, small shapes of the closely packed carpal bones, which move with respect to one another and are small with respect to the CT resolution; the presence of blood vessels; and the inherent blurring of CT imaging render the segmentation of carpal bones a challenging task. Specifically, four characteristic difficulties are prominent: (i) gaps or weak edges in the carpal bone surfaces, (ii) diffused edges, (iii) textured regions, and (iv) extremely narrow inter-bone regions. We review the performance of statistical classification, deformable models, region growing, and morphological operations for this application. We then propose a model which combines several of these approaches in a single framework. Specifically, initialized seeds grow in a curve evolution implementation of active contours, where growth is modulated by a skeletally mediated competition between neighboring regions, thus combining the advantages of local and global region growing methods, region competition, and active contours. This approach effectively deals with many of the difficulties presented above, as illustrated by numerous examples.

  276.   Poupon, F, Mangin, JF, Hasboun, D, Poupon, C, Magnin, I, and Frouin, V, "Multi-object deformable templates dedicated to the segmentation of brain deep structures," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI'98, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1496, pp. 1134-1143, 1998.

Abstract:   We propose a new way of embedding shape distributions in a topological deformable template. These distributions rely on global shape descriptors corresponding to the 3D moment invariants. Unlike the usual Fourier-like descriptors, they can be updated during deformations at a relatively low cost. The moment-based distributions are included in a framework allowing the management of several simultaneously deforming objects. This framework is dedicated to the segmentation of brain deep nuclei in 3D MR images. The paper focuses on the learning of the shape distributions, on the initialization of the topological model, and on the multi-resolution energy minimization process. Results are presented showing the segmentation of twelve brain deep structures.

  277.   Xu, CY, and Prince, JL, "Generalized gradient vector flow external forces for active contours," SIGNAL PROCESSING, vol. 71, pp. 131-139, 1998.

Abstract:   Active contours, or snakes, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. A new type of external force for active contours, called gradient vector flow (GVF), was introduced recently to address problems associated with initialization and poor convergence to boundary concavities. GVF is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. In this paper, we generalize the GVF formulation to include two spatially varying weighting functions. This improves active contour convergence to long, thin boundary indentations, while maintaining other desirable properties of GVF, such as an extended capture range. The original GVF is a special case of this new generalized GVF (GGVF) model. An error analysis for active contour results on simulated test images is also presented. (C) 1998 Elsevier Science B.V. All rights reserved.
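Since this page is devoted to gradient vector flow, a compact numerical sketch of the generalized field may be helpful. The field is initialised with the edge-map gradient and relaxed under v_t = g(|grad f|) Laplacian(v) - h(|grad f|) (v - grad f), with g = exp(-|grad f|/K) and h = 1 - g as in the GGVF paper; the step size, iteration count, and K below are illustrative settings rather than prescribed values:

import numpy as np
from scipy import ndimage

def ggvf(edge_map, K=0.05, iters=200, dt=0.2):
    # edge_map: 2-D edge map f (e.g. gradient magnitude of the image), roughly in [0, 1].
    fy, fx = np.gradient(edge_map)           # row and column components of grad(f)
    g = np.exp(-np.hypot(fx, fy) / K)        # spatially varying smoothing weight
    h = 1.0 - g                              # spatially varying data-fidelity weight
    u, v = fx.copy(), fy.copy()              # initialise the field with grad(f)
    for _ in range(iters):                   # explicit diffusion iterations
        u += dt * (g * ndimage.laplace(u) - h * (u - fx))
        v += dt * (g * ndimage.laplace(v) - h * (v - fy))
    return u, v                              # u: column (x) component, v: row (y) component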

  278.   Montagnat, J, and Delingette, H, "Globally constrained deformable models for 3D object reconstruction," SIGNAL PROCESSING, vol. 71, pp. 173-186, 1998.

Abstract:   To achieve geometric reconstruction from 3D datasets, two complementary approaches have been widely used. On one hand, the deformable model framework locally applies forces to fit the data. On the other hand, the non-rigid registration framework computes a global transformation minimizing the distance between a template and the data. We first show that applying a global transformation to a surface template is equivalent to applying certain global forces to a deformable model. Second, we propose a scheme which combines registration and free-form deformation. This globally constrained deformation scheme allows us to control the amount of deformation from the reference shape with a single parameter. Finally, we propose a general algorithm for performing model-based reconstruction in a robust and accurate manner. Examples on both range data and medical images are used to illustrate and validate the globally constrained deformation framework. (C) 1998 Elsevier Science B.V. All rights reserved.
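To make the trade-off between a global transformation and free local deformation concrete, the sketch below rigidly registers template points to their locally deformed positions and blends the two with a single weight. This is only in the spirit of the globally constrained scheme described above, not the authors' formulation; the names, the one-to-one point correspondence, and the choice of a rigid (rather than more general) global transform are assumptions:

import numpy as np

def rigid_fit(src, dst):
    # Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch, no scaling).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, mu_d - R @ mu_s

def constrained_update(model_pts, free_pts, lam):
    # Blend the rigidly registered template with the freely deformed positions;
    # lam = 1 keeps the global (template) shape, lam = 0 follows the free deformation.
    R, t = rigid_fit(model_pts, free_pts)
    global_pts = model_pts @ R.T + t
    return lam * global_pts + (1.0 - lam) * free_pts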

  279.   Pavlidis, I, Papanikolopoulos, N, and Mavuduru, R, "Signature identification through the use of deformable structures," SIGNAL PROCESSING, vol. 71, pp. 187-201, 1998.

Abstract:   Automatic signature verification is a well-established and active research area with numerous applications. In contrast, automatic signature identification has been given little attention, although there is a vast array of potential applications that could use the signature as an identification tool. This paper presents a novel approach to the problem of signature identification. We introduce the use of the revolving active deformable model as a powerful way of capturing the unique characteristics of the overall structure of a signature. Experimental evidence as well as intuition support the idea that the overall structure of a signature uniquely determines the signature in the majority of cases. Our revolving active deformable model originates from the snakes introduced in computer vision by Kass et al., but its implementation has been tailored to the task at hand. This computer-generated model interacts with the virtual gravity field created by the image gradient. Ideally, the uniqueness of this interaction mirrors the uniqueness of the signature's overall structure. The proposed method obviates the use of a computationally expensive segmentation approach and is parallelizable. The experiments performed with a signature database show that the proposed method is promising. (C) 1998 Elsevier Science B.V. All rights reserved.

  280.   Hinshaw, KP, and Brinkley, JF, "Incorporating constraint-based shape models into an interactive system for functional brain mapping," JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, vol. 71, pp. 921-925, 1998.

Abstract:   Through intraoperative electrical stimulation mapping, it is possible to identify sites on the surface of the brain that are essential for language function. Interesting correlations have been found between the distribution of these sites and behavioral traits such as verbal IQ. In previous work, tools were developed for building a reconstruction of a patient's cortical surface and using it to recover coordinates of essential language sites. However, considerable expertise was required to produce good reconstructions. This paper describes an improved version of the mapping procedure, in which segmentation is driven by a 3-D shape model. The model-based approach provides more intuitive control over the system, allowing a trained user to complete a surface reconstruction and mapping in about two hours. This level of performance makes it feasible to gather language maps for a large number of patients, which hopefully will lead to significant new findings about language organization in the brain.

  281.   Gupta, K, "Motion planning for flexible shapes (systems with many degrees of freedom): a survey," VISUAL COMPUTER, vol. 14, pp. 288-302, 1998.

Abstract:   This article provides a brief tutorial-cum-overview of motion planning for "flexible" shapes. The article takes the point of view that motion planning for flexible shapes, in a broad sense, essentially amounts to motion planning for systems with many degrees of freedom (dofs), a well-studied problem in robotics. We start with the basics of motion planning including an introduction to some key concepts, survey a number of recent approaches to solve motion planning for systems with many dofs, discuss the application of some of these approaches to motion planning for flexible shapes, and report on some recent work in this area.

  282.   Brandtberg, T, and Walter, F, "Automated delineation of individual tree crowns in high spatial resolution aerial images by multiple-scale analysis," MACHINE VISION AND APPLICATIONS, vol. 11, pp. 64-73, 1998.

Abstract:   This paper presents an automatic multiple-scale algorithm for delineation of individual tree crowns in high spatial resolution infrared colour aerial images. The tree crown contours were identified as zero-crossings, with convex grey-level curvature, which were computed on the intensity image for each image scale. A modified centre of curvature was estimated for every edge segment pixel. For each segment, these centre points formed a swarm which was modelled as a primal sketch using an ellipse extended with the mean circle of curvature. The model described the region of the derived tree crown based on the edge segment at the current scale. The sketch was rescaled with a significance value and accumulated for a scale interval. In the accumulated sketch, a tree crown segment was grown, starting at local peaks, under the condition that it was inside the area of healthy vegetation in the aerial image and did not trespass into a neighbouring crown segment. The method was evaluated by comparison with manual delineation and with ground truth on 43 randomly selected sample plots. It was concluded that the performance of the method is almost equivalent to visual interpretation. On the average, seven out of ten tree crowns were the same. Furthermore, ground truth indicated a large number of hidden trees. The proposed technique could be used as a basic tool in forest surveys.

  283.   Ip, HHS, and Shen, DG, "An affine-invariant active contour model (AI-snake) for model-based segmentation," IMAGE AND VISION COMPUTING, vol. 16, pp. 135-146, 1998.

Abstract:   In this paper, we show that existing shape-based active contour models are not affine-invariant, and we address the problem by presenting an affine-invariant snake model (AI-snake) whose energy function is defined in terms of local and global affine-invariant features. The main characteristic of the AI-snake is that, during the process of object extraction, the pose of the model contour is dynamically adjusted such that it is in alignment with the current snake contour by solving the snake-prototype correspondence problem and determining the required affine transformation. In addition, we formulate the correspondence matching between the snake and the object prototype as an error minimization process between two feature vectors which capture both local and global deformation information. We show that the technique is robust against object deformations and complex scenes. (C) 1998 Elsevier Science B.V.

  284.   Tupin, F, Maitre, H, Mangin, JF, Nicolas, JM, and Pechersky, E, "Detection of linear features in SAR images: Application to road network extraction," IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 36, pp. 434-453, 1998.

Abstract:   We propose a two-step algorithm for almost unsupervised detection of linear structures, in particular, main axes in road networks, as seen in synthetic aperture radar (SAR) images. The first step is local and is used to extract linear features from the speckle radar image, which are treated as road-segment candidates. We present two local line detectors as well as a method for fusing information from these detectors. In the second, global step, we identify the real roads among the segment candidates by defining a Markov random field (MRF) on a set of segments, which introduces contextual knowledge about the shape of road objects. The influence of the parameters on the road detection is studied and results are presented for various real radar images.

  285.   Calabi, E, Olver, PJ, Shakiban, C, Tannenbaum, A, and Haker, S, "Differential and numerically invariant signature curves applied to object recognition," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 26, pp. 107-135, 1998.

Abstract:   We introduce a new paradigm, the differential invariant signature curve or manifold, for the invariant recognition of visual objects. A general theorem of E. Cartan implies that two curves are related by a group transformation if and only if their signature curves are identical. The important examples of the Euclidean and equi-affine groups are discussed in detail. Secondly, we show how a new approach to the numerical approximation of differential invariants, based on suitable combination of joint invariants of the underlying group action, allows one to numerically compute differential invariant signatures in a fully group-invariant manner. Applications to a variety of fundamental issues in vision, including detection of symmetries, visual tracking, and reconstruction of occlusions, are discussed.

  286.   Younes, L, "Computable elastic distances between shapes," SIAM JOURNAL ON APPLIED MATHEMATICS, vol. 58, pp. 565-586, 1998.

Abstract:   We define distances between geometric curves by the square root of the minimal energy required to transform one curve into the other. The energy is formally defined from a left invariant Riemannian distance on an infinite dimensional group acting on the curves, which can be explicitly computed. The obtained distance boils down to a variational problem for which an optimal matching between the curves has to be computed. An analysis of the distance when the curves are polygonal leads to a numerical procedure for the solution of the variational problem, which can efficiently be implemented, as illustrated by experiments.

  287.   Fua, P, "Fast, accurate and consistent modeling of drainage and surrounding terrain," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 26, pp. 215-234, 1998.

Abstract:   We propose an automated approach to modeling drainage channels-and, more generally, linear features that lie on the terrain-from multiple images. It produces models of the features and of the surrounding terrain that are accurate and consistent and requires only minimal human intervention. We take advantage of geometric constraints and photometric knowledge. First, rivers flow downhill and lie at the bottom of valleys whose floors tend to be either V- or U-shaped. Second, the drainage pattern appears in gray-level images as a network of linear features that can be visually detected. Many approaches have explored individual facets of this problem. Ours unifies these elements in a common framework. We accurately model terrain and features as 3-dimensional objects from several information sources that may be in error and inconsistent with one another. This approach allows us to generate models that are faithful to sensor data, internally consistent and consistent with physical constraints. We have proposed generic models that have been applied to the specific task at hand. We show that the constraints can be expressed in a computationally effective way and, therefore, enforced while initializing the models and then fitting them to the data. Furthermore, these techniques are general enough to work on other features that are constrained by predictable forces.

  288.   Noble, JA, Gupta, R, Mundy, J, Schmitz, A, and Hartley, RI, "High precision X-ray stereo for automated 3-D CAD-based inspection," IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, vol. 14, pp. 292-302, 1998.

Abstract:   An important challenge in industrial metrology is to provide rapid measurement of critical three-dimensional (3-D) internal object geometry for either inspecting high volume parts or controlling a machining process. Existing metrological techniques are typically too slow to meet this need or cannot measure small features with high precision. In this paper, we present a new method that achieves fast, accurate, internal 3-D geometry measurement based on 3-D reconstruction from a few X-ray views of a part. Our approach utilizes an accurate camera model for the X-ray sensor, calibration using in situ ground truth, and geometry-guided X-ray feature extraction to achieve this goal, and has been fully implemented in a prototype 3-D measurement system. We describe a novel application of the system to CAD-based verification of drilled hole positioning. Experimental results are given to illustrate the precision of the system and 3-D measurement on real industrial parts.

  289.   Chesnaud, C, Page, V, and Refregier, P, "Improvement in robustness of the statistically independent region snake-based segmentation method of target-shape tracking," OPTICS LETTERS, vol. 23, pp. 488-490, 1998.

Abstract:   We propose a technique to increase the robustness of a snake-based segmentation method originally introduced to track the shape of a target with random white Gaussian intensity upon a random white Gaussian background. Because these statistical conditions are not always fulfilled with optronic images, we describe two improvements that increase the field of application of this approach. We first show that regularized whitening preprocessing allows one to apply the original method successfully for a target with a correlated texture upon a correlated background. We then introduce a simple multiscale approach that increases the robustness of the segmentation against the initialization of the snake (i.e., the initial shape used for the segmentation). These results provide a robust and practical method for determination of the reference image for correlation techniques. (C) 1998 Optical Society of America.

  290.   Casadei, S, and Mitter, S, "Hierarchical image segmentation - Part I: Detection of regular curves in a vector graph," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 27, pp. 71-100, 1998.

Abstract:   The problem of edge detection is viewed as a hierarchy of detection problems where the geometric objects to be detected (e.g., edge points, curves, regions) have increasing complexity and spatial extent. An early stage of the proposed hierarchy consists in detecting the regular portions of the visible edges. The input to this stage is given by a graph whose vertices are tangent vectors representing local and uncertain information about the edges. A model relating the input vector graph to the curves to be detected is proposed. An algorithm with linear time complexity is described which solves the corresponding detection problem in a worst-case scenario. The stability of curve reconstruction in the presence of uncertain information and multiple responses to the same edge is analyzed and addressed explicitly by the proposed algorithm.

  291.   Malassiotis, S, and Strintzis, MG, "Tracking textured deformable objects using a finite-element mesh," IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 8, pp. 756-774, 1998.

Abstract:   This paper presents an algorithm for the estimation of the motion of textured objects undergoing nonrigid deformations over a sequence of images. An active mesh model, which is a finite-element deformable membrane, is introduced in order to achieve efficient representation of global and local deformations. The mesh is constructed using an adaptive triangulation procedure that places more triangles over high detail areas. Through robust least squares techniques and modal analysis, efficient estimation of global object deformations is achieved, based on a set of sparse displacement measurements. A local warping procedure is then applied to minimize the intensity matching error between subsequent images, and thus estimate local deformations. Among the major contributions of this paper are novel techniques developed to acquire knowledge of the object dynamics and structure directly from the image sequence, even in the absence of prior intelligence regarding the scene. Specifically, a coarse-to-fine estimation scheme is first developed, which adapts the model to locally deforming features. Subsequently, principal components modal analysis is used to accumulate knowledge of the object dynamics. This knowledge is finally exploited to constrain the object deformation. The problem of tracking the model over time is addressed, and a novel motion-compensated prediction approach is proposed to facilitate this. A novel method for the determination of the dynamical principal axes of deformation is developed. The experimental results demonstrate the efficiency and robustness of the proposed scheme, which has many potential applications in the areas of image coding, image analysis, and computer graphics.

  292.   Luan, JA, Stander, J, and Wright, D, "On shape detection in noisy images with particular reference to ultrasonography," STATISTICS AND COMPUTING, vol. 8, pp. 377-389, 1998.

Abstract:   We discuss the detection of a connected shape in a noisy image. Two types of image are considered: in the first a degraded outline of the shape is visible, while in the second the data are a corrupted version of the shape itself. In the first type the shape is defined by a thin outline of pixels with records that are different from those at pixels inside and outside the shape, while in the second type the shape is defined by its edge and pixels inside and outside the shape have different records. Our motivation is the identification of cross-sectional head shapes in ultrasound images of human fetuses. We describe and discuss a new approach to detecting shapes in images of the first type that uses a specially designed filter function that iteratively identifies the outline pixels of the head. We then suggest a way based on the cascade algorithm introduced by Jubb and Jennison (1991) of improving and considerably increasing the speed of a method proposed by Storvik (1994) for detecting edges in images of the second type.

  293.   Park, JS, and Han, JH, "Contour motion estimation from image sequences using curvature information," PATTERN RECOGNITION, vol. 31, pp. 31-39, 1998.

Abstract:   This paper presents a novel method of velocity field estimation for the points on moving contours in a 2-D image sequence. The method determines the corresponding point in a next image frame by considering the curvature change of a given point on the contour. In traditional methods, there are errors in optical flow estimation for the points which have low curvature variations since those methods compute solutions by approximating normal optical flow. The proposed method computes optical flow vectors of contour points minimizing the curvature changes. As a first step, snakes are used to locate smooth curves in 2-D imagery. Thereafter, the extracted curves are tracked continuously. Each point on a contour has a unique corresponding point on the contour in the next frame whenever the curvature distribution of the contour varies smoothly. The experimental results showed that the proposed method computes accurate optical flow vectors for various moving contours. (C) 1997 Pattern Recognition Society. Published by Elsevier Science Ltd.
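
Since the matching criterion above keys on curvature along a tracked contour, a minimal NumPy sketch of how that quantity can be computed on a discrete closed contour may help. This is an illustrative fragment, not code from the cited paper; the function name contour_curvature is an assumption.

    import numpy as np

    def contour_curvature(points):
        """Signed curvature of a closed 2-D contour sampled as an (n, 2) array.

        Uses periodic central differences and the planar curve formula
        k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2).
        """
        p = np.asarray(points, dtype=float)

        def d(a):
            # central difference on a periodic (closed-contour) sequence
            return (np.roll(a, -1) - np.roll(a, 1)) / 2.0

        dx, dy = d(p[:, 0]), d(p[:, 1])
        ddx, ddy = d(dx), d(dy)
        return (dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)

A correspondence search that prefers small changes in this quantity between frames is one way to realize the kind of curvature-preserving matching the abstract describes.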

  294.   Gunn, SR, and Nixon, MS, "Global and local active contours for head boundary extraction," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 30, pp. 43-54, 1998.

Abstract:   Active contours are an attractive choice to extract the head boundary, for deployment within a face recognition or model-based coding scenario. However, conventional snake approaches can suffer difficulty in initialisation and parameterisation. A dual active contour configuration using dynamic programming has been developed to resolve these difficulties by using a global energy minimisation technique and a simplified parameterisation, to enable a global solution to be obtained. The merits of conventional gradient descent based snake (local) approaches, and search-based (global) approaches are discussed. In application to find head and face boundaries in front-view face images, the new technique employing dynamic programming is deployed to extract the inner face boundary, along with a conventional normal-driven contour to extract the outer (head) boundary. The extracted contours appear to offer sufficient discriminatory capability for inclusion within an automatic face recognition system.

  295.   Jain, AK, Zhong, Y, and Dubuisson-Jolly, MP, "Deformable template models: A review," SIGNAL PROCESSING, vol. 71, pp. 109-129, 1998.

Abstract:   In this paper, we review the recently published work on deformable models. We have chosen to concentrate on 2D deformable models and relate the energy minimization approaches to the Bayesian formulations. We categorize the various active contour systems according to the definition of the deformable model. We also present in detail one particular formulation for deformable templates which combines edge, texture, color and region information for the external energy and model deformations using wavelets, splines or Fourier descriptors. We explain how these models can be used for segmentation, image retrieval in a large database and object tracking in a video sequence. (C) 1998 Elsevier Science B.V. All rights reserved.

  296.   Schultz, N, and Conradsen, K, "2D vector-cycle deformable templates," SIGNAL PROCESSING, vol. 71, pp. 141-153, 1998.

Abstract:   In this paper the theory of deformable templates as a vector cycle in 2D is described. The deformable template model originated in (Grenander, 1983) and was further investigated in (Grenander et al., 1991). A template vector distribution is induced by parameter distributions from transformation matrices applied to the vector cycle. An approximation in the parameter distribution is introduced. The main advantage of using the deformable template model is the ability to simulate a wide range of objects constrained by e.g. their biological variations, and thereby improve restoration, segmentation and classification tasks. For the segmentation the Metropolis algorithm and simulated annealing are used in a Bayesian scheme to obtain a maximum a posteriori estimator. Different energy functions are introduced and applied to different tasks in a case study. The energy functions are local mean, local gradient and probability measurement. The case study concerns estimation of meat percent in pork carcasses. Given two cross-sectional images - one at the front and one near the ham of the carcass - the areas of lean and fat and a muscle in the lean area are measured automatically by the deformable templates. (C) 1998 Elsevier Science B.V. All rights reserved.

  297.   Elmoataz, A, Schupp, S, Clouard, R, Herlin, P, and Bloyet, D, "Using active contours and mathematical morphology tools for quantification of immunohistochemical images," SIGNAL PROCESSING, vol. 71, pp. 215-226, 1998.

Abstract:   An image segmentation method is proposed, which combines mathematical morphology tools and active contours in two stages. First, contours are coarsely approximated by means of morphological operators. Second, these initial contours evolve under the influence of geometric and grey-level information, owing to the model of active contours. The performance of the method is evaluated according to the noise and is compared to the watershed algorithm. Then an application is finally presented for biomedical images of tumour tissue. (C) 1998 Elsevier Science B.V. All rights reserved.

  298.   Basu, S, Oliver, N, and Pentland, A, "3D lip shapes from video: A combined physical-statistical model," SPEECH COMMUNICATION, vol. 26, pp. 131-148, 1998.

Abstract:   Tracking human lips in video is an important but notoriously difficult task. To accurately recover their motions in 3D from any head pose is an even more challenging task, though still necessary for natural interactions. Our approach is to build and train 3D models of lip motion to make up for the information we cannot always observe when tracking. We use physical models as a prior and combine them with statistical models, showing how the two can be smoothly and naturally integrated into a synthesis method and a MAP estimation framework for tracking. We have found that this approach allows us to accurately and robustly track and synthesize the 3D shape of the lips from arbitrary head poses in a 2D video stream. We demonstrate this with numerical results on reconstruction accuracy, examples of static fits, and audio-visual sequences. (C) 1998 Elsevier Science B.V. All rights reserved.

  299.   Aboul-Ella, H, Karam, H, and Nakajima, M, "Image metamorphosis transformation of facial images based on elastic body splines," SIGNAL PROCESSING, vol. 70, pp. 129-137, 1998.

Abstract:   In this paper, we propose a new image metamorphosis algorithm which uses elastic body splines to generate warp functions for interpolating scattered data points. The spline is based on a partial differential equation proposed by Navier that describes the equilibrium displacement of an elastic body subjected to forces. The spline maps can be expressed as the linear combination of an affine transformation and a Navier spline. The proposed algorithm generates a smooth warp that reflects feature point correspondences. It is efficient in time complexity and produces smoothly interpolated morphed images from only a remarkably small number of specified feature points. The algorithm allows each feature point in the source image to be mapped to the corresponding feature point in the destination image. Once the images are warped to align the positions of features and their shapes, the in-between facial animation from two given facial images can be defined by cross dissolving the positions of corresponding features and their shapes and colors. We describe an efficient cross-dissolve algorithm for generating the in-between images. (C) 1998 Published by Elsevier Science B.V. All rights reserved.

  300.   Chen, CW, Luo, JB, and Parker, KJ, "Image segmentation via adaptive K-mean clustering and knowledge-based morphological operations with biomedical applications," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 7, pp. 1673-1683, 1998.

Abstract:   Image segmentation remains one of the major challenges in image analysis, since image analysis tasks are often constrained by how well the previous segmentation is accomplished. In particular, many existing image segmentation algorithms fail to provide satisfactory results when the boundaries of the desired objects are not clearly defined by the image-intensity information. In medical applications, skilled operators are usually employed to extract the desired regions that may be anatomically separate but statistically indistinguishable. Such manual processing is subject to operator errors and biases, is extremely time consuming, and has poor reproducibility. We propose a robust algorithm for the segmentation of three-dimensional (3-D) image data based on a novel combination of adaptive K-mean clustering and knowledge-based morphological operations. The proposed adaptive K-mean clustering algorithm is capable of segmenting the regions of smoothly varying intensity distributions. Spatial constraints are incorporated in the clustering algorithm through the modeling of the regions by Gibbs random fields. Knowledge-based morphological operations are then applied to the segmented regions to identify the desired regions according to the a priori anatomical knowledge of the region-of-interest. This proposed technique has been successfully applied to a sequence of cardiac CT volumetric images to generate the volumes of left ventricle chambers at 16 consecutive temporal frames. Our final segmentation results compare favorably with the results obtained using manual outlining. Extensions of this approach to other applications can be readily made when a priori knowledge of a given object is available.
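
The pipeline sketched above, intensity clustering followed by morphological clean-up, can be illustrated with a deliberately simplified NumPy/SciPy fragment. It is not the authors' adaptive, Gibbs-constrained algorithm; the function name and parameter defaults are assumptions chosen for readability.

    import numpy as np
    from scipy.ndimage import binary_opening, binary_closing, label

    def kmeans_then_morphology(image, k=3, iters=20):
        """Cluster intensities with plain k-means, then clean the brightest
        class with morphological opening/closing and keep the largest blob."""
        x = image.astype(float).ravel()
        centers = np.linspace(x.min(), x.max(), k)           # initial class means
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = x[labels == c].mean()
        seg = labels.reshape(image.shape)
        mask = seg == np.argmax(centers)                      # brightest class
        mask = binary_closing(binary_opening(mask, iterations=2), iterations=2)
        regions, n = label(mask)                              # connected components
        if n > 0:
            sizes = np.bincount(regions.ravel())[1:]
            mask = regions == (1 + np.argmax(sizes))          # keep the largest one
        return mask

In the cited work the "knowledge-based" step also uses anatomical priors about the region of interest; keeping the largest connected component is only a stand-in for that stage.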

  301.   Tsap, LV, Goldgof, DB, Sarkar, S, and Powers, PS, "A vision-based technique for objective assessment of burn scars," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 17, pp. 620-633, 1998.

Abstract:   In this paper a method for the objective assessment of burn scars is proposed. The quantitative measures developed in this research provide an objective way to calculate elastic properties of burn scars relative to the surrounding areas. The approach combines range data and the mechanics and motion dynamics of human tissues. Active contours are employed to locate regions of interest and to find displacements of feature points using automatically established correspondences. Changes in strain distribution over time are evaluated. Given images at two time instances and their corresponding features, the finite element method is used to synthesize strain distributions of the underlying tissues. This results in a physically based framework for motion and strain analysis. Relative elasticity of the burn scar is then recovered using an iterative descent search for the best nonlinear finite element model that approximates the stretching behavior of the region containing the burn scar. The results from the skin elasticity experiments illustrate the ability to objectively detect differences in elasticity between normal and abnormal tissue. These estimated differences in elasticity are correlated against the subjective judgments of physicians, which are presently the standard practice.

  302.   Niessen, WJ, Romeny, BMT, and Viergever, MA, "Geodesic deformable models for medical image analysis," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 17, pp. 634-641, 1998.

Abstract:   In this paper implicit representations of deformable models for medical image enhancement and segmentation are considered. The advantage of implicit models over classical explicit models is that their topology can be naturally adapted to objects in the scene. A geodesic formulation of implicit deformable models is especially attractive since it has the energy minimizing properties of classical models. The aim of this paper is twofold. First, a modification to the customary geodesic deformable model approach is introduced by considering all the level sets in the image as energy minimizing contours. This approach is used to segment multiple objects simultaneously and for enhancing and segmenting cardiac computed tomography (CT) and magnetic resonance images. Second, the approach is used to effectively compare implicit and explicit models for specific tasks. This shows the complementary character of implicit models, since in the case of poor contrast boundaries or gaps in boundaries, e.g. due to partial volume effects, noise, or motion artifacts, they do not perform well, because the approach is completely data-driven.

  303.   Marescaux, J, Clement, JM, Tassetti, V, Koehl, C, Cotin, S, Russier, Y, Mutter, D, Delingette, H, and Ayache, N, "Virtual reality applied to hepatic surgery simulation: The next revolution," ANNALS OF SURGERY, vol. 228, pp. 627-634, 1998.

Abstract:   Objective: This article describes a preliminary work on virtual reality applied to liver surgery and discusses the repercussions of assisted surgical strategy and surgical simulation on tomorrow's surgery. Summary Background Data: Liver surgery is considered difficult because of the complexity and variability of the organ. Common generic tools for presurgical medical image visualization do not fulfill the requirements for the liver, restricting comprehension of a patient's specific liver anatomy. Methods: Using data from the National Library of Medicine, a realistic three-dimensional image was created, including the envelope and the four internal arborescences. A computer interface was developed to manipulate the organ and to define surgical resection planes according to internal anatomy. The first step of surgical simulation was implemented, providing the organ with real-time deformation computation. Results: The three-dimensional anatomy of the liver could be clearly visualized. The virtual organ could be manipulated and a resection defined depending on the anatomic relations between the arborescences, the tumor, and the external envelope. The resulting parts could also be visualized and manipulated. The simulation allowed the deformation of a liver model in real time by means of a realistic laparoscopic tool. Conclusions: Three-dimensional visualization of the organ in relation to the pathology is of great help to appreciate the complex anatomy of the liver. Using virtual reality concepts (navigation, interaction, and immersion), surgical planning, training, and teaching for this complex surgical procedure may be possible. The ability to practice a given gesture repeatedly will revolutionize surgical training, and the combination of surgical planning and simulation will improve the efficiency of intervention, leading to optimal care delivery.

  304.   Tang, CK, and Medioni, G, "Inference of integrated surface, curve, and junction descriptions from sparse 3D data," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 20, pp. 1206-1223, 1998.

Abstract:   We are interested in descriptions of 3D data sets, as obtained from stereo or a 3D digitizer. We therefore consider as input a sparse set of points, possibly associated with certain orientation information. In this paper, we address the problem of inferring integrated high-level descriptions such as surfaces, 3D curves, and junctions from a sparse point set. While the method proposed by Guy and Medioni provides excellent results for smooth structures, it only detects surface orientation discontinuities but does not localize them. For precise localization, we propose a noniterative cooperative algorithm in which surfaces, curves, and junctions work together: Initial estimates are computed based on the work by Guy and Medioni, where each point in the given sparse and possibly noisy point set is convolved with a predefined vector mask to produce dense saliency maps. These maps serve as input to our novel extremal surface and curve algorithms for initial surface and curve extraction. These initial features are refined and integrated by using excitatory and inhibitory fields. Consequently, intersecting surfaces (resp. curves) are fused precisely at their intersection curves (resp. junctions). Results on several synthetic as well as real data sets are presented.

  305.   Whitaker, RT, "A level-set approach to 3D reconstruction from range data," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 29, pp. 203-231, 1998.

Abstract:   This paper presents a method that uses the level sets of volumes to reconstruct the shapes of 3D objects from range data. The strategy is to formulate 3D reconstruction as a statistical problem: find that surface which is most likely, given the data and some prior knowledge about the application domain. The resulting optimization problem is solved by an incremental process of deformation. We represent a deformable surface as the level set of a discretely sampled scalar function of three dimensions, i.e., a volume. Such level-set models have been shown to mimic conventional deformable surface models by encoding surface movements as changes in the greyscale values of the volume. The result is a voxel-based modeling technology that offers several advantages over conventional parametric models, including flexible topology, no need for reparameterization, concise descriptions of differential structure, and a natural scale space for hierarchical representations. This paper builds on previous work in both 3D reconstruction and level-set modeling. It presents a fundamental result in surface estimation from range data: an analytical characterization of the surface that maximizes the posterior probability. It also presents a novel computational technique for level-set modeling, called the sparse-field algorithm, which combines the advantages of a level-set approach with the computational efficiency and accuracy of a parametric representation. The sparse-field algorithm is more efficient than other approaches, and because it assigns the level set to a specific set of grid points, it positions the level-set model more accurately than the grid itself. These properties, computational efficiency and subcell accuracy, are essential when trying to reconstruct the shapes of 3D objects. Results are shown for the reconstruction of objects from sets of noisy and overlapping range maps.

  306.   Ong, KC, Teh, HC, and Tan, TS, "Resolving occlusion in image sequence made easy," VISUAL COMPUTER, vol. 14, pp. 153-165, 1998.

Abstract:   While the task of seamlessly merging computer-generated 3D objects into an image sequence can be done manually, such effort often lacks consistency across the images. It is also time consuming and prone to error. This paper proposes a framework that solves the occlusion problem without assuming a priori computer models from the input scene. It includes a new algorithm to derive approximate 3D models about the real scene based on recovered geometry information and user-supplied segmentation results. The framework has been implemented, and it works for amateur home videos. The result is an easy-to-use system for applications like the visualization of new architectures in a real environment.

  307.   Ivins, J, and Porrill, J, "Constrained active region models for fast tracking in color image sequences," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 72, pp. 54-71, 1998.

Abstract:   Image segmentation is a fundamental problem in computer vision, for which deformable models offer a partial solution. Most deformable models work by performing some kind of edge detection; complementary region growing methods have not often been used. As a result, deformable models that track regions rather than edges have yet to be developed to a great extent. Active region models are a relatively new type of deformable model driven by a region energy that is a function of the statistical characteristics of an image. This paper describes the use of constrained active region models for frame-rate tracking in color video images on widely available computer hardware. Two of the many color representations now in use are reviewed for this purpose: the intensity-based RGB space and the more intuitive HSV space. Normalized RGB, which is essentially a measure of hue and saturation, emerges as the preferred representation because it is invariant to illumination changes and can be obtained from many frame-grabbers via a simple fast software transformation. Three types of motion are examined for constraining deformable models: rigid models can only translate and rotate to fit image features; conformal models can also change size; affine models exhibit two kinds of shearing in addition to the other components. Two methods are described for producing affine motion, given the desired unconstrained motion calculated by searching for local energy minima lying perpendicular to the model boundary. An existing method, based on iterative gradient descent, computes translating, rotating, scaling, and shearing forces which can be combined to produce affine and other types of motion. A faster, more accurate method uses least-squares minimization to approximate the desired motion; with this method it is also possible to derive specific equations for rigid and conformal motion and to correct for the aperture problem associated with the perpendicular search method. The advantages of the new least-squares method are illustrated by using it to drive an active region model via an affine transformation which tracks the movements of a robot arm at frame rate in color video images, (C) 1998 Academic Press.

  308.   Marescaux, J, Clementi, JM, Russier, Y, Tassetti, V, Mutter, D, Cotin, S, and Ayache, N, "A new concept in digestive surgery: the computer assisted surgical procedure, from virtual reality to telemanipulation.," ANNALES DE GASTROENTEROLOGIE ET D HEPATOLOGIE, vol. 34, pp. 126-131, 1998.

Abstract:   Surgical simulation increasingly appears to be an essential aspect of tomorrow's surgery. The development of a hepatic surgery simulator is an advanced concept calling for a new writing system which will transform the medical world: virtual reality. Virtual reality extends the perception of our five senses by representing more than the real state of things by means of computer science and robotics. It consists of three concepts: immersion, navigation and interaction. Three reasons have led us to develop this simulator: the first is to provide the surgeon with a comprehensive visualisation of the organ. The second reason is to allow for planning and surgical simulation that could be compared with the detailed flight plan for a commercial jet pilot. The third lies in the fact that virtual reality is an integrated part of the concept of computer assisted surgical procedure. The project consists of a sophisticated simulator which has to include five requirements: visual fidelity, interactivity, physical properties, physiological properties, and sensory input and output. In this report we describe how to obtain a realistic 3D model of the liver from two-dimensional (2D) medical images for anatomical and surgical training. The introduction of a tumor and the consequent planning and virtual resection are also described, as are force feedback and real-time interaction.

  309.   Bonciu, C, Leger, C, and Thiel, J, "A Fourier-Shannon approach to closed contours modelling," BIOIMAGING, vol. 6, pp. 111-125, 1998.

Abstract:   This paper describes a modelling method for continuous closed contours. The initial input data set consists of two-dimensional (2-D) points, which may be represented as a discrete function in a polar coordinate system. The method uses the Shannon interpolation between these data points to obtain the global continuous contour model. A minimal description of the contour is obtained using the link between the Shannon interpolation kernel and the Fourier series of polar development (FSPD) for periodic functions. The Shannon interpolation kernel allows the direct interpretation of the contour smoothness in terms of both samples and Fourier frequency domains. In order to deal with deformation point sources, often encountered in active modelling techniques, a method of local deformation is proposed. Each local deformation is performed in an angular sector centred on the deformation point source. All the neighbouring characteristic samples are displaced in order to minimize the oscillations of the newly created model outside the deformation sector. This deformation technique preserves the frequency characteristics of the contour, regardless of the number and the intensity of deformation sources. In this way, the technique induces a frequency modelling constraint, which may be subsequently used in an active detection and modelling environment. Experiments on synthetic and real data prove the efficiency of the proposed technique. The method is currently used to model contours of the left ventricle of the heart obtained from ultrasound apical images. This work is part of a larger project, the aim of which is to analyse the space and time deformations of the left ventricle. The 2-D Fourier-Shannon model is used as a basis for more complex three-dimensional and four-dimensional Fourier models, able to recover automatically the movement and deformation of the left ventricle of the heart during a cardiac cycle.

  310.   Wong, YY, Yuen, PC, and Tong, CS, "Segmented snake for contour detection," PATTERN RECOGNITION, vol. 31, pp. 1669-1679, 1998.

Abstract:   The active contour model, called snake, has been proved to be an effective method in contour detection. This method has been successfully employed in the areas of object recognition, computer vision, computer graphics and biomedical images. However, this model suffers from a great limitation, that is, it is difficult to locate concave parts of an object. In view of such a limitation, a segmented snake is designed and proposed in this paper. The basic idea of the proposed method is to convert the global optimization of a closed snake curve into local optimization on a number of open snake curves. The segmented snake algorithm consists of two steps. In the first step, the original snake model is adopted to locate the initial contour near the object boundary. In the second step, a recursive split-and-merge procedure is developed to determine the final object contour. The proposed method is able to locate all convex, concave and high curvature parts of an object accurately. A number of images are selected to evaluate the capability of the proposed algorithm and the results are encouraging. (C) 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  311.   Cunningham, GS, Hanson, KM, and Battle, XL, "Three-dimensional reconstructions from low-count SPECT data using deformable models," OPTICS EXPRESS, vol. 2, pp. 227-236, 1998.

Abstract:   We demonstrate the reconstruction of a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest Total Artificial Heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution is a time-varying closed surface parameterized by 162 vertices that are connected to make 960 triangles, with uniform intensity of radiotracer inside. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework. MAP estimates for the vertices, interior intensity and background count level are produced for diastolic and systolic frames, the only two frames analyzed. The strength of the prior is determined by finding the corner of the L-curve. The results indicate that qualitatively pleasing results are possible even with as few as 1780 counts per time frame (total after summing over all 24 detectors). Quantitative estimates of ejection fraction and wall motion should be possible if certain restrictions in the model are removed, e.g., the spatial homogeneity of the radiotracer intensity within the volume defined by the triangulated surface, and smoothness of the surface at the tube/ventricle join. (C) 1998 Optical Society of America.

  312.   Ghanei, A, Soltanian-Zadeh, H, and Windham, JP, "A 3D deformable surface model for segmentation of objects from volumetric data in medical images," COMPUTERS IN BIOLOGY AND MEDICINE, vol. 28, pp. 239-253, 1998.

Abstract:   In this paper we present a new 3D discrete dynamic surface model. The model consists of vertices and edges, which connect adjacent vertices. Basic geometry of the model surface is generated by triangle patches. The model deforms by internal and external forces. Internal forces are obtained from local geometry of the model and are related to the local curvature of the surface. External forces, on the other hand, are based on the image data and are calculated from desired image features. We also present a method for generating an initial volume for the model from a stack of initial contours, drawn by the user on cross sections of the volumetric data. (C) 1998 Elsevier Science Ltd. All rights reserved.
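
To make the vertex/edge mechanics described above concrete, here is a minimal NumPy sketch of one way a discrete surface of this kind can be deformed. The umbrella-operator internal force and the names deform_mesh, alpha and beta are illustrative assumptions, not the cited model.

    import numpy as np

    def deform_mesh(vertices, neighbors, external_force, alpha=0.3, beta=0.7, iters=50):
        """Iteratively move mesh vertices under internal and external forces.

        vertices       : (n, 3) array of vertex positions.
        neighbors      : list where neighbors[i] holds the indices adjacent to vertex i.
        external_force : callable mapping an (n, 3) position array to an (n, 3)
                         array of image-derived forces.
        The internal force is the 'umbrella' Laplacian, pulling each vertex toward
        the centroid of its neighbours (a crude proxy for local curvature).
        """
        v = vertices.astype(float).copy()
        for _ in range(iters):
            centroids = np.array([v[nb].mean(axis=0) for nb in neighbors])
            internal = centroids - v
            v += alpha * internal + beta * external_force(v)
        return v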

  313.   Ghanei, A, Soltanian-Zadeh, H, and Windham, JP, "Segmentation of the hippocampus from brain MRI using deformable contours," COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 22, pp. 203-216, 1998.

Abstract:   The application of a discrete dynamic contour model for segmentation of the hippocampus from brain MRI has been investigated. Solutions to several common problems of dynamic contours in this case and similar cases have been developed. A new method for extracting the discontinuous boundary of a structure with multiple edges near the structure has been developed. The method is based on detecting and following edges by external forces. The reliability of the final contour and the model stability have been improved by using a continuous mapping of the external energy and limiting movements of the contour. The problem of optimizing the internal force weight has been overcome by making it dependent on the amount of the external force. Finally, the results of applying the proposed algorithm, which implements the above modifications, to multiple applications have been evaluated. (C) 1998 Elsevier Science Ltd. All rights reserved.

  314.   Mortensen, EN, and Barrett, WA, "Interactive segmentation with intelligent scissors," GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 60, pp. 349-384, 1998.

Abstract:   We present a new, interactive tool called Intelligent Scissors which we use for image segmentation. Fully automated segmentation is an unsolved problem, while manual tracing is inaccurate and laboriously unacceptable. However, Intelligent Scissors allow objects within digital images to be extracted quickly and accurately using simple gesture motions with a mouse. When the gestured mouse position comes in proximity to an object edge, a live-wire boundary "snaps" to, and wraps around the object of interest. Live-wire boundary detection formulates boundary detection as an optimal path search in a weighted graph. Optimal graph searching provides mathematically piece-wise optimal boundaries while greatly reducing sensitivity to local noise or other intervening structures. Robustness is further enhanced with on-the-fly training which causes the boundary to adhere to the specific type of edge currently being followed, rather than simply the strongest edge in the neighborhood. Boundary cooling automatically freezes unchanging segments and automates input of additional seed points. Cooling also allows the user to be much more free with the gesture path, thereby increasing the efficiency and finesse with which boundaries can be extracted. (C) 1998 Academic Press.
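
The live-wire idea, boundary finding as an optimal path search in a weighted pixel graph, can be sketched with a plain Dijkstra search. This is an illustrative fragment under assumed names (live_wire, cost), not the cited implementation, which additionally includes on-the-fly training and boundary cooling.

    import heapq
    import numpy as np

    def live_wire(cost, seed, target):
        """Dijkstra shortest path between two pixels on an 8-connected grid.

        cost : 2-D array of per-pixel costs, low on strong edges, so the optimal
               path 'snaps' to object boundaries.
        seed, target : (row, col) tuples.  Returns the path as a list of pixels.
        """
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[seed] = 0.0
        heap = [(0.0, seed)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == target:
                break
            if d > dist[r, c]:
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                        nd = d + cost[nr, nc] * np.hypot(dr, dc)
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
        path, node = [], target
        while node != seed:
            path.append(node)
            node = prev[node]
        path.append(seed)
        return path[::-1]

A typical cost image would be something like 1 / (1 + gradient magnitude), so that strong edges are cheap to traverse.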

  315.   Siddiqi, K, Lauziere, YB, Tannenbaum, A, and Zucker, SW, "Area and length minimizing flows for shape segmentation," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 7, pp. 433-443, 1998.

Abstract:   A number of active contour models have been proposed that unify the curve evolution framework with classical energy minimization techniques for segmentation, such as snakes. The essential idea is to evolve a curve (in two dimensions) or a surface (in three dimensions) under constraints from image forces so that it clings to features of interest in an intensity image. Recently, the evolution equation has been derived from first principles as the gradient flow that minimizes a modified length functional, tailored to features such as edges. However, because the flow may be slow to converge in practice, a constant (hyperbolic) term is added to keep the curve/surface moving in the desired direction. In this paper, we derive a modification of this term based on the gradient flow derived from a weighted area functional, with an image-dependent weighting factor. When combined with the earlier modified length gradient flow, we obtain a partial differential equation (PDE) that offers a number of advantages, as illustrated by several examples of shape segmentation on medical images. In many cases the weighted area flow may be used on its own, with significant computational savings.
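
A rough level-set sketch of the kind of combined length/area flow discussed above, assuming an explicit time step and an edge-stopping function g. This is a generic geodesic-style update for illustration only, not the exact PDE derived in the cited paper.

    import numpy as np

    def level_set_step(phi, g, dt=0.2, area_weight=1.0):
        """One explicit update of a geodesic-style level-set flow.

        phi : signed level-set function; the contour is the zero level set.
        g   : edge-stopping function, small on strong edges (same shape as phi).
        Combines a curvature (length-minimizing) term with a constant area/balloon
        term; the advection term of the full geodesic model is omitted for brevity.
        """
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx**2 + gy**2) + 1e-8
        # curvature = divergence of the unit normal field
        curvature = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
        return phi + dt * g * norm * (curvature + area_weight)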

  316.   Vilarino, DL, Brea, VM, Cabello, D, and Pardo, JM, "Discrete-time CNN for image segmentation by active contours," PATTERN RECOGNITION LETTERS, vol. 19, pp. 721-734, 1998.

Abstract:   In this work we present a new image segmentation strategy which operates by means of active contours implemented on a multilayer cellular neural network. The approach consists of an expanding and thinning process, guided by external information from a contour which evolves until it reaches the final desired position in the image processed. (C) 1998 Elsevier Science B.V. All rights reserved.

  317.   Klemencic, A, Kovacic, S, and Pernus, F, "Automated segmentation of muscle fiber images using active contour models," CYTOMETRY, vol. 32, pp. 317-326, 1998.

Abstract:   The cross-sectional area of different fiber types is an important anatomic feature in studying the structure and function of healthy and diseased human skeletal muscles. However, such studies are hampered by the thousands of fibers involved when manual segmentation has to be used. We have developed a semiautomatic segmentation method that uses computational geometry and recent computer vision techniques to significantly reduce the time required to accurately segment the fibers in a sample. The segmentation is achieved by simply pointing to the approximate centroid of each fiber. The set of centroids is then used to automatically construct the Voronoi polygons, which correspond to individual fibers. Each Voronoi polygon represents the initial shape of one active contour model, called a snake. In the energy minimization process, which is executed in several stages, different external forces and problem-specific knowledge are used to guide the snakes to converge to fiber boundaries. Our results indicate that this approach for segmenting muscle fiber images is fast, accurate, and reproducible compared with manual segmentation performed by experts. Cytometry 32:317-326, 1998. (C) 1998 Wiley-Liss, Inc.
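
The initialization step described above, one Voronoi cell per clicked fibre centroid, can be sketched as a discrete nearest-centroid tessellation of the image grid. The function name voronoi_initial_regions is an assumption and the fragment is illustrative rather than the authors' code.

    import numpy as np

    def voronoi_initial_regions(shape, centroids):
        """Discrete Voronoi tessellation of an image grid: every pixel is assigned
        to its nearest user-supplied centroid.  Each cell boundary can then serve
        as the initial contour of one snake."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        c = np.asarray(centroids, dtype=float)            # (n, 2) as (row, col)
        d2 = ((pts[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1).reshape(shape)           # cell index per pixel

    # e.g. labels = voronoi_initial_regions(img.shape, clicked_points)

For large images a KD-tree query (scipy.spatial.cKDTree) would replace the brute-force distance matrix.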

  318.   Basri, R, Costa, L, Geiger, D, and Jacobs, D, "Determining the similarity of deformable shapes," VISION RESEARCH, vol. 38, pp. 2365-2385, 1998.

Abstract:   Determining the similarity of two shapes is a significant task in both machine and human vision systems that must recognize or classify objects. The exact properties of human shape similarity judgements are not well understood yet, and this task is particularly difficult in domains where the shapes are not related by rigid transformations. In this paper we identify a number of possibly desirable properties of a shape similarity method, and determine the extent to which these properties can be captured by approaches that compare local properties of the contours of the shapes, through elastic matching. Special attention is devoted to objects that possess articulations, i.e. articulated parts. Elastic matching evaluates the similarity of two shapes as the sum of local deformations needed to change one shape into another. We show that similarities of part structure can be captured by such an approach, without the explicit computation of part structure. This may be of importance, since although parts appear to play a significant role in visual recognition, it is difficult to stably determine part structure. We also show novel results about how one can evaluate smooth and polyhedral shapes with the same method. Finally, we describe shape similarity effects that cannot be handled by current approaches. (C) 1998 Elsevier Science Ltd. All rights reserved.

  319.   Tao, C, Li, RX, and Chapman, MA, "Automatic reconstruction of road centerlines from mobile mapping image sequences," PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, vol. 64, pp. 709-716, 1998.

Abstract:   An automatic approach to road centerline reconstruction from stereo image sequences acquired by a mobile mapping system is introduced. The road centerline reconstruction is treated as an inverse problem and solved by global optimization techniques. The centerlines are described by a physical curve model, which is composed of an abstract material and deforms according to external and internal forces applied. The external forces, generated from the centerline information extracted from the image sequences, control the local characteristics of the model. The internal forces, arising from a priori knowledge of the road shape, contribute to the global shape of the model. Unique constraints that exist only in mobile mapping image sequences are utilized. The developed system has been used for processing a large number of mobile mapping image sequences. Road centerlines of the images under different conditions have been reconstructed successfully. The research results also make a contribution to the general field of structure from motion and stereo.

  320.   Rougon, N, and Preteux, F, "Directional adaptive deformable models for segmentation," JOURNAL OF ELECTRONIC IMAGING, vol. 7, pp. 231-256, 1998.

Abstract:   We address the problem of adapting the functions controlling the material properties of 2-D snakes, and show how introducing oriented smoothness constraints results in a novel class of active contour models for segmentation, which extends standard isotropic inhomogeneous membrane/thin-plate stabilizers. These constraints, expressed as adaptive L-2 matrix norms, are defined by two second-order symmetric and positive definite tensors that are invariant with respect to rigid motions in the image plane. These tensors, equivalent to directional adaptive stretching and bending densities, are quadratic with respect to first- and second-order derivatives of the image luminance, respectively. A representation theorem specifying their canonical form is established and a geometrical interpretation of their effects is developed. Within this framework, it is shown that by achieving a directional control of regularization such nonisotropic constraints consistently relate the differential properties (metric and curvature) of the deformable model with those of the underlying luminance surface, yielding a satisfying preservation of image contour characteristics. In particular, this model adapts to nonstationary curvature variations along image contours to be segmented, thus providing a consistent solution to curvature underestimation problems encountered near high curvature contour points by classical snakes evolving with constant material parameters. Optimization of the model within continuous and discrete frameworks is discussed in detail. Finally, accuracy and robustness of the model are established on synthetic images. Its efficacy is further demonstrated on 2-D MRI sequences for which comparisons with segmentations obtained using classical snakes are provided. (C) 1998 SPIE and IS&T. [S1017-9909(98)02101-1].

  321.   Gold, S, Rangarajan, A, Lu, CP, Pappu, S, and Mjolsness, E, "New algorithms for 2D and 3D point matching: Pose estimation and correspondence," PATTERN RECOGNITION, vol. 31, pp. 1019-1031, 1998.

Abstract:   A fundamental open problem in computer vision-determining pose and correspondence between two sets of points in space-is solved with a novel, fast, robust and easily implementable algorithm. The technique works on noisy 2D or 3D point sets that may be of unequal sizes and may differ by non-rigid transformations. Using a combination of optimization techniques such as deterministic annealing and the softassign, which have recently emerged out of the recurrent neural network/statistical physics framework, analog objective functions describing the problems are minimized. Over thirty thousand experiments, on randomly generated point sets with varying amounts of noise and missing and spurious points, and on hand-written character sets, demonstrate the robustness of the algorithm. (C) 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
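
A compressed sketch of the softassign / deterministic-annealing idea for 2-D rigid point matching, alternating a Sinkhorn-normalised correspondence matrix with a closed-form pose update. Outlier slack variables and the affine/non-rigid cases are omitted, and all names and parameter values are assumptions rather than the cited algorithm.

    import numpy as np

    def soft_point_match(X, Y, beta0=0.5, beta_max=10.0, rate=1.5, sinkhorn_iters=20):
        """Estimate a rotation R and translation t mapping point set X (n, 2)
        onto Y (m, 2) by alternating soft correspondences with a weighted
        Procrustes update, while annealing the inverse temperature beta."""
        X = np.asarray(X, float)
        Y = np.asarray(Y, float)
        R, t = np.eye(2), np.zeros(2)
        beta = beta0
        while beta < beta_max:
            XT = X @ R.T + t                               # X under the current pose
            d2 = ((XT[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
            M = np.exp(-beta * d2)
            for _ in range(sinkhorn_iters):                # alternating row/column scaling
                M /= M.sum(axis=1, keepdims=True) + 1e-12
                M /= M.sum(axis=0, keepdims=True) + 1e-12
            w = M.sum(axis=1, keepdims=True)
            Yc = (M @ Y) / (w + 1e-12)                     # soft "virtual" match per X point
            mx = (w * X).sum(axis=0) / w.sum()             # weighted centroids
            my = (w * Yc).sum(axis=0) / w.sum()
            H = ((X - mx) * w).T @ (Yc - my)               # weighted cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                             # proper rotation, no reflection
            t = my - R @ mx
            beta *= rate                                   # anneal
        return R, t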

  322.   Li, ZP, "A neural model of contour integration in the primary visual cortex," NEURAL COMPUTATION, vol. 10, pp. 903-940, 1998.

Abstract:   Experimental observations suggest that contour integration may take place in V1. However, there has yet to be a model of contour integration that uses only known V1 elements, operations, and connection patterns. This article introduces such a model, using orientation selective cells, local cortical circuits, and horizontal intracortical connections. The model is composed of recurrently connected excitatory neurons and inhibitory interneurons, receiving visual input via oriented receptive fields resembling those found in primary visual cortex. Intracortical interactions modify initial activity patterns from input, selectively amplifying the activities of edges that form smooth contours in the image. The neural activities produced by such interactions are oscillatory and edge segments within a contour oscillate in synchrony. It is shown analytically and empirically that the extent of contour enhancement and neural synchrony increases with the smoothness, length, and closure of contours, as observed in experiments on some of these phenomena. In addition, the model incorporates a feedback mechanism that allows higher visual centers selectively to enhance or suppress sensitivities to given contours, effectively segmenting one from another. The model makes the testable prediction that the horizontal cortical connections are more likely to target excitatory (or inhibitory) cells when the two linked cells have their preferred orientation aligned with (or orthogonal to) their relative receptive field center displacements.

  323.   Glasbey, CA, and Mardia, KV, "A review of image-warping methods," JOURNAL OF APPLIED STATISTICS, vol. 25, pp. 155-171, 1998.

Abstract:   Image warping is a transformation which maps all positions in one image plane to positions in a second plane. It arises in many image analysis problems, whether in order to remove optical distortions introduced by a camera or a particular viewing perspective, to register an image with a map or template, or to align two or more images. The choice of warp is a compromise between a smooth distortion and one which achieves a good match. Smoothness can be ensured by assuming a parametric form for the warp or by constraining it using differential equations. Matching can be specified by points to be brought into alignment, by local measures of correlation between images, or by the coincidence of edges. Parametric and non-parametric approaches to warping, and matching criteria, are reviewed.
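
As a concrete instance of the parametric warps surveyed above, the sketch below fits the simplest one, an affine map, to landmark correspondences by least squares. It is illustrative only and is not drawn from the review; the function name and example points are made up.

    import numpy as np

    def fit_affine_warp(src_pts, dst_pts):
        """Least-squares affine warp taking src_pts (N, 2) to dst_pts (N, 2).

        Returns a (3, 2) matrix M such that [x, y, 1] @ M approximates the
        destination coordinates; three non-collinear landmarks determine it
        exactly.
        """
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        A = np.hstack([src, np.ones((len(src), 1))])     # (N, 3) design matrix
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return M

    # Example: three landmark pairs pin down the warp exactly.
    M = fit_affine_warp([(0, 0), (1, 0), (0, 1)], [(2, 1), (3, 1), (2, 3)])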

  324.   Gao, LM, Heath, DG, and Fishman, EK, "Abdominal image segmentation using three-dimensional deformable models," INVESTIGATIVE RADIOLOGY, vol. 33, pp. 348-355, 1998.

Abstract:   RATIONALE AND OBJECTIVES. The authors develop a three-dimensional (3-D) deformable surface model-based segmentation scheme for abdominal computed tomography (CT) image segmentation. METHODS. A parameterized 3-D surface model was developed to represent the human abdominal organs. An energy function defined on the direction of the image gradient and the surface normal of the deformable model was introduced to measure the match between the model and image data. A conjugate gradient algorithm was adapted to the minimization of the energy function. RESULTS. Test results for synthetic images showed that the incorporation of surface directional information improved the results over those using only the magnitude of the image gradient. The algorithm was tested on 21 CT datasets. Of the 21 cases tested, 11 were evaluated visually by a radiologist and the results were judged to be without noticeable error. The other 10 were evaluated over a distance function. The average distance was less than 1 voxel. CONCLUSIONS. The deformable model-based segmentation scheme produces robust and acceptable outputs on abdominal CT images.

  325.   Xu, CY, and Prince, JL, "Snakes, shapes, and gradient vector flow," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 7, pp. 359-369, 1998.

Abstract:   Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to boundary concavities, however, have limited their utility. This paper presents a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. It differs fundamentally from traditional snake external forces in that it cannot be written as the negative gradient of a potential function, and the corresponding snake is formulated directly from a force balance condition rather than a variational formulation. Using several two-dimensional (2-D) examples and one three-dimensional (3-D) example, we show that GVF has a large capture range and is able to move snakes into boundary concavities.
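
Because this is the paper behind the present page, a minimal NumPy sketch of the GVF diffusion may be helpful. It iterates the explicit update u <- u + dt * (mu * Laplacian(u) - (u - f_x) * (f_x^2 + f_y^2)), and likewise for v, on the gradient of an edge map f. The values of mu, dt, and iterations and the periodic boundary handling are illustrative simplifications, not the authors' implementation.

    import numpy as np

    def gradient_vector_flow(edge_map, mu=0.2, dt=0.1, iterations=200):
        """Diffuse the gradient of an edge map into a GVF field (u, v).

        Near edges the field stays close to grad(f); elsewhere the Laplacian
        term extends it smoothly, which is what gives the large capture range
        described in the abstract.
        """
        f = np.asarray(edge_map, dtype=float)
        fy, fx = np.gradient(f)                 # row direction = y, column = x
        mag2 = fx**2 + fy**2
        u, v = fx.copy(), fy.copy()

        def laplacian(a):                       # 5-point stencil, periodic borders
            return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                    + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

        for _ in range(iterations):
            u += dt * (mu * laplacian(u) - (u - fx) * mag2)
            v += dt * (mu * laplacian(v) - (v - fy) * mag2)
        return u, v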

  326.   Hager, GD, and Toyama, K, "X vision: A portable substrate for real-time vision applications," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 69, pp. 23-37, 1998.

Abstract:   In the past several years, the speed of standard processors has reached the point where interesting problems requiring visual tracking can be carried out on standard workstations. However, relatively little attention has been devoted to developing visual tracking technology in its own right. In this article, we describe X Vision, a modular, portable framework for visual tracking. X Vision is designed to be a programming environment for real-time vision which provides high performance on standard workstations outfitted with a simple digitizer. X Vision consists of a small set of image-level tracking primitives, and a framework for combining tracking primitives to form complex tracking systems. Efficiency and robustness are achieved by propagating geometric and temporal constraints to the feature detection level, where image warping and specialized image processing are combined to perform feature detection quickly and robustly. Over the past several years, we have used X Vision to construct several vision-based systems. We present some of these applications as an illustration of how useful, robust tracking systems can be constructed by simple combinations of a few basic primitives combined with the appropriate task-specific constraints. (C) 1998 Academic Press.

  327.   Piccioni, M, Scarlatti, S, and Trouve, A, "A variational problem arising from speech recognition," SIAM JOURNAL ON APPLIED MATHEMATICS, vol. 58, pp. 753-771, 1998.

Abstract:   By following the general approach of deformable templates, the problem of recognizing a single word, independently of the speaker, is shown to lead to the computation of the minimum value of some particular functional. More precisely, this allows us to recover, for each possible word in a prespecified set, the best matching with the recorded signal; by selecting the minimum value, the recognition problem can be solved. In this paper we are concerned with the detailed study of the variational problem associated with this sort of functional, namely, the existence of a minimum point and the features of such a minimum. Finally, we discuss the convergence of a discretized finite-dimensional approximation, suggested by the engineering literature on this subject.

  328.   Park, JS, and Han, JH, "Contour matching: a curvature-based approach," IMAGE AND VISION COMPUTING, vol. 16, pp. 181-189, 1998.

Abstract:   The lack of information about tangential velocity makes velocity estimation erroneous in contour matching. Classical methods use the normal velocity, together with some smoothness constraints, since the tangential velocity cannot be recovered. This paper presents a contour matching method that computes displacements with a criterion of minimum curvature differences. The first derivative of tangential velocity is available from the image intensities and is related to the contour curvature. We compute the velocities using the curvature as well as the normal component. Consequently, the estimation error due to the tangential component is reduced substantially. A contour having occluding parts leads to mismatching. Our method determines occluding parts before the contour matching by analyzing the change of curvature distribution. Experimental results showed that the proposed method computes accurate velocity vectors for various moving contours. (C) 1998 Elsevier Science B.V.

  329.   Chandran, S, and Potty, AK, "Energy minimization of contours using boundary conditions," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 20, pp. 546-549, 1998.

Abstract:   Reconstruction of objects from a scene may be viewed as a data fitting problem using energy minimizing splines as the basic shape. The process of obtaining the minimum to construct the "best" shape can sometimes be important. Some of the potential problems in the Euler-Lagrangian variational solution proposed in the original formulation [1] were brought to light in [2], and a dynamic programming (DP) method was also suggested. In this paper we further develop the DP solution. We show that in certain cases the discrete form of the solution in [2], adopted subsequently in [3], [4], [5], [6], may also produce local minima, and develop a strategy to avoid this. We provide a stronger form of the conditions necessary to derive a solution when the energy depends on the second derivative, as in the case of "active contours".

  330.   Sakalli, M, and Yan, H, "Feature-based compression of human face images," OPTICAL ENGINEERING, vol. 37, pp. 1520-1529, 1998.

Abstract:   A method is developed for feature-based coding of human face images. Deformable templates, wavelet decomposition, and residual vector quantization (RVQ) form three consecutive stages of the proposed method, which aims for recognition-based very low bit rate coding. Deformable templates are employed in localization of facial features and biorthogonal spline filters are used for the decomposition of segmented and normalized face images. Wavelet coefficients are zonal truncated before being vector quantized to generate multiresolution codebooks. Classified multiresolution codebooks are also generated for residual eye and mouth images to improve subjective quality of salient face features. (C) 1998 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(98)02105-9].

  331.   Lefebvre, F, Berger, G, and Laugier, P, "Automatic detection of the boundary of the calcaneus from ultrasound parametric images using an active contour model; Clinical assessment," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 17, pp. 45-52, 1998.

Abstract:   This paper presents a computerized method for automated detection of the boundary of the os calcis on in vivo ultrasound parametric images, using an active dynamic contour model. The initial contour, defined without user interaction, is an iso-contour extracted from the textural feature space. The contour is deformed through the action of internal and external forces, until stability is reached. The external forces, which characterize image features, are a combination of gray-level information and second-order textural features arising from local cooccurrence matrices. The broadband ultrasound attenuation (BUA) value is then averaged within the contour obtained. The method was applied to 381 clinical images. The contour was correctly detected in the great majority of the cases. For the short-term reproducibility study, the mean coefficient of variation was equal to 1.81% for BUA values and 4.95% for areas in the detected region. Women with osteoporosis had a lower BUA than age-matched controls (p = 0.0005). In healthy women, the age-related decline was -0.45 dB/MHz/yr. In the group of healthy post-menopausal women, years since menopause, weight and age were significant predictors of BUA. These results are comparable to those obtained when averaging BUA values in a small region of interest.

  332.   Atkins, MS, and Mackiewich, BT, "Fully automatic segmentation of the brain in MRI," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 17, pp. 98-107, 1998.

Abstract:   A robust fully automatic method for segmenting the brain from head magnetic resonance (MR) images has been developed, which works even in the presence of radio frequency (RF) inhomogeneities. It has been successful in segmenting the brain in every slice from head images acquired from several different MRI scanners, using different-resolution images and different echo sequences. The method uses an integrated approach which employs image processing techniques based on anisotropic filters and "snakes" contouring techniques, and a priori knowledge, which is used to remove the eyes, which are tricky to remove based on image intensity alone. It is a multistage process, involving first removal of the background noise leaving a head mask, then finding a rough outline of the brain, then refinement of the rough brain outline to a final mask. The paper describes the main features of the method, and gives results for some brain studies.

  333.   Tong, AWK, Qureshi, R, Li, X, and Sather, AP, "A system for ultrasound image segmentation for loin eye measurements in swine," CANADIAN AGRICULTURAL ENGINEERING, vol. 40, pp. 47-53, 1998.

Abstract:   An image segmentation system was developed for detecting the muscle longissimus thoracis (LT) in ultrasonic images of live pigs. The images have a low contrast, a high level of noise, and a high degree of variance in terms of texture and shape. The segmentation algorithm starts with a region growing process, which provides a rough approximation of the LT. Morphological operations and curve fitting eliminate unwanted noise. Finally, an active contour process refines the shape of the resulting region. This system takes several segmentation techniques and builds a flow of information between them but does not rely on specific a priori information of the texture or the contrast. This is a first step towards automating the loin detection in ultrasonic images of live pigs. Initial experiments provided encouraging results. It is a modular system so that different region growing and refinement algorithms can be easily substituted into the current design. This makes for a general system that can be adapted to other segmentation tasks involving low contrast images. A series of three ultrasound images was made along the dorsal surface of 30 live pigs. Using the images to estimate loin volume, 64 and 70% of the variation in commercial loin weight and lean yield of loin were predicted. Augmenting the model with backfat measurements, the R-2 increased to 79 and 89%, respectively. These values compare to 76 and 79%, respectively, from measurements made on the carcass with the Hennessey Grading Probe.

  334.   Thalmann, NM, Kalra, P, and Escher, M, "Face to virtual face," PROCEEDINGS OF THE IEEE, vol. 86, pp. 870-883, 1998.

Abstract:   The first virtual humans appeared in the early 1980's in such films as Dreamflight (1982) and The Juggler (1982). Pioneering work in the ensuing period focused on realistic appearance in the simulation of virtual humans. In the 1990's, the emphasis has shifted to real-time animation and interaction in virtual worlds. Virtual humans have begun to inhabit virtual worlds and so have we. To prepare our place in the virtual world, we first develop techniques for the automatic representation of a human face capable of being animated in real time using both video and audio input. The objective is for one's representative to look, talk, and behave like oneself in the virtual world. Furthermore, the virtual inhabitants of this world should be able to see our avatars and to react to what we say and to the emotions we convey. This paper sketches an overview of the problems related to the analysis and synthesis of face-to-virtual-face communication in a virtual world. We describe different components of our system for real-time interaction and communication between a cloned face representing a real person and an autonomous virtual face. It provides an insight into the various problems and gives particular solutions adopted in reconstructing a virtual clone capable of reproducing the shape and movements of the real person's face. It includes the analysis of the facial expression and speech of the cloned face, which can be used to elicit a response from the autonomous virtual human with both verbal and nonverbal facial movements synchronized with the audio voice. We believe that such a system can be exploited in many applications such as natural and intelligent human-machine interfaces, virtual collaboration work, virtual learning and teaching, and so on.

  335.   Memin, E, and Perez, P, "Dense estimation and object-based segmentation of the optical flow with robust techniques," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 7, pp. 703-719, 1998.

Abstract:   In this paper, we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curve equipped with different kinds of prior can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning.

  336.   Steiner, A, Kimmel, R, and Bruckstein, AM, "Planar shape enhancement and exaggeration," GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 60, pp. 112-124, 1998.

Abstract:   A local smoothing operator applied in the reverse direction is used to obtain planar shape enhancement and exaggeration. Inversion of a smoothing operator is an inherently unstable operation. Therefore, a stable numerical scheme simulating the inverse smoothing effect is introduced. Enhancement is obtained for short time spans of evolution. Carrying the evolution further yields shape exaggeration or caricaturization effect. Introducing attraction forces between the evolving shape and the initial one yields an enhancement process that converges to a steady state. These forces depend on the distance of the evolving curve from the original one and on local properties. Results of applying the unrestrained and restrained evolution on planar shapes, based on a stabilized inverse geometric heat equation, are presented showing enhancement and caricaturization effects. (C) 1998 Academic Press.

  337.   Tsap, LV, Goldgof, DB, Sarkar, S, and Huang, WC, "Efficient nonlinear finite element modeling of nonrigid objects via optimization of mesh models," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 69, pp. 330-350, 1998.

Abstract:   In this paper we propose a new general framework for the application of the nonlinear finite element method (FEM) to nonrigid motion analysis. We construct the models by integrating image data and prior knowledge, using well-established techniques from computer vision, structural mechanics, and computer-aided design (CAD). These techniques guide the process of optimization of mesh models. Linear FEM proved to be a successful physically based modeling tool in solving limited types of nonrigid motion problems. However, linear FEM cannot handle nonlinear materials or large deformations. Application of nonlinear FEM to nonrigid motion analysis has been restricted by difficulties with high computational complexity and noise sensitivity. We tackle the problems associated with nonlinear FEM by changing the parametric description of the object to allow easy automatic control of the model, using physically motivated analysis of the possible displacements to address the worst effects of the noise, applying mesh control strategies, and utilizing multiscale methods. The combination of these methods represents a new systematic approach to a class of nonrigid motion applications for which sufficiently precise and flexible FEM models can be built. The results from the skin elasticity experiments demonstrate the success of the proposed method. The model allows us to objectively detect the differences in elasticity between normal and abnormal skin. Our work demonstrates the possibility of accurate computation of point correspondences and force recovery from range image sequences containing nonrigid objects and large motion. (C) 1998 Academic Press.

  338.   Wong, YY, Yuen, PC, and Tong, CS, "Contour length terminating criterion for snake model," PATTERN RECOGNITION, vol. 31, pp. 597-606, 1998.

Abstract:   The snake model, which involves a recursive scheme for contour searching, is widely employed in object contour detection. In a recursive algorithm, a terminating criterion is essential to terminate the process. However, existing terminating criteria for snakes cannot achieve good results for contour detection. Two commonly employed terminating criteria for snakes are addressed and their limitations on stability and reliability are discussed. A novel terminating criterion, named the contour length criterion (CL-criterion), is developed and reported in this paper. This criterion measures the normalized total length of the contour at each iteration. A number of images are selected to evaluate the effect of applying the proposed terminating criterion on the snake model and the results are encouraging. Compared to existing criteria, the proposed method is more stable and reliable. (C) 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

 
1999

  339.   Hassanien, AE, and Nakajima, M, "Feature-specification algorithm based on snake model for facial image morphing," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E82D, pp. 439-446, 1999.

Abstract:   In this paper a new snake model for image morphing with semiautomated delineation, which depends on Hermite's interpolation theory, is presented. The snake model will be used to specify the correspondence between features in two given images. It allows a user to extract a contour that defines a facial feature such as the lips, mouth, and profile, by only specifying the endpoints of the contour around the feature which we wish to define. We assume that the user can specify the endpoints of a curve around the features that serve as the extremities of a contour. The proposed method automatically computes the image information around these endpoints which provides the boundary conditions. Then the contour is optimized by taking this information into account near its extremities. During the iterative optimization process, the image forces are turned on progressively from the contour extremities toward the center to define the exact position of the feature. The proposed algorithm helps the user to easily define the exact position of a feature. It may also reduce the time required to establish the features of an image.

  340.   Peterfreund, N, "The velocity snake: Deformable contour for tracking in spatio-velocity space," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 73, pp. 346-356, 1999.

Abstract:   We present a new active contour model for boundary tracking and position prediction of nonrigid objects, which results from applying a velocity control to the class of elastodynamical contour models known as snakes. The proposed control term minimizes an energy dissipation function which measures the difference between the contour velocity and the apparent velocity of the image. Treating the image video-sequence as continuous measurements along time, it is shown that the proposed control results in robust tracking. This is in contrast to the original snake model, which is proven to have tracking errors relative to image (object) velocity, thus resulting in high sensitivity to image clutter. The motion estimation further allows for position prediction of nonrigid boundaries. Based on the proposed control approach, we propose a new class of real time tracking contours, varying from models with batch-mode control estimation to models with real time adaptive controllers. (C) 1999 Academic Press.

  341.   Astrom, K, and Kahl, F, "Motion estimation in image sequences using the deformation of apparent contours," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 21, pp. 114-127, 1999.

Abstract:   The problem of determining the camera motion from apparent contours or silhouettes of a priori unknown curved three-dimensional surfaces is considered. In a sequence of images, it is shown how to use the generalized epipolar constraint on apparent contours. One such constraint is obtained for each epipolar tangency point in each image pair. An accurate algorithm for computing the motion is presented based on a maximum likelihood estimate. It is shown how to generate initial estimates on the camera motion using only the tracked contours. It is also shown that in theory the motion can be calculated from the deformation of a single contour. The algorithm has been tested on several real image sequences, for both Euclidean and projective reconstruction. The resulting motion estimate is compared to motion estimates calculated independently using standard feature-based methods. The motion estimate is also used to classify the silhouettes as curves or apparent contours. This is a strong indication that the motion estimate is of good quality. The statistical evaluation shows that the technique gives accurate and stable results.

  342.   Gabrani, M, and Tretiak, OJ, "Surface-based matching using elastic transformations," PATTERN RECOGNITION, vol. 32, pp. 87-97, 1999.

Abstract:   We introduce a methodology for the alignment of multidimensional data, such as brain scans. The proposed approach does not require fiducial-point correspondence; correspondence of surfaces provides sufficient data for registration. We extend multidimensional interpolation theory by using a more general form of energy functional, which leads to basis functions that have different orders at zero and infinity. This allows flexibility in the design of the interpolation solution. The problem is transformed into a linear algebra problem. Two techniques for better conditioning of the system matrix are described. Experimental results on two- and three-dimensional alignment of brain data used in neurochemistry research are shown. (C) 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  343.   Denney, TS, "Estimation and detection of myocardial tags in MR image without user-defined myocardial contours," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 330-344, 1999.

Abstract:   Magnetic resonance (MR) tagging has been shown to be a useful technique for noninvasively measuring the deformation of an in vivo heart. An important step in analyzing tagged images is the identification of tag lines in each image of a cine sequence. Most existing tag identification algorithms require user-defined myocardial contours. Contour identification, however, is time consuming and requires a considerable amount of user intervention. In this paper, a new method for identifying tag lines, which we call the ML/MAP method, is presented that does not require user-defined myocardial contours. The ML/MAP method is composed of three stages. First, a set of candidate tag line centers is estimated across the entire region-of-interest (ROI) with a snake algorithm based on a maximum-likelihood (ML) estimate of the tag center. Next, a maximum a posteriori (MAP) hypothesis test is used to detect the candidate tag centers that are actually part of a tag line. Finally, a pruning algorithm is used to remove any detected tag line centers that do not meet a spatio-temporal continuity criterion. The ML/MAP method is demonstrated on data from ten in vivo human hearts.

  344.   Gu, YH, and Tjahjadi, T, "Efficient planar object tracking and parameter estimation using compactly represented cubic B-Spline curves," IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS, vol. 29, pp. 358-367, 1999.

Abstract:   In this paper, we consider the problem of matching two-dimensional (2-D) planar object curves from a database, and tracking moving object curves through an image sequence. The first part of the paper describes a curve data compression method using B-spline curve approximation. We present a new constrained active B-spline curve model based on the minimum mean square error (MMSE) criterion, and an iterative algorithm for selecting the "best" segment border points for each B-spline curve. The second part of the paper describes a method for simultaneous object tracking and affine parameter estimation using the approximate curves and profiles. We propose a novel B-spline point assignment algorithm which incorporates the significant corners for interpolating corresponding points on the two curves to be compared. A gradient-based algorithm is presented for simultaneously tracking object curves, and estimating the associated translation, rotation and scaling parameters. The performance of each proposed method is evaluated using still images and image sequences containing simple objects.

  345.   Kang, DJ, "A fast and stable snake algorithm for medical images," PATTERN RECOGNITION LETTERS, vol. 20, pp. 507-512, 1999.

Abstract:   A discrete dynamic model for defining and tracking contours in 2-D medical images is presented. An active contour in this objective is optimized by a dynamic programming algorithm, for which a new constraint that has fast and stable properties is introduced. The internal energy of the model depends on local behavior of the contour, while the external energy is derived from image features. The algorithm is able to rapidly detect convex and concave objects even when the image quality is poor. (C) 1999 Elsevier Science B.V. All rights reserved.
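
The dynamic-programming machinery referred to above is easy to prototype. The sketch below performs one exact DP update of an open snake whose energy is the external image energy plus a first-order stretching term; the 3 x 3 move neighbourhood, the single alpha weight, and the open-contour simplification are illustrative choices, not the particular constraint introduced in this paper.

    import numpy as np

    # Candidate moves: each vertex may stay put or shift to one of its 8 neighbours.
    MOVES = np.array([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

    def dp_snake_step(points, external, alpha=1.0):
        """One dynamic-programming update of an open snake.

        points   : (n, 2) integer (row, col) vertices
        external : 2-D array of external (image) energy, lower is better
        Minimises sum_i external(v_i) + alpha * ||v_{i+1} - v_i||^2 exactly
        over the candidate moves of every vertex.
        """
        pts = np.asarray(points, dtype=int)
        n, m = len(pts), len(MOVES)
        cand = pts[:, None, :] + MOVES[None, :, :]                 # (n, m, 2)
        cand[:, :, 0] = np.clip(cand[:, :, 0], 0, external.shape[0] - 1)
        cand[:, :, 1] = np.clip(cand[:, :, 1], 0, external.shape[1] - 1)
        e_ext = external[cand[:, :, 0], cand[:, :, 1]]             # (n, m)

        cost = e_ext[0].copy()                   # best cost per state of vertex 0
        back = np.zeros((n, m), dtype=int)
        for i in range(1, n):
            diff = cand[i - 1][:, None, :] - cand[i][None, :, :]   # (m, m, 2)
            pair = alpha * np.sum(diff**2, axis=-1)                # smoothness cost
            total = cost[:, None] + pair + e_ext[i][None, :]
            back[i] = np.argmin(total, axis=0)
            cost = np.min(total, axis=0)

        j = int(np.argmin(cost))                 # backtrack the optimal moves
        new_pts = np.empty_like(pts)
        for i in range(n - 1, -1, -1):
            new_pts[i] = cand[i, j]
            j = back[i, j]
        return new_pts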

  346.   Zhu, Y, Chen, JX, Xiao, S, and Mac Mahon, EB, "3D knee modeling and biomechanical simulation," COMPUTING IN SCIENCE & ENGINEERING, vol. 1, pp. 82-87, 1999.

  347.   Wang, LS, He, LF, Nakamura, T, Mutoh, A, and Itoh, H, "Calligraphy generation using deformable contours," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E82D, pp. 1066-1073, 1999.

Abstract:   This paper considers the problem of generating various calligraphy from some sample fonts. Our method is based on the deformable contour model g-snake. By representing the outline of each stroke of a character with a g-snake, we cast the generation problem into global and local deformation of g-snake under different control parameters, where the local deformation obeys the energy minimization principle of regularization technique. The base values of the control parameters are learned from given sample fonts. The experimental results on alphabet and Japanese characters Hiragana show such processing as a reasonable method for generating calligraphy.

  348.   Hu, JM, Yan, H, and Sakalli, M, "Locating head and face boundaries for head-shoulder images," PATTERN RECOGNITION, vol. 32, pp. 1317-1333, 1999.

Abstract:   This paper presents a model-based approach to locate head and face boundaries in a head-shoulder image with plain background. Three models are constructed for the images, where the head boundary is divided into left/right sub-boundaries and the face boundary is divided into left/right and top/bottom sub-boundaries. The left/right head boundaries are located from two thresholded images and the final result is the combination of them. After the head boundary is located, the four face sub-boundaries are located from the grey edge image. The algorithm is carried out iteratively by detecting low-level edges and then organizing/verifying them using high-level knowledge of the general shape of a head. The experimental results using a database of 300 images show that this approach is promising. (C) 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  349.   Baumgartner, A, Steger, C, Mayer, H, Eckstein, W, and Ebner, H, "Automatic road extraction based on multi-scale, grouping, and context," PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, vol. 65, pp. 777-785, 1999.

Abstract:   An approach for the automatic extraction of roads from digital aerial imagery is proposed. It makes use of several versions of the same aerial image with different resolutions. Roads are modeled as a network of intersections and links between these intersections, and are found by a grouping process. The context of roads is hierarchically structured into a global and a local level. The automatic segmentation of the aerial image into different global contexts, i.e., rural, forest, and urban area, is used to focus the extraction to the most promising regions. For the actual extraction of the roads, edges are extracted in the original high resolution image (0.2 to 0.5 m) and lines are extracted in an image of reduced resolution. Using both resolution levels and explicit knowledge about roads, hypotheses for road segments are generated. They are grouped iteratively into larger segments. In addition to the grouping algorithms, knowledge about the local context, e.g., shadows cast by a tree onto a road segment, is used to bridge gaps. Finally, intersections are extracted to construct the road network. Examples and results of an evaluation based on manually plotted reference data are given, indicating the potential of the approach.

  350.   Kang, DJ, "Stable snake algorithm for convex tracking of MRI sequences," ELECTRONICS LETTERS, vol. 35, pp. 1070-1071, 1999.

Abstract:   A snake model for convex tracking of contours in 2D medical images is presented. By modelling the local behaviour of the contour, a new constraint that has fast and stable properties is obtained with optimisation by a dynamic programming algorithm.

  351.   Mardia, KV, Walder, AN, Berry, E, Sharples, D, Millner, PA, and Dickson, RA, "Assessing spinal shape," JOURNAL OF APPLIED STATISTICS, vol. 26, pp. 735-745, 1999.

Abstract:   Idiopathic scoliosis is the most common spinal deformity, affecting perhaps as many as 5% of children. Early recognition of the condition is essential for optimal treatment. A widely used technique for identification is based on a somewhat crude angle measurement from a frontal spinal X-ray. Here, we provide a technique and new summary statistical measures for classifying spinal shape, and present results obtained from clinical X-rays.

  352.   Sinthanayothin, C, Boyce, JF, Cook, HL, and Williamson, TH, "Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images," BRITISH JOURNAL OF OPHTHALMOLOGY, vol. 83, pp. 902-910, 1999.

Abstract:   Aim-To recognise automatically the main components of the fundus on digital colour images. Methods-The main features of a fundus retinal image were defined as the optic disc, fovea, and blood vessels. Methods are described for their automatic recognition and location. 112 retinal images were preprocessed via adaptive, local, contrast enhancement. The optic discs were located by identifying the area with the highest variation in intensity of adjacent pixels. Blood vessels were identified by means of a multilayer perceptron neural net, for which the inputs were derived from a principal component analysis (PCA) of the image and edge detection of the first component of PCA. The foveas were identified using matching correlation together with characteristics typical of a fovea-for example, darkest area in the neighbourhood of the optic disc. The main components of the image were identified by an experienced ophthalmologist for comparison with computerised methods. Results-The sensitivity and specificity of the recognition of each retinal main component was as follows: 99.1% and 99.1% for the optic disc; 83.3% and 91.0% for blood vessels; 80.4% and 99.1% for the fovea. Conclusions-In this study the optic disc, blood vessels, and fovea were accurately detected. The identification of the normal components of the retinal image will aid the future detection of diseases in these regions. In diabetic retinopathy, for example, an image could be analysed for retinopathy with reference to sight threatening complications such as disc neovascularisation, vascular changes, or foveal exudation.

  353.   Zhao, BS, Reeves, AP, Yankelevitz, DF, and Henschke, CI, "Three-dimensional multicriterion automatic segmentation of pulmonary nodules of helical computed tomography images," OPTICAL ENGINEERING, vol. 38, pp. 1340-1347, 1999.

Abstract:   A 3-D multicriterion automatic segmentation algorithm is developed to improve accuracy of delineation of pulmonary nodules on helical computed tomography (CT) images by removing their adjacent structures. The algorithm applies multiple gray-value thresholds to a nodule region of interest (ROI). At each threshold level, the nodule candidate in the ROI is automatically detected by labeling 3-D connected components, followed by a 3-D morphologic opening operation. Once the nodule candidate is found, its two specific parameters, gradient strength of the nodule surface and a 3-D shape compactness factor, can be computed. The optimal threshold can be determined by analyzing these parameters. Our experiments with in vivo nodules demonstrate the feasibility of employing this algorithm to improve the accuracy of nodule delineation, especially for small nodules less than 1 cm in diameter. This discloses the potential of the algorithm for accurate characterizations of nodules (e.g., volume, change in volume over time) at an early stage, which can help to provide valuable guidance for further clinical management. (C) 1999 Society of Photo-Optical Instrumentation Engineers.
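
A rough sense of the multicriterion threshold selection can be given in a few lines. For each candidate threshold, the sketch below keeps the largest 3-D connected component that survives a morphological opening and scores it with a simple compactness factor; the gradient-strength criterion and the paper's actual decision rule are not reproduced, and the compactness formula here is just one plausible choice.

    import numpy as np
    from scipy import ndimage

    def segment_nodule(roi, thresholds):
        """Pick the most compact nodule candidate over a set of thresholds.

        roi        : 3-D array (the nodule region of interest)
        thresholds : iterable of grey-value thresholds to try
        """
        best, best_score = None, -np.inf
        for t in thresholds:
            binary = ndimage.binary_opening(roi > t)
            labels, n = ndimage.label(binary)
            if n == 0:
                continue
            sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
            cand = labels == (1 + int(np.argmax(sizes)))          # largest component
            volume = cand.sum()
            surface = np.count_nonzero(cand ^ ndimage.binary_erosion(cand))
            score = volume ** (2.0 / 3.0) / max(surface, 1)       # compactness proxy
            if score > best_score:
                best, best_score = cand, score
        return best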

  354.   Stammberger, T, Eckstein, F, Michaelis, M, Englmeier, KH, and Reiser, M, "Interobserver reproducibility of quantitative cartilage measurements: Comparison of B-spline snakes and manual segmentation," MAGNETIC RESONANCE IMAGING, vol. 17, pp. 1033-1042, 1999.

Abstract:   The objective of this work was to develop a segmentation technique for thickness measurements of the articular cartilage in MR images and to assess the interobserver reproducibility of the method in comparison with manual segmentation. The algorithm is based on a B-spline snakes approach and is able to delineate the cartilage boundaries in real time and with minimal user interaction. The interobserver reproducibility of the method, ranging from 3.3 to 13.6% for various section orientations and joint surfaces, proved to be significantly superior to manual segmentation. (C) 1999 Elsevier Science Inc.
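
For readers unfamiliar with B-spline snakes, the geometric half of the idea, representing a boundary as a smooth closed B-spline fitted through a handful of points, can be sketched with SciPy. The smoothing value and sampling density below are illustrative, and the paper's real-time interactive segmentation is of course not reproduced.

    import numpy as np
    from scipy.interpolate import splprep, splev

    def bspline_contour(points, smoothing=0.1, n_samples=200):
        """Fit a smooth, closed cubic B-spline through boundary points and
        resample it densely."""
        pts = np.asarray(points, dtype=float)
        pts = np.vstack([pts, pts[:1]])                    # close the contour
        tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing, per=True)
        u = np.linspace(0.0, 1.0, n_samples)
        x, y = splev(u, tck)
        return np.column_stack([x, y])

    # Example: noisy samples of a circle smoothed into a closed contour.
    theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
    noisy = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * np.random.randn(40, 2)
    contour = bspline_contour(noisy)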

  355.   Shyu, CR, Brodley, CE, Kak, AC, Kosaka, A, Aisen, AM, and Broderick, LS, "ASSERT: A physician-in-the-loop content-based retrieval system for HRCT image databases," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 75, pp. 111-132, 1999.

Abstract:   It is now recognized in many domains that content-based image retrieval from a database of images cannot be carried out by using completely automated approaches. One such domain is medical radiology, for which the clinically useful information in an image typically consists of gray level variations in highly localized regions of the image. Currently, it is not possible to extract these regions by automatic image segmentation techniques. To address this problem, we have implemented a human-in-the-loop (a physician-in-the-loop, more specifically) approach in which the human delineates the pathology bearing regions (PBR) and a set of anatomical landmarks in the image when the image is entered into the database. To the regions thus marked, our approach applies low-level computer vision and image processing algorithms to extract attributes related to the variations in gray scale, texture, shape, etc. In addition, the system records attributes that capture relational information such as the position of a PBR with respect to certain anatomical landmarks. An overall multidimensional index is assigned to each image based on these attribute values. (C) 1999 Academic Press.

  356.   Koss, JE, Newman, FD, Johnson, TK, and Kirch, DL, "Abdominal organ segmentation using texture transforms and a Hopfield neural network," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 640-648, 1999.

Abstract:   Abdominal organ segmentation is highly desirable but difficult, due to large differences between patients and to overlapping grey-scale values of the various tissue types. The first step in automating this process is to cluster together the pixels within each organ or tissue type. We propose to form images based on second-order statistical texture transforms (Haralick transforms) of a CT or MRI scan. The original scan plus the suite of texture transforms are then input into a Hopfield neural network (HNN). The network is constructed to solve an optimization problem, where the best solution is the minimum of a Lyapunov energy function. On a sample abdominal CT scan, this process successfully clustered 79-100% of the pixels of seven abdominal organs. It is envisioned that this is the first step to automate segmentation. Active contouring (e.g., snakes) or a back-propagation neural network can then be used to assign names to the clusters and fill in the incorrectly clustered pixels.

  357.   Chang, MW, Lin, E, and Hwang, JN, "Contour tracking using a knowledge-based snake algorithm to construct three-dimensional pharyngeal bolus movement," DYSPHAGIA, vol. 14, pp. 219-227, 1999.

Abstract:   Videofluorography (VFG) using a barium-mixed bolus is in wide clinical use for assessing patients with swallowing disorders. VFG is usually done with both lateral (LA) and anterior-posterior (AP) views, most commonly in two separate sittings. A real-time, three-dimensional (3-D) representation of the evolution of a pharyngeal bolus and its volumetric information can potentially help clinicians analyze and visualize the kinematics of swallowing, dysphagia, and compensatory therapeutic strategies. Active contour models, also known as "Snakes," have been used to solve various image analysis and computer vision problems. We applied a Snake algorithm to automate in part the contour tracking and reconstruction of VFG images to visualize and quantitatively analyze the 3-D evolution of a pharyngeal bolus. To improve the accuracy of the Snake search, we provided the additional "knowledge" of the pharyngeal image itself, which served as an extra constraint to push the Snake curve toward the desired contour. VFG of pharyngeal bolus transport in a normal subject was recorded by using barium-mixed boluses (viscosity: 185 centipoise, density: 2.84 g/cc) with volumes of 5, 10, and 20 ml. The resulting LA and AP video images were digitally captured and matched frame by frame. The knowledge-based Snake search algorithm was used to generate Snake points to satisfy both internal (i.e., smoothness) and external (i.e., boundary fitting) constraints. Using these Snake points, we traced the 3-D bolus movement at each time instant, assuming elliptic geometry in the cross-section of the pharyngeal bolus. By concatenating the 3-D images for each time instant, we developed a 3-D movie representing pharyngeal bolus movement. The efficiency, reproducibility, and accuracy of this algorithm in tracing pharyngeal bolus boundaries and estimating front/tail velocities were assessed and found satisfactory. We conclude that 3-D pharyngeal bolus movement can be traced both accurately and efficiently by using a knowledge-based Snake search algorithm.
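
The volumetric step lends itself to a one-line formula: with matched widths of the bolus at each axial level from the LA and AP views, an elliptic cross-section has area pi/4 times the product of the two widths, and the volume is the sum of those areas times the level spacing. The tiny sketch below uses hypothetical inputs and is not taken from the paper.

    import numpy as np

    def bolus_volume(widths_ap, widths_la, level_spacing):
        """Volume from per-level AP and LA widths, assuming elliptic cross-sections.

        Each level contributes area pi/4 * width_AP * width_LA; summing and
        multiplying by the spacing between levels approximates the volume.
        """
        a = np.asarray(widths_ap, dtype=float)
        b = np.asarray(widths_la, dtype=float)
        return float(np.sum(np.pi / 4.0 * a * b) * level_spacing)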

  358.   Kobbelt, LP, Vorsatz, J, Labsik, U, and Seidel, HP, "A shrink wrapping approach to remeshing polygonal surfaces," COMPUTER GRAPHICS FORUM, vol. 18, pp. C119-+, 1999.

Abstract:   Due to their simplicity and flexibility, polygonal meshes are about to become the standard representation for surface geometry in computer graphics applications. Some algorithms in the context of multiresolution representation and modeling can be performed much more efficiently and robustly if the underlying surface tessellations have the special subdivision connectivity. In this paper we propose a new algorithm for converting a given unstructured triangle mesh into one having subdivision connectivity. The basic idea is to simulate the shrink wrapping process by adapting the deformable surface technique known from image processing. The resulting algorithm generates subdivision connectivity meshes whose base meshes only have a very small number of triangles. The iterative optimization process that distributes the mesh vertices over the given surface geometry guarantees low local distortion of the triangular faces. We show several examples and applications including the progressive transmission of subdivision surfaces.

  359.   Bhandarkar, SM, and Zeng, X, "Evolutionary approaches to figure-ground separation," APPLIED INTELLIGENCE, vol. 11, pp. 187-212, 1999.

Abstract:   The problem of figure-ground separation is tackled from the perspective of combinatorial optimization. Previous attempts have used deterministic optimization techniques based on relaxation and gradient descent-based search, and stochastic optimization techniques based on simulated annealing and microcanonical annealing. A mathematical model encapsulating the figure-ground separation problem that makes explicit the definition of shape in terms of attributes such as cocircularity, smoothness, proximity and contrast is described. The model is based on the formulation of an energy function that incorporates pairwise interactions between local image features in the form of edgels and is shown to be isomorphic to the interacting spin (Ising) system from quantum physics. This paper explores a class of stochastic optimization techniques based on evolutionary algorithms for the problem of figure-ground separation. A class of hybrid evolutionary stochastic optimization algorithms based on a combination of evolutionary algorithms, simulated annealing and microcanonical annealing are shown to exhibit superior performance when compared to their purely evolutionary counterparts and to classical simulated annealing and microcanonical annealing algorithms. Experimental results on synthetic edgel maps and edgel maps derived from gray scale images are presented.

  360.   Ngoi, KP, and Jia, JC, "An active contour model for colour region extraction in natural scenes," IMAGE AND VISION COMPUTING, vol. 17, pp. 955-966, 1999.

Abstract:   The performance of active contours depends on the proper selection of model parameters and initial contours. In natural scenes, active contours often fail to converge to the desired solution because of unconstrained environmental conditions and complex object shapes. This paper presents a new active contour model for contour extraction in natural scenes. The proposed model is able to extract fairly complex object boundaries without the need to retune model parameters and image thresholds. Specific object features and a priori knowledge of the objects' topology are not required. Four schemes are proposed. An attraction/repulsion scheme deforms the active contour towards the object's boundary and makes it less sensitive to initialisation. A positive/negative contour scheme allows closed active contours to change their connectivity by splitting, thereby undergoing topological changes during the deformation process. An image scale scheme and an automatic thresholding scheme dynamically adapt the active contour in natural scenes. The proposed model is found to outperform the original snake model and degrade gracefully in the presence of image blur and Gaussian noise. Object boundaries are reliably extracted from a range of natural images. (C) 1999 Elsevier Science B.V. All rights reserved.

  361.   Huang, PS, Harris, CJ, and Nixon, MS, "Recognising humans by gait via parametric canonical space," ARTIFICIAL INTELLIGENCE IN ENGINEERING, vol. 13, pp. 359-366, 1999.

Abstract:   Based on principal component analysis (PCA), eigenspace transformation (EST) was demonstrated to be a potent metric in automatic face recognition and gait analysis by template matching, but without using data analysis to increase classification capability. Gait is a new biometric aimed to recognise subjects by the way they walk. In this article, we propose a new approach which combines canonical space transformation (CST) based on Canonical Analysis (CA), with EST for feature extraction. This method can be used to reduce data dimensionality and to optimise the class separability of different gait classes simultaneously. Each image template is projected from the high-dimensional image space to a low-dimensional canonical space. Using template matching, recognition of human gait becomes much more accurate and robust in this new space. Experimental results on a small database show how subjects can be recognised with 100% accuracy by their gait, using this method. (C) 1999 Elsevier Science Ltd. All rights reserved.

  362.   Lee, MS, and Medioni, G, "Grouping ., -, ->, theta into regions, curves, and junctions," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 76, pp. 54-69, 1999.

Abstract:   We address the problem of extracting segmented, structured information from noisy data obtained through local processing of images. A unified computational framework is developed for the inference of multiple salient structures such as junctions, curves, regions, and surfaces from any combinations of points, curve elements, and surface patch elements inputs in 2D and 3D. The methodology is grounded in two elements: tensor calculus for representation and nonlinear voting for data communication. Each input site communicates its information (a tensor) to its neighborhood through a predefined (tensor) field and, therefore, casts a (tensor) vote. Each site collects all the votes cast at its location and encodes them into a new tensor. A local, parallel routine such as a modified marching cube/square process then simultaneously detects junctions, curves, regions, and surfaces. The proposed method is noniterative, requires no initial guess or thresholding, can handle the presence of multiple curves, regions, and surfaces in a large amount of noise while it still preserves discontinuities, and the only free parameter is scale. We present results of curve and region inference from a variety of inputs. (C) 1999 Academic Press.

  363.   Tiddeman, B, Rabey, G, and Duffy, N, "Synthesis and transformation of three-dimensional facial images - Extending the principles of face-space transformations by using texture-mapped laser-scanned surface data," IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE, vol. 18, pp. 64-69, 1999.

  364.   Wang, KC, Dutton, RW, and Taylor, CA, "Improving geometric model construction for blood flow modeling - Geometric image segmentation and image-based model construction for computational hemodynamics," IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE, vol. 18, pp. 33-39, 1999.

  365.   Ma, TY, and Tagare, HD, "Consistency and stability of active contours with Euclidean and non-Euclidean arc lengths," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 8, pp. 1549-1559, 1999.

Abstract:   External energies of active contours are often formulated as Euclidean arc length integrals. In this paper, we show that such formulations are biased. By this we mean that the minimum of the external energy does not occur at an image edge. In addition, we also show that for certain forms of external energy the active contour is unstable-when initialized at the true edge, the contour drifts away and becomes jagged. Both of these phenomena are due to the use of Euclidean arc length integrals. We propose a non-Euclidean arc length which eliminates these problems. This requires a reformulation of active contours where a single external energy function is replaced by a sequence of energy functions and the contour evolves as an integral curve of the gradient of these energies. The resulting active contour not only has unbiased external energy, but is also more controllable. Experimental evidence is provided in support of the theoretical claims.

  366.   Trevelyan, J, "Redefining robotics for the new millennium," INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, vol. 18, pp. 1211-1223, 1999.

Abstract:   This paper argues that the term "robotics" needs to be redefined as "the science of extending human motor capabilities with machines," and uses the author's experience with robotics over the past 25 years to support this argument. The current definition is tied by default to the term "robot," which emerged from science fiction-this tie needs to be broken if robotics research is to be based on reality. The paper reviews the author's research on sheep shearing, vision, calibration, telerobotics, and landmine clearance, and draws some conclusions that point to the need for changing the contemporary view of robotics. A brief survey of subjects addressed by robotics-research journal articles and comments from other robotics researchers support this view. Finally, at a time when many people regard technology, and particularly automation, with considerable skepticism, the proposed definition is easier for ordinary people to understand and support, and it provides more freedom for researchers to find creative approaches.

  367.   Sim, HC, and Damper, RI, "A neural network approach to planar-object recognition in 3D space," PATTERN ANALYSIS AND APPLICATIONS, vol. 2, pp. 143-163, 1999.

Abstract:   Most existing 2D object recognition algorithms are not perspective (or projective) invariant, and hence are not suitable for many real-world applications. By contrast, one of the primary goals of this research is to develop a flat object matching system that can identify and localise an object, even when seen from different viewpoints in 3D space. In addition, we also strive to achieve good scale invariance and robustness against partial occlusion as in any practical 2D object recognition system. The proposed system uses multi-view model representations and objects are recognised by self-organised dynamic link matching. The merit of this approach is that it offers a compact framework for concurrent assessments of multiple match hypotheses by promoting competitions and/or co-operations among several local mappings of model and test image feature correspondences. Our experiments show that the system is very successful in recognising objects subject to perspective distortion, even in rather cluttered scenes.

  368.   Wolberg, WH, Street, WN, and Mangasarian, OL, "Importance of nuclear morphology in breast cancer prognosis," CLINICAL CANCER RESEARCH, vol. 5, pp. 3542-3548, 1999.

Abstract:   The purpose of this study is to define prognostic relationships between computer-derived nuclear morphological features, lymph node status, and tumor size in breast cancer. Computer-derived nuclear size, shape, and texture features were determined in fine-needle aspirates obtained at the time of diagnosis from 253 consecutive patients with invasive breast cancer. Tumor size and lymph node status were determined at the time of surgery. Median follow-up time was 61.5 months for patients without distant recurrence. In univariate analysis, tumor size, nuclear features, and the number of metastatic nodes were of decreasing significance for distant disease-free survival. Nuclear features, tumor size, and the number of metastatic nodes were of decreasing significance for overall survival. In multivariate analysis, the morphological size feature, largest perimeter, was more predictive of disease-free and overall survival than were either tumor size or the number of axillary lymph node metastases. This morphological feature, when combined with tumor size, identified more patients at both the good and poor ends of the prognostic spectrum than did the combination of tumor size and axillary lymph node status. Our data indicate that computer analysis of nuclear features has the potential to replace axillary lymph node status for staging of breast cancer. If confirmed by others, axillary dissection for breast cancer staging, estimating prognosis, and selecting patients for adjunctive therapy could be eliminated.

  369.   Chesnaud, C, Refregier, P, and Boulet, V, "Statistical region snake-based segmentation adapted to different physical noise models," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 21, pp. 1145-1157, 1999.

Abstract:   Algorithms for object segmentation are crucial in many image processing applications. During past years, active contour models (snakes) have been widely used for finding the contours of objects. This segmentation strategy is classically edge-based in the sense that the snake is driven to fit the maximum of an edge map of the scene. In this paper, we propose a region snake approach and we determine fast algorithms for the segmentation of an object in an image. The algorithms developed in a Maximum Likelihood approach are based on the calculation of the statistics of the inner and the outer regions (defined by the snake). It has thus been possible to develop optimal algorithms adapted to the random fields which describe the gray levels in the input image if we assume that their probability density function family is known. We demonstrate that this approach is still efficient when no boundary edge exists in the image. We also show that one can obtain fast algorithms by transforming the summations over a region, for the calculation of the statistics, into summations along the boundary of the region. Finally, we provide numerical simulation results for different physical situations in order to illustrate the efficiency of this approach.
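
As a rough illustration of the region-statistics idea described in this abstract (a hedged Python sketch, not the authors' exact estimator; the function name and interface are illustrative), for a Gaussian noise model the maximum-likelihood criterion depends only on per-region statistics of the gray levels inside and outside the snake:

    import numpy as np

    def gaussian_region_score(image, inside_mask):
        """Two-region Gaussian log-likelihood, up to additive constants.

        image: 2D numpy array of gray levels.
        inside_mask: boolean array of the same shape, True inside the snake.
        Illustrative only: the score depends solely on per-region variances,
        not on any edge map, which is the point made in the abstract above.
        """
        score = 0.0
        for mask in (inside_mask, ~inside_mask):
            pixels = image[mask].astype(float)
            n = pixels.size
            if n < 2:
                continue
            var = pixels.var() + 1e-12        # ML variance estimate of the region
            score += -0.5 * n * np.log(var)   # Gaussian log-likelihood up to constants
        return score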

  370.   Gotte, MJW, van Rossum, AC, Marcus, JT, Kuijer, JPA, Axel, L, and Visser, CA, "Recognition of infarct localization by specific changes in intramural myocardial mechanics," AMERICAN HEART JOURNAL, vol. 138, pp. 1038-1045, 1999.

Abstract:   Background After transmural myocardial infarction (MI), changes occur in intramural myocardial function. This has been described in anterior MI only. The aim of this study was to determine the relation between variable infarct locations and intramural deformation in patients with a first MI. Methods Forty patients (33 men and 7 women aged 57 +/- 11 years) with different infarct-related coronary arteries (25 left anterior descending, 7 circumflex, and 8 right coronary) were studied 6 +/- 3 days after infarction with magnetic resonance tissue tagging and 2-dimensional finite element analysis of myocardial deformation. Short-axis tagged images were acquired at base, mid, and apical level. Intramural deformation was measured in 6 circumferential segments per level. Results were compared with 9 age-matched healthy controls. Results Each infarct area demonstrated a significant reduction of intramural deformation. At mid-ventricular level, segments with maximum impaired intramural function were the anteroseptal segment for left anterior descending-related MI (stretch: 16% vs 33% for controls, P < .001), the posterolateral segment for circumflex-related MI (stretch: 20% vs 34%, P < .01), and the inferior segment for right coronary artery related MI (stretch: 18% vs 25%, P = .082). In these infarct segments, the intramural regional systolic stretch was more circumferentially oriented compared with radially oriented stretch in the same segments in controls (P < .05). Conclusion The infarct area can be recognized by a specific spatial pattern of intramural deformation. In infarcted compared with noninfarcted myocardium, deformation is significantly reduced and systolic stretch deviates from the radial direction. Left anterior descending related infarcts were found to have larger regional differences in intramural deformation than circumflex or right coronary artery related MI of enzymatically the same size.

  371.   Zhong, D, and Chang, SF, "An integrated approach for content-based video object segmentation and retrieval," IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 9, pp. 1259-1268, 1999.

Abstract:   Object-based video data representations enable unprecedented functionalities of content access and manipulation. In this paper, we present an integrated approach using region-based analysis for semantic video object segmentation and retrieval. We first present an active system that combines low-level region segmentation with user inputs for defining and tracking semantic video objects. The proposed technique is novel in using an integrated feature fusion framework for tracking and segmentation at both region and object levels. Experimental results and extensive performance evaluation show excellent results compared to existing systems. Building upon the segmentation framework, we then present a unique region-based query system for semantic video object. The model facilitates powerful object search, such as spatio-temporal similarity searching at multiple levels.

  372.   Shekhar, R, Cothren, RM, Vince, DG, Chandra, S, Thomas, JD, and Cornhill, JF, "Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images," COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 23, pp. 299-309, 1999.

Abstract:   Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements. (C) 1999 Elsevier Science Ltd. All rights reserved.

  373.   Akgul, YS, Kambhamettu, C, and Stone, M, "Automatic extraction and tracking of the tongue contours," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 1035-1045, 1999.

Abstract:   Computerized analysis of the tongue surface movement can provide valuable information to speech and swallowing research. Ultrasound technology is currently the most attractive modality for tongue imaging, mainly because of its high video frame rate. However, problems with ultrasound imaging, such as noise and echo artifacts, refractions, and unrelated reflections pose significant challenges for computer analysis of the tongue images and hence specific methods must be developed. This paper presents a system that is developed for automatic extraction and tracking of the tongue surface movements from ultrasound image sequences. The ultrasound images are supplied by the head and transducer support system (HATS), which was developed in order to fix the head and support the transducer under the chin in a known position without disturbing speech. In this work, we propose a novel scheme for the analysis of the tongue images using deformable contours. We incorporate novel mechanisms to 1) impose speech related constraints on the deformations; 2) perform spatiotemporal smoothing using a contour postprocessing stage; 3) utilize optical flow techniques to speed up the search process; and 4) propagate user supplied information to the analysis of all image frames. We tested the system's performance qualitatively and quantitatively in consultation with speech scientists. Our system produced contours that are within the range of manual measurement variations. The results of our system are extremely encouraging and the system can be used in practical speech and swallowing research in the field of otolaryngology.

  374.   Hagemann, A, Rohr, K, Stiehl, HS, Spetzger, U, and Gilsbach, JM, "Biomechanical modeling of the human head for physically based, nonrigid image registration," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 875-884, 1999.

Abstract:   The accuracy of image-guided neurosurgery generally suffers from brain deformations due to intraoperative changes. These deformations cause significant changes of the anatomical geometry (organ shape and spatial interorgan relations), thus making intraoperative navigation based on preoperative images error prone. In order to improve the navigation accuracy, we developed a biomechanical model of the human head based on the finite element method, which can be employed for the correction of preoperative images to cope with the deformations occurring during surgical interventions. At the current stage of development, the two-dimensional (2-D) implementation of the model comprises two different materials, though the theory holds for the three-dimensional (3-D) case and is capable of dealing with an arbitrary number of different materials. For the correction of a preoperative image, a set of homologous landmarks must be specified which determine correspondences. These correspondences can be easily integrated into the model and are maintained throughout the computation of the deformation of the preoperative image. The necessary material parameter values have been determined through a comprehensive literature study. Our approach has been tested for the case of synthetic images and yields physically plausible deformation results. Additionally, we carried out registration experiments with a preoperative MR image of the human head and a corresponding postoperative image simulating an intraoperative image. We found that our approach yields good prediction results, even in the case when correspondences are given in a relatively small area of the image only.

  375.   McInerney, T, and Terzopoulos, D, "Topology adaptive deformable surfaces for medical image volume segmentation," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 840-850, 1999.

Abstract:   Deformable models, which include deformable contours (the popular snakes) and deformable surfaces, are a powerful model-based medical image analysis technique. We develop a new class of deformable models by formulating deformable surfaces in terms of an affine cell image decomposition (ACID). Our approach significantly extends standard deformable surfaces, while retaining their interactivity and other desirable properties. In particular, the ACID induces an efficient reparameterization mechanism that enables parametric deformable surfaces to evolve into complex geometries, even modifying their topology as necessary. We demonstrate that our new ACID-based deformable surfaces, dubbed T-surfaces, can effectively segment complex anatomic structures from medical volume images.

  376.   Kelemen, A, Szekely, G, and Gerig, G, "Elastic model-based segmentation of 3-D neuroradiological data sets," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 828-839, 1999.

Abstract:   This paper presents a new technique for the automatic model-based segmentation of three-dimensional (3-D) objects from volumetric image data. The development closely follows the seminal work of Taylor and Cootes on active shape models, but is based on a hierarchical parametric object description rather than a point distribution model. The segmentation system includes both the building of statistical models and the automatic segmentation of new image data sets via a restricted elastic deformation of shape models. Geometric models are derived from a sample set of image data which have been segmented by experts. The surfaces of these binary objects are converted into parametric surface representations, which are normalized to get an invariant object-centered coordinate system. Surface representations are expanded into series of spherical harmonics which provide parametric descriptions of object shapes. It is shown that invariant object surface parametrization provides a good approximation to automatically determine object homology in terms of sets of corresponding surface points. Gray-level information near object boundaries is represented by 1-D intensity profiles normal to the surface. Considering automatic segmentation of brain structures as our driving application, our choice of coordinates for object alignment was the well-accepted stereotactic coordinate system. Major variations of object shapes around the mean shape, also referred to as shape eigenmodes, are calculated in shape parameter space rather than the feature space of point coordinates. Segmentation makes use of the object shape statistics by restricting possible elastic deformations into the range of the training shapes. The mean shapes are initialized in a new data set by specifying the landmarks of the stereotactic coordinate system. The model elastically deforms, driven by the displacement forces across the object's surface, which are generated by matching local intensity profiles. Elastic deformations are limited by setting bounds for the maximum variations in eigenmode space. The technique has been applied to automatically segment left and right hippocampus, thalamus, putamen, and globus pallidus from volumetric magnetic resonance scans taken from schizophrenia studies. The results have been validated by comparison of automatic segmentation with the results obtained by interactive expert segmentation.
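
For orientation, restricting deformations to the range of the training shapes is usually expressed in the familiar eigenmode form (generic active-shape-style notation, not necessarily the authors' own symbols, since here the statistics are computed in spherical-harmonic parameter space):

    x = \bar{x} + \sum_{k=1}^{m} b_k \, \phi_k, \qquad |b_k| \le 3\sqrt{\lambda_k},

where \phi_k and \lambda_k are the eigenvectors and eigenvalues of the training-set covariance and the bounds on each coefficient b_k confine the model to plausible variations.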

  377.   Olver, PJ, Sapiro, G, and Tannenbaum, A, "Affine invariant detection: Edge maps, anisotropic diffusion, and active contours," ACTA APPLICANDAE MATHEMATICAE, vol. 59, pp. 45-77, 1999.

Abstract:   In this paper we undertake a systematic investigation of affine invariant object detection and image denoising. Edge detection is first presented from the point of view of the affine invariant scale-space obtained by curvature based motion of the image level-sets. In this case, affine invariant maps are derived as a weighted difference of images at different scales. We then introduce the affine gradient as an affine invariant differential function of lowest possible order with qualitative behavior similar to the Euclidean gradient magnitude. These edge detectors are the basis for the extension of the affine invariant scale-space to a complete affine flow for image denoising and simplification, and to define affine invariant active contours for object detection and edge integration. The active contours are obtained as a gradient flow in a conformally Euclidean space defined by the image on which the object is to be detected. That is, we show that objects can be segmented in an affine invariant manner by computing a path of minimal weighted affine distance, the weight being given by functions of the affine edge detectors. The gradient path is computed via an algorithm which allows to simultaneously detect any number of objects independently of the initial curve topology. Based on the same theory of affine invariant gradient flows we show that the affine geometric heat flow is minimizing, in an affine invariant form, the area enclosed by the curve.
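
The affine geometric heat flow referred to at the end of this abstract is commonly written (in standard curve-evolution notation, quoted here only for context) as

    \frac{\partial C}{\partial t} = \kappa^{1/3}\, \mathcal{N},

where \kappa is the Euclidean curvature of the evolving curve C and \mathcal{N} its inward unit normal; the cube root is what makes the evolution invariant under affine transformations.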

  378.   Terzopoulos, D, "Visual modeling for multimedia content," ADVANCED MULTIMEDIA CONTENT PROCESSING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1554, pp. 406-421, 1999.

Abstract:   This paper reviews research that addresses the challenging problem of modeling living systems for multimedia content creation. First, I discuss the modeling of animals in their natural habitats for use in animated virtual worlds. The basic approach is to implement realistic artificial animals (in particular, fish) and to give them the ability to locomote, perceive, and in some sense understand the realistic virtual worlds in which they are situated so that they may achieve both individual and social functionality within these worlds. Second, I discuss the modeling of human faces. The goal is to develop facial models that are capable of synthesizing realistic expressions. At different levels of abstraction, these hierarchical models capture knowledge from psychology, facial anatomy and tissue histology, and continuum biomechanics. The facial models can be "personalized", or made to conform closely to individuals, once facial geometry and photometry information has been captured by a range sensor.

  379.   Mortensen, EN, "Vision-assisted image editing," COMPUTER GRAPHICS-US, vol. 33, pp. 55-57, 1999.

  380.   Kozek, T, and Vilarino, DL, "An active contour algorithm for continuous-time cellular neural networks," JOURNAL OF VLSI SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, vol. 23, pp. 403-414, 1999.

Abstract:   A CNN-based algorithm for image segmentation by active contours is proposed here. The algorithm is based on an iterative process of expansion of the contour and its subsequent thinning guided by external and internal energy. The proposed strategy allows for a high level of control over contour evolution, making their topological transformations easier. Therefore processing of multiple contours for segmenting several objects can be carried out simultaneously.

  381.   Rekeczky, C, and Chua, LO, "Computing with front propagation: Active contour and skeleton models in continuous-time CNN," JOURNAL OF VLSI SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, vol. 23, pp. 373-402, 1999.

Abstract:   In this paper, a linear CNN template class is studied with a symmetric feedback matrix capable of generating trigger-waves, a special type of binary traveling-wave. The qualitative properties of these waves are examined and some simple control strategies are derived based on modifying the bias and feedback terms in a CNN template. It is shown that a properly controlled wave-front can be efficiently used in segmentation, shape and structure detection/recovery tasks. Shape is represented by the contour of an evolving front. An algorithmic framework is discussed that incorporates bias controlled trigger-waves in tracking the active contour of the objects during rigid and non-rigid motion. The object skeleton (structure) is obtained as a composition of stable annihilation lines formed during the collision of trigger wave-fronts. The shortest path problem in a binary labyrinth is also formulated as a special type of skeletonization task and solved by combined trigger-wave based techniques.

  382.   Velasco, HMG, Aligue, FJL, Orellana, CJG, Macias, MM, and Sotoca, MIA, "Application of ANN techniques to automated identification of bovine livestock," ENGINEERING APPLICATIONS OF BIO-INSPIRED ARTIFICIAL NEURAL NETWORKS, VOL II, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1607, pp. 422-431, 1999.

Abstract:   In this work a classification system is presented that, taking lateral images of cattle as inputs, is able to identify the animals and classify them by breed into previously learnt classes. The system consists of two fundamental parts. In the first one, a deformable-model-based preprocessing of the image is made, in which the contour of the animal in the photograph is sought, extracted, and normalized. Next, a neural classifier is presented that, supplemented with a decision-maker at its output, makes the distribution into classes. In the last part, the results obtained in a real application of this methodology are presented.

  383.   Chella, A, Di Gesu, V, Infantino, I, Intravaia, D, and Valenti, C, "A cooperating strategy for objects recognition," SHAPE, CONTOUR AND GROUPING IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1681, pp. 264-274, 1999.

Abstract:   The paper describes an object recognition system, based on the co-operation of several visual modules (early vision, object detector, and object recognizer). The system is active because the behavior of each module is tuned on the results given by other modules and by the internal models. This solution allows the system to detect inconsistencies and to generate a feedback process. The proposed strategy has shown good performance especially in the case of complex scene analysis, and it has been included in the visual system of the DAISY robotics system. Experimental results on real data are also reported.

  384.   Doucette, P, Agouris, P, Musavi, M, and Stefanidis, A, "Automated extraction of linear features from aerial imagery using Kohonen learning and GIS data," INTEGRATED SPATIAL DATABASES: DIGITAL IMAGES AND GIS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1737, pp. 20-33, 1999.

Abstract:   An approach to semi-automated linear feature extraction from aerial imagery is introduced in which Kohonen's self-organizing map (SOM) algorithm is integrated with existing GIS data. The SOM belongs to a distinct class of neural networks which is characterized by competitive and unsupervised learning. Using radiometrically classified image pixels as input, appropriate SOM network topologies are modeled to extract underlying spatial structures contained in the input patterns. Coarse-resolution GIS vector data is used for network weight and topology initialization when extracting specific feature components. The Kohonen learning rule updates the synaptic weight vectors of winning neural units that represent 2-D vector shape vertices. Experiments with high-resolution hyperspectral imagery demonstrate a robust ability to extract centerline information when presented with coarse input.
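
The Kohonen learning rule mentioned here has the standard self-organizing-map form (a general statement of the rule, not this paper's specific network topology or parameter schedule):

    w_j(t+1) = w_j(t) + \eta(t)\, h_{j,c(x)}(t)\, \bigl( x - w_j(t) \bigr),

where c(x) is the winning (best-matching) unit for input x, \eta(t) is a decaying learning rate, and h_{j,c(x)}(t) is a neighborhood function that shrinks over time; in this application the weight vectors double as 2-D vertices of the extracted linear feature.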

  385.   Laading, JK, McCulloch, C, Johnson, VE, Gilland, DR, and Jaszczak, RJ, "A hierarchical feature based deformation model applied to 4D cardiac SPECT data," INFORMATION PROCESSING IN MEDICAL IMAGING, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1613, pp. 266-279, 1999.

Abstract:   In this paper we describe a statistical model for the observation of labeled points in gated cardiac single photon emission computed tomography (SPECT) images. The model has two major parts: one based on shape correspondence between the image for evaluation and a reference image, and a second based on the match in image features. While the statistical deformation model is applicable to a broad range of image objects, the addition of a contraction mechanism to the baseline model provides particularly convincing results in gated cardiac SPECT. The model is applied to clinical data and provides marked improvement in the quality of summary images for the time series. Estimates of heart deformation and contraction parameters are also obtained.

  386.   Chung, DH, and Sapiro, G, "A windows-based user friendly system for image analysis with partial differential equations," SCALE-SPACE THEORIES IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1682, pp. 453-458, 1999.

Abstract:   In this paper we present and briefly describe a Windows user-friendly system designed to assist with the analysis of images in general, and biomedical images in particular. The system, which is being made publicly available to the research community, implements basic 2D image analysis operations based on partial differential equations (PDE's). The system is under continuous development, and already includes a large number of image enhancement and segmentation routines that have been tested for several applications.

  387.   Sifakis, E, and Tziritas, G, "Fast marching to moving object location," SCALE-SPACE THEORIES IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1682, pp. 447-452, 1999.

Abstract:   In this paper we address two important problems in motion analysis: the detection of moving objects and their localization. Statistical and level set approaches are adopted in order to formulate these problems. For the change detection problem, the inter-frame difference is modeled by a mixture of two zero-mean Laplacian distributions. At first, statistical tests using criteria with negligible error probability are used for labeling as many sites as possible as changed or unchanged. All the connected components of the labeled sites are seed regions, which give the initial level sets, for which velocity fields for label propagation are provided. We introduce a new multi-label fast marching algorithm for expanding competitive regions. The solution of the localization problem is based on the map of changed pixels previously extracted. The boundary of the moving object is determined by a level set algorithm, which is initialized by two curves evolving in converging opposite directions. The sites of curve contact determine the position of the object boundary. For illustrating the efficiency of the proposed approach, experimental results are presented using real video sequences.

  388.   Chan, T, and Vese, L, "An active contour model without edges," SCALE-SPACE THEORIES IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1682, pp. 141-151, 1999.

Abstract:   In this paper, we propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, the Mumford-Shah functional for segmentation, and level sets. Our model can detect objects whose boundaries are not necessarily defined by gradient. The model is a combination between more classical active contour models using mean curvature motion techniques, and the Mumford-Shah model for segmentation. We minimize an energy which can be seen as a particular case of the so-called minimal partition problem. In the level set formulation, the problem becomes a "mean-curvature flow"-like evolution of the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. Finally, we will present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable.
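
For reference, the energy minimized by this model is usually written (in the notation the authors later popularized; quoted here as background, with u_0 the given image and c_1, c_2 the mean intensities inside and outside the contour C) as

    F(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \nu\,\mathrm{Area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |u_0 - c_1|^2 \, dx + \lambda_2 \int_{\mathrm{outside}(C)} |u_0 - c_2|^2 \, dx,

which contains no image-gradient term, hence the title "without edges".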

  389.   Gomes, J, and Faugeras, O, "Reconciling distance functions and level sets," SCALE-SPACE THEORIES IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1682, pp. 70-81, 1999.

Abstract:   This paper is concerned with the simulation of the Partial Differential Equation (PDE) driven evolution of a closed surface by means of an implicit representation. In most applications, the natural choice for the implicit representation is the signed distance function to the closed surface. Osher and Sethian propose to evolve the distance function with a Hamilton-Jacobi equation. Unfortunately the solution to this equation is not a distance function. As a consequence, the practical application of the level set method is plagued with such questions as: when do we have to "reinitialize" the distance function? How do we "reinitialize" the distance function? These questions reveal a disagreement between the theory and its implementation. This paper proposes an alternative to the use of Hamilton-Jacobi equations which eliminates this contradiction: in our method the implicit representation always remains a distance function by construction, and the implementation does not differ from the theory anymore. This is achieved through the introduction of a new equation. Besides its theoretical advantages, the proposed method also has several practical advantages which we demonstrate in two applications: (i) the segmentation of the human cortex surfaces from MRI images using two coupled surfaces [26], (ii) the construction of a hierarchy of Euclidean skeletons of a 3D surface.
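
For context, the standard level-set evolution and the reinitialization PDE whose use the authors question are commonly written (standard forms only; the paper's own alternative equation differs) as

    \phi_t + F\,|\nabla \phi| = 0, \qquad \phi_\tau = \mathrm{sign}(\phi_0)\,\bigl( 1 - |\nabla \phi| \bigr),

where F is the speed of the front in the normal direction and the second equation is iterated to restore \phi to a signed distance function (|\nabla \phi| = 1).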

  390.   Bertalmio, M, Sapiro, G, and Randall, G, "Morphing active contours," SCALE-SPACE THEORIES IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1682, pp. 46-57, 1999.

Abstract:   A method for deforming curves in a given image to a desired position in a second image is introduced in this paper. The algorithm is based on deforming the first image toward the second one via a partial differential equation, while tracking the deformation of the curves of interest in the first image with an additional, coupled, partial differential equation. The tracking is performed by projecting the velocities of the first equation into the second one. In contrast with previous PDE based approaches, both the images and the curves on the frames/slices of interest are used for tracking. The technique can be applied to object tracking and sequential segmentation. The topology of the deforming curve can change, without any special topology handling procedures added to the scheme. This permits, for example, the automatic tracking of scenes where, due to occlusions, the topology of the objects of interest changes from frame to frame. In addition, this work introduces the concept of projecting velocities to obtain systems of coupled partial differential equations for image analysis applications. We show examples for object tracking and segmentation of electronic microscopy. We also briefly discuss possible uses of this framework for three dimensional morphing.

  391.   Goldenberg, R, Kimmel, R, Rivlin, E, and Rudzsky, M, "Fast geodesic active contours," SCALE-SPACE THEORIES IN COMPUTER VISION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1682, pp. 34-45, 1999.

Abstract:   We use an unconditionally stable numerical scheme to implement a fast version of the geodesic active contour model. The proposed scheme is useful for object segmentation in images, like tracking moving objects in a sequence of images. The method is based on the Weickert-Romeney-Viergever [33] AOS scheme. It is applied at small regions, motivated by Adalsteinsson-Sethian [1] level set narrow band approach, and uses Sethian's fast marching method [26] for re-initialization. Experimental results demonstrate the power of the new method for tracking in color movies.
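
The AOS (additive operator splitting) update used by this scheme is usually written (generic form of the Weickert et al. scheme, given here only for orientation) as

    u^{k+1} = \frac{1}{m} \sum_{l=1}^{m} \bigl( I - m\,\tau\, A_l(u^k) \bigr)^{-1} u^k,

where m is the number of coordinate axes, \tau the time step, and A_l(u^k) the one-dimensional diffusion operator along axis l; each factor is tridiagonal, so every step costs only linear time while remaining stable for large \tau.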

  392.   Katahara, S, and Aoki, M, "Face parts extraction window based on bilateral symmetry of gradient direction," COMPUTER ANALYSIS OF IMAGES AND PATTERNS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1689, pp. 489-497, 1999.

Abstract:   We propose a simple algorithm to determine a face parts extraction window in a face image. We utilize bilateral symmetries between and within face parts. We also use knowledge about the size and relative location of face parts. First, we examine bilateral symmetries around vertical orientation edges, then obtain symmetry measures. The symmetry measures are projected onto the y-axis to produce a histogram of the measures. We estimate the height of face parts regions by the frequency of the histogram. The face parts region which contains the maximum frequency of the histogram becomes a candidate for the face parts region that includes eyes and eyebrows. Secondly, the measures that exist within the height of the face parts region are projected onto the x-axis to estimate the width of the face parts region. We determine face parts extraction windows by the estimated height and width. Finally, we detect irises in the candidate face parts region that includes eyes and eyebrows, using a circular mask.

  393.   Klemencic, A, Pernus, F, and Kovacic, S, "Modeling morphological changes during contraction of muscle fibres by active contours," COMPUTER ANALYSIS OF IMAGES AND PATTERNS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1689, pp. 134-141, 1999.

Abstract:   An active contour model with expansion "balloon" forces was used as a tool to simulate the changes in shape and increase in cross-sectional area which occur during the contraction of an isolated muscle fiber. A polygon, imitating the boundaries of the relaxed muscle fiber cross-section, represented the initial position of the active contour model. This contour was then expanded in order to increase the cross-sectional area and at the same time intrinsic elastic properties smoothed the contour. The process of expansion was terminated when the area of the inflated contour surpassed the preset value. The equations that we give lead to a controlled expansion of the active contour model.

  394.   Berger, MO, Winterfeldt, G, and Lethor, JP, "Contour tracking in echo cardiographic sequences without learning stage: Application to the 3D reconstruction of the beating left ventricule," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, MICCAI'99, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1679, pp. 508-515, 1999.

Abstract:   In this paper we present a contour tracker on echographic image sequences. To do this, we use a hierarchical approach: we first compute a global estimation of the ventricular motion. Then we use a fine tuning algorithm to adjust the detection of the ventricular wall. The global estimation is based on a parametric motion model with a small number of parameters. This allows us to compute the motion in a robust way from the velocity computed at each point of the contour. Results are presented demonstrating tracking on various echographic sequences. We conclude by discussing some of our current research efforts.

  395.   Montagnat, J, Delingette, H, and Malandain, G, "Cylindrical echocardiographic image segmentation based on 3D deformable models," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, MICCAI'99, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1679, pp. 168-175, 1999.

Abstract:   This paper presents a 3D echocardiographic image segmentation procedure based on deformable surfaces. We first propose to adapt filtering techniques to the cylindrical geometry of several 3D ultrasound image devices. Then we compare the effect of different external forces on a surface template deformation inside volumetric echocardiographic images. An original method involving region grey-level analysis along the model normal directions is described. We rely on an a priori knowledge of the cardiac left ventricle shape and on region grey-level values to perform a robust segmentation. During the deformation process the allowable surface deformation is modified. Finally, we show experimental results on very challenging sparse and noisy images and quantitative measurements of the left ventricle volume.

  396.   Liang, JM, McInerney, T, and Terzopoulos, D, "Interactive medical image segmentation with United Snakes," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, MICCAI'99, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1679, pp. 116-127, 1999.

Abstract:   Snakes have become a standard image analysis technique with several variants now in common use. We have developed a software package called "United Snakes". It unifies the most important snake variants, including finite difference, B-spline, and Hermite polynomial snakes, within the framework of a general finite element formulation with a choice of shape functions. Furthermore, we have incorporated into united snakes a recently proposed snake-like technique known as "livewire", via a method for imposing hard constraints on snakes. Here, we demonstrate that the combination of techniques in united snakes yields generality, accuracy, ease of use, and robustness in several medical image analysis applications, including the segmentation of neuronal dendrites in EM images, dynamic chest image analysis, and the quantification of growth plates.

  397.   Hug, J, Brechbuhler, C, and Szekely, G, "Tamed Snake: A particle system for robust semi-automatic segmentation," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, MICCAI'99, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1679, pp. 106-115, 1999.

Abstract:   Semi-automatic segmentation approaches tend to overlook the problems caused by missing or incomplete image information. In such situations, powerful control mechanisms and intuitive modelling metaphors should be provided in order to make the methods practically applicable. Taking this problem into account, the usage of subdivision curves in combination with the simulation of edge attracted mass points is proposed as a novel way towards a more robust interactive segmentation methodology. Subdivision curves provide a hierarchical and smooth representation of a shape which can be modified on coarse and on fine scales as well. Furthermore, local adaptive subdivision gives the required flexibility when dealing with a discrete curve representation. In order to incorporate image information, the control vertices of a curve are considered mass points, attracted by edges in the local neighbourhood of the image. This so-called Tamed Snake framework is illustrated by means of the segmentation of two medical data sets and the results are compared with those achieved by traditional Snakes.

  398.   Bajaj, CL, Chen, JD, Holt, RJ, and Netravali, AN, "Energy formulations of A-splines," COMPUTER AIDED GEOMETRIC DESIGN, vol. 16, pp. 39-59, 1999.

Abstract:   A-splines are implicit real algebraic curves in Bernstein-Bezier (BB) form that are smooth. We develop A-spline curve models using various energy formulations, incorporating bending and stretching energy, based on the theory of elasticity. Because the attempt to find true energy minimizing curves usually leads to complicated integrals which can only be solved numerically, we introduce a simplified energy formulation which is much faster to compute yet still provides reasonably accurate results. Several examples for C-1-continuous quadratic A-splines using the true and simplified energy models are then presented. (C) 1999 Elsevier Science B.V. All rights reserved.

  399.   Liao, CW, and Medioni, G, "Simultaneous surface approximation and segmentation of complex objects," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 73, pp. 43-63, 1999.

Abstract:   Deformable models represent a useful approach to approximate objects from collected data points. We propose to augment the basic approaches designed to handle mostly compact objects or objects of known topology. Our approach can fit simultaneously more than one curve or surface to approximate multiple topologically complex objects by using (1) the residual data points, (2) the badly fitting parts of the approximating surface, and (3) appropriate Boolean operations. In 2-D, B-snakes [3] are used to approximate each object (pattern). In 3-D, an analytical surface representation, based on the elements detected, is presented. The global representation of a 3-D object, in terms of elements and their connection, takes the form of B-spline and Bezier surfaces. A Bezier surface is used to connect different elements, and the connecting surface itself conforms to the data points nearby through energy minimization. This way, a G(1) continuity surface is achieved for the underlying 3-D object. We present experiments on synthetic and real data in 2-D and 3-D. In these experiments, multiple complex patterns and objects with through holes are segmented. The system proceeds automatically without human interaction or any prior knowledge of the topology of the underlying object. (C) 1999 Academic Press.
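
The B-snakes mentioned here represent the contour through a small set of control points, in the generic form (standard B-spline notation, used only for illustration)

    v(s) = \sum_{i} c_i\, B_{i,k}(s),

where B_{i,k} are B-spline basis functions of order k and the fitting energy is minimized over the control points c_i rather than over densely sampled contour nodes.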

  400.   Yuan, C, Lin, E, Millard, J, and Hwang, JN, "Closed contour edge detection of blood vessel lumen and outer wall boundaries in black-blood MR images," MAGNETIC RESONANCE IMAGING, vol. 17, pp. 257-266, 1999.

Abstract:   Quantitative measurements of the blood vessel wall area may provide useful information on atherosclerotic plaque burden, progression and/or regression. Magnetic resonance imaging is a promising technique for identifying both luminal and outer wall boundaries of the human blood vessels. Currently these boundaries are primarily defined manually, a process viewed as labor intensive and subject to significant operator bias. Fully automated post-processing techniques used for identifying the lumen and wall boundaries, on the other hand, are also problematic due to the complexity of signal features in the vicinity of the blood vessels. The goals of this study were to develop a robust, automated closed contour edge detection algorithm, apply this algorithm to high resolution human carotid artery images, and assess its accuracy and reproducibility. Our algorithm has proven to be sensitive to various contrast situations and is reasonably accurate and highly reproducible. (C) 1999 Elsevier Science Inc.

  401.   Scott, CH, Sutton, MS, Gusani, N, Fayad, Z, Kraitchman, D, Keane, MG, Axel, L, and Ferrari, VA, "Effect of dobutamine on regional left ventricular function measured by tagged magnetic resonance imaging in normal subjects," AMERICAN JOURNAL OF CARDIOLOGY, vol. 83, pp. 412-417, 1999.

Abstract:   The effect of inotropic stimulation on the pattern and magnitude of regional left ventricular contraction was studied using tagged magnetic resonance imaging to assess whether dobutamine exacerbates variation in regional contraction at rest. Dobutamine stress testing defines a normal response as a homogeneous increase in regional wall motion. In 8 normal subjects, 4 equally spaced left ventricular short-axis levels were imaged through systole using tagged magnetic resonance imaging. The baseline imaging sequence was repeated with 5-, 10-, 15-, and 20-mu g/kg/min dobutamine infusion. Regional myocardial displacement, radial thickening, and circumferential shortening were measured. The left ventricle was analyzed by level (base to apex) and wall (septum, inferior, lateral, anterior). Dobutamine did not alter baseline regional functional heterogeneity. Dobutamine infusion resulted in a uniform increase in displacement, radial thickening, and circumferential shortening from baseline to 10-mu g/kg/min infusion without additional increases at higher doses. (C) 1999 by Excerpta Medica, Inc.

  402.   Gee, JC, "On matching brain volumes," PATTERN RECOGNITION, vol. 32, pp. 99-111, 1999.

Abstract:   To characterize the complex morphological variations that occur naturally in human neuroanatomy so that their confounding effect can be minimized in the identification of brain structures in medical images, a computational framework has evolved in which individual anatomies are modeled as warped versions of a canonical representation of the anatomy, known as an atlas. To realize this framework, the method of elastic matching was invented for determining the spatial mapping between a three-dimensional image pair in which one image volume is modeled as an elastic continuum that is deformed to match the appearance of the second volume. In this paper, we review the seminal ideas underlying the elastic matching technique, consider the practical implications of an integral formulation of the approach, and explore a more general Bayesian interpretation of the method in order to address issues that are less naturally resolved within a continuum mechanical setting, such as the examination of a solution's reliability or the incorporation of empirical information that may be available about the spatial mappings into the analysis. (C) 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  403.   Marchant, JA, Schofield, CP, and White, RP, "Pig growth and conformation monitoring using image analysis," ANIMAL SCIENCE, vol. 68, pp. 141-150, 1999.

Abstract:   Machine vision can be used to collect images of pigs and analyse them to identify and measure specific areas and dimensions related to their growth, shape and hence conformation. This information could improve the stockman's ability to maximize production efficiency and also to monitor health by detecting abnormalities in growth rates. This work introduces fully automated algorithms which find the plan view outline of animals in a normal housing situation, divide the outline into major body components and measure specified dimensions and areas. Special attention is paid to determining whether the results are sufficiently repeatable to be useful in estimating these parameters. Problems in compensating for changes in the optical geometry are outlined and methods proposed to deal with them. The repeatability of the image analysis process coupled with the subsequent signal processing for outlier rejection gives s.e. values on areas of < 0.005 and on linear dimensions of < 0.0025. For example, the plan view area less head and neck (A4) can be used to predict the weight of the group of pigs at 34 kg, 66 kg and 98 kg with standard errors of 0.25 kg, 0.17 kg and 0.39 kg respectively when using manual weighing results to calibrate the system. If an individual pig is weighed once at 75 days (e.g. 34 kg) to calibrate the A4-to-weight relationship, subsequent A4 measurements can be used to predict its weight when 125 days old (approx. 80 kg) to within 1 kg. This matches the accuracy of the manual weighing system used in the trials. The effect of pig gender on the area to weight relationships is not significant (P = 0.074), but there is a small yet significant gender effect with the linear dimensions.

  404.   Post, FH, de Leeuw, WC, Sadarjoen, IA, Reinders, F, and van Walsum, T, "Global, geometric, and feature-based techniques for vector field visualization," FUTURE GENERATION COMPUTER SYSTEMS, vol. 15, pp. 87-98, 1999.

Abstract:   Vector field visualization techniques are subdivided into three categories: global, geometric, and feature-based techniques. We describe each category, and we present some related work and an example in each category from our own recent research. Spot noise is a texture synthesis technique for global visualization of vector fields on 2D surfaces. Deformable surfaces is a generic technique for extraction and visualization of geometric objects (surfaces or volumes) in 3D data fields. Selective and iconic visualization is an approach that extracts important regions or structures from large data sets, calculates high-level attributes, and visualizes the features using parameterized iconic objects. It is argued that for vector fields a range of visualization techniques is needed to fulfill the needs of the application. (C) 1999 Published by Elsevier Science B.V. All rights reserved.

  405.   Davatzikos, C, and Prince, JL, "Convexity analysis of active contour problems," IMAGE AND VISION COMPUTING, vol. 17, pp. 27-36, 1999.

Abstract:   A general active contour formulation is considered and a convexity analysis of its energy function is presented. Conditions under which this formulation has a unique solution are derived; these conditions involve both the active contour energy potential and the regularization parameters. This analysis is then applied to four particular active contour formulations, revealing important characteristics about their convexity, and suggesting that external potentials involving center-of-mass computations may be better behaved than the usual potentials based on image gradients. Our analysis also provides an explanation for the poor convergence behavior at concave boundaries and suggests an alternate algorithm for approaching these types of boundaries. (C) 1999 Elsevier Science B.V. All rights reserved.
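
For reference, the general active contour energy whose convexity is analyzed here takes the familiar form (standard snake notation; the authors' exact symbols may differ)

    E(v) = \int_0^1 \tfrac{1}{2}\Bigl( \alpha(s)\,|v'(s)|^2 + \beta(s)\,|v''(s)|^2 \Bigr) + P\bigl(v(s)\bigr)\, ds,

where \alpha and \beta are the regularization parameters and P is the external potential; the analysis asks under what conditions this functional has a unique minimizer.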

  406.   Lee, MK, Drangova, M, Holdsworth, DW, and Fenster, A, "Application of dynamic computed tomography for measurements of local aortic elastic modulus," MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, vol. 37, pp. 13-24, 1999.

Abstract:   A novel computed tomographic (CT) technique used for the instantaneous measurement of the dynamic elastic modulus of intact excised porcine aortic vessels subjected to physiological pressure waveforms is described. This system comprised a high resolution X-ray image intensifier based computed tomographic system with limiting spatial resolution of 3.2 mm(-1) (for a 40 mm field of view) and a computer-controlled flow simulator. Utilising cardiac gating and computer control, a time-resolved sequence of 1 mm thick axial tomographic slices was obtained for porcine aortic specimens during one simulated cardiac cycle. With an image acquisition sampling interval of 16.5 ms, the time sequences of CT slices were able to quantify the expansion and contraction of the aortic wall during each phase of the cardiac cycle. Through superficial tagging of the adventitial surface of the specimens with wire markers, measurement of wall strain in specific circumferential sectors and subsequent calculations of localised dynamic elastic modulus were possible. The precision of circumferential measurements made from the CT images utilising a cluster-growing segmentation technique was approximately +/- 0.25 mm and allowed determination of the dynamic elastic modulus (E-dyn) with a precision of +/- 8 kPa. Dynamic elastic modulus was resolved as a function of the harmonics of the physiological pressure waveform and as a function of the angular position around the vessel circumference. Application of this dynamic CT (DCT) technique to seven porcine thoracic aortic specimens produced a circumferential average (over all frequency components) E-dyn of 373 +/- 29 kPa. This value was not statistically different (p < 0.05) from the values of 430 +/- 77 and 390 +/- 47 kPa obtained by uniaxial tensile testing and volumetric measurements respectively.

  407.   Yuen, PC, Feng, GC, and Zhou, JP, "A contour detection method: Initialization and contour model," PATTERN RECOGNITION LETTERS, vol. 20, pp. 141-148, 1999.

Abstract:   In this paper, a new contour detection method based on the snake model is developed and reported. The proposed method consists of two steps. The first step is to locate the initial snake contour and a novel initialization algorithm has been developed. In the second step, an improved snake algorithm is developed to locate the final contour(s). Images with single and multiple objects are selected to evaluate the capability of the proposed method and the results are encouraging. (C) 1999 Elsevier Science B.V. All rights reserved.

  408.   Shareef, N, Wang, DL, and Yagel, R, "Segmentation of medical images using LEGION," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 74-91, 1999.

Abstract:   Advances in visualization technology and specialized graphic workstations allow clinicians to virtually interact with anatomical structures contained within sampled medical-image datasets. A hindrance to the effective use of this technology is the difficult problem of image segmentation. In this paper, we utilize a recently proposed oscillator network called the locally excitatory globally inhibitory oscillator network (LEGION) whose ability to achieve fast synchrony with local excitation and desynchrony with global inhibition makes it an effective computational framework for grouping similar features and segregating dissimilar ones in an image. We extract an algorithm from LEGION dynamics and propose an adaptive scheme for grouping. We show results of applying the algorithm to two-dimensional (2-D) and three-dimensional (3-D) (volume) computerized tomography (CT) and magnetic resonance imaging (MRI) medical-image datasets. In addition, we compare our algorithm with other algorithms for medical-image segmentation, as well as with manual segmentation. LEGION's computational and architectural properties make it a promising approach for real-time medical-image segmentation.

  409.   Conforti, D, and De Luca, L, "Computer implementation of a medical diagnosis problem by pattern classification," FUTURE GENERATION COMPUTER SYSTEMS, vol. 15, pp. 287-292, 1999.

Abstract:   In this paper we present a software system which can aid the medical diagnostician in the diagnosis of breast cancers. The system has been developed on a "Windows 95" platform and provides a user friendly interface, made up of windows and visualization tools. An interesting and innovative feature is represented by the telemedicine configuration of the software system, which can be run in a remote fashion, exploiting, from some remote regions, the expertise and the clinical database available in advanced medical centers. A prototype version of the software system, named CAMD (computer aided medical diagnosis), is currently being tested and validated with the collaboration of the Cytopathology Department of the Cosenza General Hospital (Calabria, Italy). (C) 1999 Elsevier Science B.V. All rights reserved.

  410.   Kervrann, C, and Heitz, F, "Statistical deformable model-based segmentation of image motion," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 8, pp. 583-588, 1999.

Abstract:   We present a statistical method for the motion-based segmentation of deformable structures undergoing nonrigid movements. The proposed approach relies on two models describing the shape of interest, its variability, and its movement. The first model corresponds to a statistical deformable template that constrains the shape and its deformations. The second model is introduced to represent the optical flow field inside the deformable template. These two models are combined within a single probability distribution, which enables shape and motion estimates to be derived using a maximum likelihood approach. The method requires no manual initialization and is demonstrated on synthetic data and on a medical X-ray image sequence.

  411.   Casadei, S, and Mitter, S, "An efficient and provably correct algorithm for the multiscale estimation of image contours by means of polygonal lines," IEEE TRANSACTIONS ON INFORMATION THEORY, vol. 45, pp. 939-954, 1999.

Abstract:   A large portion of image contours is characterized by local properties such as sharp variations of the image intensity across the contour. The integration of local image descriptors estimated by using these local properties into curvilinear descriptors is a difficult problem from a theoretical viewpoint because of the combinatorially large number of possible curvilinear descriptors. To deal with this difficulty, the notion of compressible graphs is introduced and a contour data model is defined, leading to an efficient linear-time algorithm which provably recovers contours with an upper bound on the approximation error.

  412.   Yabuki, N, Matsuda, Y, Kimura, H, Fukui, Y, and Miki, S, "Region extraction using color feature and active net model in color image," IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, vol. E82A, pp. 466-472, 1999.

Abstract:   In this paper, we propose a method to detect a road sign from a road scene image in the daytime. In order to utilize color feature of sign efficiently, color distribution of sign is examined, and then color similarity map is constructed. Additionally, color similarity shown on the map is incorporated into image energy of an active net model. A road sign is extracted as if it is wrapped up in an active net. Some experimental results obtained by applying an active net to images are presented.

  413.   Sakaue, K, Amano, A, and Yokoya, N, "Optimisation approaches in computer vision and image processing," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E82D, pp. 534-547, 1999.

Abstract:   In this paper, the authors present general views of computer vision and image processing based on optimization. Relaxation and regularization in both broad and narrow senses are used in various fields and problems of computer vision and image processing, and they are currently being combined with general-purpose optimization algorithms. The principle and case examples of relaxation and regularization are discussed; the application of optimization to shape description that is a particularly important problem in the field is described; and the use of a genetic algorithm (GA) as a method of optimization is introduced.

  414.   Berger, MO, Wrobel-Dautcourt, B, Petitjean, S, and Simon, G, "Mixing synthetic and video images of an outdoor urban environment," MACHINE VISION AND APPLICATIONS, vol. 11, pp. 145-159, 1999.

Abstract:   Mixing video and computer-generated images is a new and promising area of research for enhancing reality. It can be used in all the situations when a complete simulation would not be easy to implement. Past work on the subject has relied for a large part on human intervention at key moments of the composition. In this paper, we show that if enough geometric information about the environment is available, then efficient tools developed in the computer vision literature can be used to build a highly automated augmented reality loop. We focus on outdoor urban environments and present an application for the visual assessment of a new lighting project of the bridges of Paris. We present a fully augmented 300-image sequence of a specific bridge, the Pont Neuf. Emphasis is put on the robust calculation of the camera position. We also detail the techniques used for matching 2D and 3D primitives and for tracking features over the sequence. Our system overcomes two major difficulties. First, it is capable of handling poor-quality images, resulting from the fact that images were shot at night since the goal was to simulate a new lighting system. Second, it can deal with important changes in viewpoint position and in appearance along the sequence. Throughout the paper, many results are shown to illustrate the different steps and difficulties encountered.

  415.   Denzler, J, and Niemann, H, "Active rays: Polar-transformed active contours for real-time contour tracking," REAL-TIME IMAGING, vol. 5, pp. 203-213, 1999.

Abstract:   In this paper we describe a new approach to contour extraction and tracking, which is based on the principles of active contour models and overcomes some of their shortcomings. We formally introduce active rays, describe the contour extraction as an energy minimization problem and discuss what active contours and active rays have in common. The main difference is that for active rays a unique ordering of the contour elements in the 2D image plane is given, which cannot be found for active contours. This is advantageous for predicting the contour elements' position and prevents crossings in the contour. Furthermore, another advantage is that instead of an energy minimization in the 2D image plane the minimization is reduced to a 1D search problem. The approach also shows any-time behavior, which is important with respect to real-time applications. Finally, the method allows for the management of multiple hypotheses of the object's boundary. This is an important aspect if concave contours are to be tracked. Results on real image sequences (tracking a toy train in a laboratory scene, tracking pedestrians in an outdoor scene) show the suitability of this approach for real-time object tracking in a closed loop between image acquisition and camera movement. The contour tracking can be done within the image frame rate (25 fps) on standard Unix workstations (HP 735) without any specialized hardware. (C) 1999 Academic Press.
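
To make the 1D reduction concrete, a minimal sketch (in Python, and not the authors' implementation) could sample a gradient image along radial rays from a reference point inside the object and pick, on each ray, the radius of maximum gradient magnitude; the reference point, ray count, and smoothing scale below are illustrative assumptions.

    # Illustrative "active rays" sketch: contour search reduced to a 1D
    # problem along radial rays; not the authors' implementation.
    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude, map_coordinates

    def contour_from_rays(image, center, n_rays=90, max_radius=100):
        grad = gaussian_gradient_magnitude(image.astype(float), sigma=2.0)
        cy, cx = center
        contour = []
        for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
            radii = np.arange(1, max_radius)
            ys = cy + radii * np.sin(theta)
            xs = cx + radii * np.cos(theta)
            profile = map_coordinates(grad, [ys, xs], order=1, mode='nearest')
            r_best = radii[int(np.argmax(profile))]   # 1D search along this ray
            contour.append((cy + r_best * np.sin(theta),
                            cx + r_best * np.cos(theta)))
        return np.array(contour)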

  416.   Cotin, S, Delingette, H, and Ayache, N, "Real-time elastic deformations of soft tissues for surgery simulation," IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, vol. 5, pp. 62-73, 1999.

Abstract:   In this paper, we describe a new method for surgery simulation including a volumetric model built from medical images and an elastic modeling of the deformations. The physical model is based on elasticity theory which suitably links the shape of deformable bodies and the forces associated with the deformation. A real-time computation of the deformation is possible thanks to a preprocessing of elementary deformations derived from a finite element method. This method has been implemented in a system including a force feedback device and a collision detection algorithm. The simulator works in real-time with a high resolution liver model.

  417.   Peckar, W, Schnorr, C, Rohr, K, and Stiehl, HS, "Parameter-free elastic deformation approach for 2D and 3D registration using prescribed displacements," JOURNAL OF MATHEMATICAL IMAGING AND VISION, vol. 10, pp. 143-162, 1999.

Abstract:   A parameter-free approach for non-rigid image registration based on elasticity theory is presented. In contrast to traditional physically-based numerical registration methods, no forces have to be computed from image data to drive the elastic deformation. Instead, displacements obtained with the help of mapping boundary structures in the source and target image are incorporated as hard constraints into elastic image deformation. As a consequence, our approach does not contain any parameters of the deformation model such as elastic constants. The approach guarantees the exact correspondence of boundary structures in the images assuming that correct input data are available. The implemented incremental method makes it possible to cope with large deformations. The theoretical background, the finite element discretization of the elastic model, and experimental results for 2D and 3D synthetic as well as real medical images are presented.

  418.   Oztop, E, Mulayim, AY, Atalay, V, and Yarman-Vural, F, "Repulsive attractive network for baseline extraction on document images," SIGNAL PROCESSING, vol. 75, pp. 1-10, 1999.

Abstract:   This paper describes a new framework, called repulsive attractive (RA) network for baseline extraction on document images. The RA network is an energy minimizing dynamical system, which interacts with the document text image through the attractive and repulsive forces defined over the network components and the document image. Experimental results indicate that the network can successfully extract the baselines under heavy noise and overlaps between the ascending and descending portions of the characters of adjacent lines. The proposed framework is applicable to a wide range of image processing applications, such as curve fitting, segmentation and thinning. (C) 1999 Elsevier Science B.V. All rights reserved.

  419.   Le Goualher, G, Procyk, E, Collins, DL, Venugopal, R, Barillot, C, and Evans, AC, "Automated extraction and variability analysis of sulcal neuroanatomy," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 206-217, 1999.

Abstract:   Systematic mapping of the variability in cortical sulcal anatomy is an area of increasing interest which presents numerous methodological challenges. To address these issues, we have implemented sulcal extraction and assisted labeling (SEAL) to automatically extract the two-dimensional (2-D) surface ribbons that represent the median axis of cerebral sulci and to neuroanatomically label these entities. To encode the extracted three-dimensional (3-D) cortical sulcal schematic topography (CSST) we define a relational graph structure composed of two main features: vertices (representing sulci) and arcs (representing the relationships between sulci). Vertices contain a parametric representation of the surface ribbon buried within the sulcus. Points on this surface are expressed in stereotaxic coordinates (i.e., with respect to a standardized brain coordinate system). For each of these vertices, we store length, depth, and orientation as well as anatomical attributes (e.g., hemisphere, lobe, sulcus type, etc.). Each arc stores the 3-D location of the junction between sulci as well as a list of its connecting sulci. Sulcal labeling is performed semiautomatically by selecting a sulcal entity in the CSST and selecting from a menu of candidate sulcus names. In order to help the user in the labeling task, the menu is restricted to the most likely candidates by using priors for the expected sulcal spatial distribution. These priors, i.e., sulcal probabilistic maps, were created from the spatial distribution of 34 sulci traced manually on 36 different subjects. Given these spatial probability maps, the user is provided with the likelihood that the selected entity belongs to a particular sulcus. The cortical structure representation obtained by SEAL is suitable for extracting statistical information about both the spatial and the structural composition of the cerebral cortical topography. This methodology allows for the iterative construction of successively more complete statistical models of the cerebral topography containing spatial distributions of the most important structures, their morphometrics, and their structural components.

  420.   Gurcan, MN, Koyuturk, M, Yildiz, HS, Cetin-Atalay, R, and Cetin, AE, "Identification of relative protein bands in polyacrylamide gel electrophoresis (PAGE) using a multi-resolution snake algorithm," BIOTECHNIQUES, vol. 26, pp. 1162-+, 1999.

Abstract:   In polyacrylamide gel electrophoresis (PAGE) image analysis, it is important to determine the percentage of the protein of interest in a protein mixture. This study presents reliable computer software to determine this percentage. The region of interest containing the protein band is detected using the snake algorithm. The iterative snake algorithm is implemented in a multi-resolution framework. The snake is initialized on a low-resolution image. Then, the final position of the snake at the low resolution is used as the initial position in the higher-resolution image. Finally, the area of the protein is estimated as the area enclosed by the final position of the snake.
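
The coarse-to-fine strategy above can be sketched with a generic snake implementation; the following Python fragment uses scikit-image's active_contour and a Gaussian pyramid, with arbitrary weights and pyramid depth, and is only an illustration of the idea rather than the published gel-analysis software.

    # Coarse-to-fine snake sketch: fit at the coarsest pyramid level, then
    # upscale the contour and refine at finer levels. Illustrative only.
    import numpy as np
    from skimage.transform import pyramid_gaussian
    from skimage.segmentation import active_contour

    def multiresolution_snake(image, init_snake, levels=3):
        pyramid = list(pyramid_gaussian(image, max_layer=levels - 1))
        snake = init_snake / (2 ** (levels - 1))   # map initial contour to the coarsest level
        for level in range(levels - 1, -1, -1):
            snake = active_contour(pyramid[level], snake,
                                   alpha=0.02, beta=1.0, gamma=0.01)
            if level > 0:
                snake = snake * 2.0                # propagate to the next finer level
        return snake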

  421.   Izquierdo, ME, "Disparity Segmentation analysis: Matching with an adaptive window and depth-driven segmentation," IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 9, pp. 589-607, 1999.

Abstract:   Most of the emerging content-based multimedia technologies are based on efficient methods to solve machine early vision tasks. Among other tasks, object segmentation is perhaps the most important problem in single image processing, whereas pixel-correspondence estimation is the crucial task in multiview image analysis. The solution of these two problems is the key technology for the development of the majority of leading-edge interactive video communication technologies and telepresence systems. In this paper, we present a robust framework combining pixel-correspondence estimation and image segmentation in video sequences taken simultaneously from different perspectives. An improved concept for stereo-image analysis based on block matching with a local adaptive window is introduced. The size and shape of the reference window is calculated adaptively according to the degree of reliability of disparities estimated previously. Considerable improvements are obtained precisely within object borders or image areas that become occluded by applying the proposed block-matching model. An initial object segmentation is obtained by merging neighboring sampling positions with disparity vectors of similar size and direction. Starting from this initial segmentation, true object borders are detected using a contour-matching algorithm. In this process, the contour of the initial segmentation is taken as a reference pattern, and the edges extracted from the original images, by applying a multiscale algorithm, are the candidates for the true object contour. The performance of the introduced methods has been verified by computer simulations using synthetic data and several natural stereo sequences.

  422.   Dougherty, L, Asmuth, JC, Blom, AS, Axel, L, and Kumar, R, "Validation of an optical flow method for tag displacement estimation," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 359-363, 1999.

Abstract:   We present a validation study of an optical-flow method for the rapid estimation of myocardial displacement in magnetic resonance tagged cardiac images. This registration and change visualization (RCV) software uses a hierarchical estimation technique to compute the flow field that describes the warping of an image of one cardiac phase into alignment with the next. This method overcomes the requirement of constant pixel intensity in standard optical-flow methods by preprocessing the input images to reduce any intensity bias which results from the reduction in stripe contrast throughout the cardiac cycle. To validate the method, SPAMM-tagged images were acquired of a silicon gel phantom with simulated rotational motion. The pixel displacement was estimated with the RCV method, and the error in pixel tracking was <4% at 1000 ms after application of the tags and after 30 degrees of rotation. An additional study was performed using a SPAMM-tagged multiphase slice of a canine left ventricle. The true displacement was determined using a previously validated active contour model (snakes). The error between methods was 6.7% at end systole. The RCV method has the advantage of tracking all pixels in the image in a substantially shorter period than the snakes method.
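
A dense, hierarchical optical-flow computation of this general kind can be sketched with OpenCV's Farneback estimator, used here purely as a stand-in for the RCV software; the file names and the histogram equalization that stands in for the intensity-bias preprocessing are assumptions.

    # Stand-in for a hierarchical optical-flow estimate between two tagged
    # cardiac phases; not the RCV software, and the file names are hypothetical.
    import cv2
    import numpy as np

    phase1 = cv2.imread("tagged_phase1.png", cv2.IMREAD_GRAYSCALE)
    phase2 = cv2.imread("tagged_phase2.png", cv2.IMREAD_GRAYSCALE)

    # crude intensity normalization to offset fading tag contrast
    phase1 = cv2.equalizeHist(phase1)
    phase2 = cv2.equalizeHist(phase2)

    flow = cv2.calcOpticalFlowFarneback(phase1, phase2, None,
                                        pyr_scale=0.5, levels=4, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.1,
                                        flags=0)
    dx, dy = flow[..., 0], flow[..., 1]            # per-pixel displacement field
    print("mean displacement (pixels):", np.hypot(dx, dy).mean())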

  423.   Rettmann, ME, Xu, CY, Pham, DL, and Prince, JL, "Automated segmentation of sulcal regions," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, MICCAI'99, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1679, pp. 158-167, 1999.

Abstract:   Automatic segmentation and identification of cortical sulci play an important role in the study of brain structure and function. In this work, a method is presented for the automatic segmentation of sulcal regions of cortex. Unlike previous methods that extract the sulcal spaces within the cortex, the proposed method extracts actual regions of the cortical surface that surround sulci. Sulcal regions are segmented from the medial surface as well as the lateral and inferior surfaces. The method first generates a depth map on the surface, computed by measuring the distance between the cortex and an outer "shrink-wrap" surface. Sulcal regions are then extracted using a hierarchical algorithm that alternates between thresholding and region growing operations. To visualize the buried regions of the segmented cortical surface, an efficient technique for mapping the surface to a sphere is proposed. Preliminary results are presented on the geometric analysis of sulcal regions for automated identification.

  424.   Frangi, AF, Niessen, WJ, Hoogeveen, RM, van Walsum, T, and Viergever, MA, "Quantitation of vessel morphology from 3D MRA," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, MICCAI'99, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1679, pp. 358-367, 1999.

Abstract:   Three dimensional magnetic resonance angiographic images (3D MRA) are routinely inspected using maximum intensity projections (MIP). However, accuracy of stenosis estimates based on projections is limited. Therefore, a method for quantitative 3D MRA is introduced. Linear vessel segments are modeled with a central vessel axis curve coupled to a vessel wall surface. First, the central vessel axis is determined. Subsequently, the vessel wall is segmented using knowledge of the acquisition process. The user interaction to initialize the model is performed in a 3D setting. The method is validated on a carotid bifurcation phantom and also illustrated on patient data.

  425.   Guo, YL, and Vemuri, BC, "Hybrid geometric active models for shape recovery in medical images," INFORMATION PROCESSING IN MEDICAL IMAGING, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1613, pp. 112-125, 1999.

Abstract:   In this paper, we propose extensions to a powerful geometric shape modeling scheme introduced in [14]. The extension allows the model to automatically cope with topological changes and for the first time, introduces the concept of a global shape into geometric/geodesic snake models. The ability to characterize global shape of an object using very few parameters facilitates shape learning and recognition. In this new modeling scheme, object shapes are represented using a parameterized function - called the generator - which accounts for the global shape of an object and the pedal curve/surface of this global shape with respect to a geometric snake to represent any local detail. Traditionally, pedal curves/surfaces are defined as the loci of the feet of perpendiculars to the tangents of the generator from a fixed point called the pedal point. We introduce physics-based control for shaping these geometric models by using distinct pedal points - lying on a snake - for each point on the generator. The model dubbed as a "snake pedal" allows for interactive manipulation via forces applied to the snake. Automatic topological changes of the model may be achieved by implementing the geometric active contour in a level-set framework. We demonstrate the applicability of this modeling scheme via examples of shape estimation from a variety of medical image data.

  426.   Frangi, AF, Niessen, WJ, Hoogeveen, RM, van Walsum, T, and Viergever, MA, "Model-based quantitation of 3-D magnetic resonance angiographic images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 946-956, 1999.

Abstract:   Quantification of the degree of stenosis or of vessel dimensions is important for diagnosis of vascular diseases and planning vascular interventions. Although diagnosis from three-dimensional (3-D) magnetic resonance angiograms (MRAs) is mainly performed on two-dimensional (2-D) maximum intensity projections, automated quantification of vascular segments directly from the 3-D dataset is desirable to provide accurate and objective measurements of the 3-D anatomy. A model-based method for quantitative 3-D MRA is proposed. Linear vessel segments are modeled with a central vessel axis curve coupled to a vessel wall surface. A novel image feature to guide the deformation of the central vessel axis is introduced. Subsequently, concepts of deformable models are combined with knowledge of the physics of the acquisition technique to accurately segment the vessel wall and compute the vessel diameter and other geometrical properties. The method is illustrated and validated on a carotid bifurcation phantom, with ground truth and medical experts as comparisons. Also, results on 3-D time-of-flight (TOF) MRA images of the carotids are shown. The approach is a promising technique to assess several geometrical vascular parameters directly on the source 3-D images, providing an objective mechanism for stenosis grading.

  427.   Toyama, K, and Hager, GD, "Incremental focus of attention for robust vision-based tracking," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 35, pp. 45-63, 1999.

Abstract:   We present the Incremental Focus of Attention (IFA) architecture for robust, adaptive, real-time motion tracking. IFA systems combine several visual search and vision-based tracking algorithms into a layered hierarchy. The architecture controls the transitions between layers and executes algorithms appropriate to the visual environment at hand: when conditions are good, tracking is accurate and precise; as conditions deteriorate, more robust, yet less accurate algorithms take over; when tracking is lost altogether, layers cooperate to perform a rapid search for the target and continue tracking. Implemented IFA systems are extremely robust to most common types of temporary visual disturbances. They resist minor visual perturbations and recover quickly after full occlusions, illumination changes, major distractions, and target disappearances. Analysis of the algorithm's recovery times is supported by simulation results and experiments on real data. In particular, examples show that recovery times after lost tracking depend primarily on the number of objects visually similar to the target in the field of view.

  428.   Ladret, P, Latombe, B, and Granada, F, "Active contour algorithm: An attractive tool for snow avalanche analysis," SIGNAL PROCESSING, vol. 79, pp. 197-204, 1999.

Abstract:   Image processing is increasingly used for the study of snow avalanches in order to prevent them. The study of the dynamics of snow avalanches has produced many numerical models. The difficulty of measuring the parameters provided by these models has prevented their validation by comparison with those of real phenomena. Image processing is a first approach to these validations. This study aims to determine and analyse the velocity of the envelope in the case of powder-snow avalanches. This work is based on active snake methods. In this paper, we present a new algorithm of active contours in order to analyse the moving front of snow avalanches. The algorithm uses an energy-minimising curve. The model developed takes avalanche characteristics and the nature of the images into account. The algorithm gives good results and we obtain a sequence of avalanche contours. (C) 1999 Elsevier Science B.V. All rights reserved.
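
For reference, methods of this family minimize a contour energy of the classical form introduced by Kass et al., with weights chosen per application; here the image term pulls the curve toward the avalanche front:

    \[
    E(v) = \int_0^1 \left[ \tfrac{1}{2}\left(\alpha\,|v'(s)|^2 + \beta\,|v''(s)|^2\right) + E_{\mathrm{image}}\big(v(s)\big) \right] ds,
    \qquad
    E_{\mathrm{image}}(v) \approx -\,|\nabla I(v)|^2,
    \]

where v(s) = (x(s), y(s)) is the contour, the alpha and beta terms penalize stretching and bending, and the image term attracts the curve to strong intensity gradients.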

  429.   Unser, M, "Splines - A perfect fit for signal and image processing," IEEE SIGNAL PROCESSING MAGAZINE, vol. 16, pp. 22-38, 1999.

  430.   Xu, XY, Long, Q, Collins, MW, Bourne, M, and Griffith, TM, "Reconstruction of blood flow patterns in human arteries," PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART H-JOURNAL OF ENGINEERING IN MEDICINE, vol. 213, pp. 411-421, 1999.

Abstract:   Local haemodynamic factors in large arteries are associated with the pathophysiology of cardiovascular diseases such as atherosclerosis and strokes. In search of these factors and their correlation with atheroma formation, quantitative haemodynamic data in realistic arterial geometry become crucial. At present no in vivo non-invasive technique is available that can provide accurate measurement of three-dimensional blood velocities and shear stresses in curved and branching sites of vessels where atherosclerotic plaques are found frequently. This paper presents a computer modelling technique which combines state-of-the-art computational fluid dynamics (CFD) with new noninvasive magnetic resonance imaging techniques to provide the complete haemodynamic data in 'real' arterial geometries. Using magnetic resonance angiographic and velocity images acquired from the aortic bifurcation of a healthy human subject, CFD simulations have been carried out and the predicted flow patterns demonstrate the non-planar-type flow characteristics found in experimental studies.

  431.   Aubert, G, and Blanc-Feraud, L, "Some remarks on the equivalence between 2D and 3D classical snakes and geodesic active contours," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 34, pp. 19-28, 1999.

Abstract:   Recently, Caselles et al. have shown the equivalence between a classical snake problem of Kass et al. and a geodesic active contour model. The PDE derived from the geodesic problem gives an evolution equation for active contours which is very powerful for image segmentation, since changes of topology are allowed using the level set implementation. However, in Caselles' paper the equivalence with classical snakes is only shown for 2D images and 1D curves, by using concepts of Hamiltonian theory which have no meaning for active surfaces. This paper proposes to examine the notion of equivalence and to revisit Caselles et al.'s arguments. A notion of equivalence is then introduced and shown for classical snakes and geodesic active contours in both the 2D (active contour) and 3D (active surface) cases.
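
Schematically (and with notation simplified from the cited papers), the two functionals whose equivalence is at issue are the classical snake energy with the rigidity term dropped and the weighted length of the geodesic model:

    \[
    E(C) = \alpha \int_0^1 |C'(q)|^2\, dq + \lambda \int_0^1 g\big(|\nabla I(C(q))|\big)^2\, dq,
    \qquad
    L_R(C) = \int_0^{L(C)} g\big(|\nabla I(C(s))|\big)\, ds,
    \]

with g a decreasing edge-stopping function such as g(r) = 1/(1 + r^2); minimizers of the first functional correspond, under the conditions discussed in these papers, to curves of minimal weighted length under the second.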

  432.   Salden, AH, Romeny, BMT, and Viergever, MA, "Linearised euclidean shortening flow of curve geometry," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 34, pp. 29-67, 1999.

Abstract:   The geometry of a space curve is described in terms of a Euclidean invariant frame field, metric, connection, torsion and curvature. Here the torsion and curvature of the connection quantify the curve geometry. In order to retain a stable and reproducible description of that geometry, such that it is slightly affected by non-uniform protrusions of the curve, a linearised Euclidean shortening flow is proposed. (Semi)-discretised versions of the flow subsequently physically realise a concise and exact (semi-)discrete curve geometry. Imposing special ordering relations the torsion and curvature in the curve geometry can be retrieved on a multi-scale basis not only for simply closed planar curves but also for open, branching, intersecting and space curves of non-trivial knot type. In the context of the shortening flows we revisit the maximum principle, the semi-group property and the comparison principle normally required in scale-space theories. We show that our linearised flow satisfies an adapted maximum principle, and that its Green's functions possess a semi-group property. We argue that the comparison principle in the case of knots can obstruct topological changes being in contradiction with the required curve simplification principle. Our linearised flow paradigm is not hampered by this drawback; all non-symmetric knots tend to trivial ones being infinitely small circles in a plane. Finally, the differential and integral geometry of the multi-scale representation of the curve geometry under the flow is quantified by endowing the scale-space of curves with an appropriate connection, and calculating related torsion and curvature aspects. This multi-scale modern geometric analysis forms therewith an alternative for curve description methods based on entropy scale-space theories.

  433.   Weerasinghe, C, Yan, H, and Ji, LL, "A fast method for estimation of object rotation function in MRI using a similarity criterion among k-space overlap data," SIGNAL PROCESSING, vol. 78, pp. 215-230, 1999.

Abstract:   A major obstacle to the success of post-processing artifact correction techniques in magnetic resonance imaging (MRI) is the scarcity of reliable motion estimation algorithms. Most on-line motion estimation schemes demand patient preparation, modifications to standard spin-echo pulse sequences and increased scanning times. Therefore, off-line motion estimation algorithms have gained interest in the research arena. However, the existing algorithms are plagued by high computational and time demands that restrict the estimation capability to only a few motion parameters. This paper presents an efficient off-line motion estimation algorithm with applications to in-plane rotational motion artifact correction in MRI. The algorithm is based on maximizing the similarity among the k-space data subjected to angular overlap. The initial guesses are derived from measuring projection width of X-directional inverse Fourier transforms of the acquired k-space views. Simulation studies involving stepwise and continuous rotation show that the proposed method can accurately estimate rotation angles corresponding to each view. This method has been incorporated in a rotational motion artifact correction scheme, previously developed by the authors, producing successful results. (C) 1999 Elsevier Science B.V. All rights reserved.

  434.   Aletras, AH, Balaban, RS, and Wen, H, "High-resolution strain analysis of the human heart with fast-DENSE," JOURNAL OF MAGNETIC RESONANCE, vol. 140, pp. 41-57, 1999.

Abstract:   Single breath-hold displacement data from the human heart were acquired with fast-DENSE (fast displacement encoding with stimulated echoes) during systolic contraction at 2.5 x 2.5 mm in-plane resolution. Encoding strengths of 0.86-1.60 mm/pi were utilized in order to extend the dynamic range of the phase measurements and minimize effects of physiologic and instrument noise. The noise level in strain measurements for both contraction and dilation corresponded to a strain value of 2.8%. In the human heart, strain analysis has sufficient resolution to reveal transmural variation across the left ventricular wall. Data processing required minimal user intervention and provided a rapid quantitative feedback. The intrinsic temporal integration of fast-DENSE achieves high accuracy at the expense of temporal resolution.

  435.   Cagnoni, S, Dobrzeniecki, AB, Poli, R, and Yanch, JC, "Genetic algorithm-based interactive segmentation of 3D medical images," IMAGE AND VISION COMPUTING, vol. 17, pp. 881-895, 1999.

Abstract:   This article describes a method for evolving adaptive procedures for the contour-based segmentation of anatomical structures in 3D medical data sets. With this method, the user first manually traces one or more 2D contours of an anatomical structure of interest on parallel planes arbitrarily cutting the data set. Such contours are then used as training examples for a genetic algorithm to evolve a contour detector. By applying the detector to the rest of the image sequence it is possible to obtain a full segmentation of the structure. The same detector can then be used to segment other image sequences of the same sort. Segmentation is driven by a contour-tracking strategy that relies on an elastic-contour model whose parameters are also optimized by the genetic algorithm. We report results obtained on a software-generated phantom and on real tomographic images of different sorts. (C) 1999 Elsevier Science B.V. All rights reserved.

  436.   Sakalli, M, Yan, H, and Fu, A, "A region-based scheme using RKLT and predictive classified vector quantization," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 75, pp. 269-280, 1999.

Abstract:   This paper proposes a compression scheme for face profile images based on three stages: modelling, transformation, and a partially predictive classified vector quantization (CVQ) stage. The modelling stage employs deformable templates in the localisation of salient features of face images and in the normalization of the image content. The second stage uses a dictionary of feature-bases trained for profile face images to diagonalize the image blocks. At this stage, all normalized training and test images are spatially clustered (objectively) into four subregions according to their energy content, and the residuals of the most important clusters are further clustered (subjectively) in the spectral domain, to exploit spectral redundancies. The feature-basis functions are established with the region-based Karhunen-Loeve transform (RKLT) of clustered image blocks. Each image block is matched with a representative of near-best basis functions. A predictive approach is employed for mid-energy clusters, in both stages of the search for a basis and for a codeword from the range of its cluster. The proposed scheme employs one stage of a cascaded region-based KLT-SVD and CVQ complex, followed by residual VQ stages for subjectively important regions. The first dictionary of feature-bases is dedicated to the main content of the image and the second is dedicated to the residuals. The proposed scheme is evaluated on a set of human face images. (C) 1999 Academic Press.

  437.   Astrom, K, Cipolla, R, and Giblin, P, "Generalised epipolar constraints," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 33, pp. 51-72, 1999.

Abstract:   In this paper we will discuss structure and motion problems for curved surfaces. These will be studied using the silhouettes or apparent contours in the images. The problem of determining camera motion from the apparent contours of curved three-dimensional surfaces is studied. It will be shown how special points, called epipolar tangency points or frontier points, can be used to solve this problem. A generalised epipolar constraint is introduced, which applies to points, curves, as well as to apparent contours of surfaces. The theory is developed for both continuous and discrete motion, known and unknown orientation, calibrated and uncalibrated, perspective, weak perspective and orthographic cameras. Results of an iterative scheme to recover the epipolar line structure from real image sequences, using only the outlines of curved surfaces, are presented. A statistical evaluation is performed to estimate the stability of the solution. It is also shown how the motion of the camera from a sequence of images can be obtained from the relative motion between image pairs.

  438.   Xu, CY, Pham, DL, Rettmann, ME, Yu, DN, and Prince, JL, "Reconstruction of the human cerebral cortex from magnetic resonance images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 467-480, 1999.

Abstract:   Reconstructing the geometry of the human cerebral cortex from MR images is an important step in both brain mapping and surgical path planning applications. Difficulties with imaging noise, partial volume averaging, image intensity inhomogeneities, convoluted cortical structures, and the requirement to preserve anatomical topology make the development of accurate automated algorithms particularly challenging. In this paper we address each of these problems and describe a systematic method for obtaining a surface representation of the geometric central layer of the human cerebral cortex. Using fuzzy segmentation, an isosurface algorithm, and a deformable surface model, the method reconstructs the entire cortex with the correct topology, including deep convoluted sulci and gyri. The method is largely automated and its results are robust to imaging noise, partial volume averaging, and image intensity inhomogeneities. The performance of this method is demonstrated both qualitatively and quantitatively, and the results of its application to six subjects and one simulated MR brain volume are presented.
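
The isosurface step of such a pipeline can be illustrated with marching cubes applied to a fuzzy gray-matter membership volume (a minimal sketch using scikit-image; the membership array, iso-level, and voxel spacing are assumptions, and the topology-preserving deformable-surface refinement is not shown):

    # Extract a triangulated central-layer surface from a fuzzy membership
    # volume; illustrative sketch only, not the authors' pipeline.
    from skimage import measure

    def central_surface(membership_volume, level=0.5, spacing=(1.0, 1.0, 1.0)):
        # membership_volume: 3D array of gray-matter membership values in [0, 1]
        verts, faces, normals, values = measure.marching_cubes(
            membership_volume, level=level, spacing=spacing)
        return verts, faces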

  439.   Lee, JD, "Wavelet transform for 3-D reconstruction from series sectional medical images," MATHEMATICAL AND COMPUTER MODELLING, vol. 30, pp. 1-13, 1999.

Abstract:   It is well known that the 3-D shape of an organ can be reconstructed from a series of cross-sectional images of the human body acquired using ultrasound, Computed Tomography (CT), or Magnetic Resonance Imaging (MRI). From the reconstructed images, qualitative evaluation, quantitative analysis, and other further clinical research become possible. In this paper, a novel interpolation technique that utilizes the whole object contour information, with no need for feature matching, is proposed for object reconstruction. In the method, multiresolution analysis of the object contour of each slice is carried out using the Wavelet Transform (WT). The primary contour of the inter-slices is reconstructed from the coarsest scale information of the slices, while the refined contours are estimated by taking into account the lower scale information of the slices. To evaluate the performance of the proposed method and the traditional method, a performance measure is proposed and experimental results are also included. (C) 1999 Elsevier Science Ltd. All rights reserved.

  440.   Knoll, C, Alcaniz, M, Grau, V, Monserrat, C, and Juan, MC, "Outlining of the prostate using snakes with shape restrictions based on the wavelet transform (Doctoral Thesis: Dissertation)," PATTERN RECOGNITION, vol. 32, pp. 1767-1781, 1999.

Abstract:   This paper considers the problem of deformable contour initialization and modeling for segmentation of the human prostate in medical images. We propose a new technique for elastic deformation restriction to particular object shapes of any closed planar curve using localized multiscale contour parameterization based on the 1D dyadic wavelet transform. For this purpose we define internal curve deformation forces as a result of multiscale parametrical contour analysis. The form restricted contour deformation and its initialization by template matching are performed in a coarse to fine segmentation process based on a multiscale image edge representation containing the important edges of the image at various scales. The method is useful for 3D conformal radiotherapy planning and automatic prostate volume measurements in ultrasonographic diagnosis. (C) 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
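
The kind of multiscale contour parameterization used for such shape restrictions can be illustrated by a dyadic wavelet decomposition of the contour's coordinate functions (a sketch with PyWavelets; the wavelet family, decomposition depth, and the coarse-only reconstruction are assumptions, not the authors' formulation):

    # Multiscale parameterization of a contour via a 1D dyadic wavelet
    # decomposition of its coordinate functions; illustrative sketch only.
    import numpy as np
    import pywt

    def contour_multiscale(x, y, wavelet='db4', level=4):
        coeffs_x = pywt.wavedec(x, wavelet, mode='periodization', level=level)
        coeffs_y = pywt.wavedec(y, wavelet, mode='periodization', level=level)
        # keep only the coarsest approximation to obtain a global-shape contour
        coarse_x = [coeffs_x[0]] + [np.zeros_like(c) for c in coeffs_x[1:]]
        coarse_y = [coeffs_y[0]] + [np.zeros_like(c) for c in coeffs_y[1:]]
        x_coarse = pywt.waverec(coarse_x, wavelet, mode='periodization')
        y_coarse = pywt.waverec(coarse_y, wavelet, mode='periodization')
        return x_coarse, y_coarse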

  441.   Huang, WC, Hsu, CC, Lee, C, and Lai, PH, "Recurrent nasal tumor detection by dynamic MRI," IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE, vol. 18, pp. 100-105, 1999.

  442.   Kozerke, S, Botnar, R, Oyre, S, Scheidegger, MB, Pedersen, EM, and Boesiger, P, "Automatic vessel segmentation using active contours in cine phase contrast flow measurements," JMRI-JOURNAL OF MAGNETIC RESONANCE IMAGING, vol. 10, pp. 41-51, 1999.

Abstract:   The segmentation of images obtained by cine magnetic resonance (MR) phase contrast velocity mapping using manual or semi-automated methods is a time consuming and observer-dependent process that still hampers the use of flow quantification in a clinical setting. A fully automatic segmentation method based on active contour model algorithms for defining vessel boundaries has been developed. For segmentation, the phase image, in addition to the magnitude image, is used to address image distortions frequently seen in the magnitude image of disturbed flow fields. A modified definition for the active contour model is introduced to reduce the influence of missing or spurious edge information of the vessel wall. The method was evaluated on flow phantom data and on in vivo images acquired in the ascending aorta of humans. Phantom experiments resulted in an error of 0.8% in assessing the luminal area of a flow phantom equipped with an artificial heart valve. Blinded evaluation of the volume flow rates from automatic vs. manual segmentation of gradient echo (FFE) phase contrast images obtained in vivo resulted in a mean difference of -0.9 +/- 3%. The mean difference from automatic vs. manual segmentation of images acquired with a hybrid phase contrast sequence (TFEPI) within a single breath-hold was -0.9 +/- 6%. J. Magn. Reson. Imaging 1999: 10:41-51. (C) 1999 Wiley-Liss, Inc.

  443.   Zhao, BS, Yankelevitz, D, Reeves, A, and Henschke, C, "Two-dimensional multi-criterion segmentation of pulmonary nodules on helical CT images," MEDICAL PHYSICS, vol. 26, pp. 889-895, 1999.

Abstract:   A multi-criterion algorithm for automatic delineation of small pulmonary nodules on helical CT images has been developed. In a slice-by-slice manner, the algorithm uses density, gradient strength, and a shape constraint of the nodule to automatically control the segmentation process. The multiple criteria applied to separation of the nodule from its surrounding structures in the lung are based on the fact that typical small pulmonary nodules on CT images have high densities, show a distinct difference in density at the boundary, and tend to be compact in shape. Prior to the segmentation, a region-of-interest containing the nodule is manually selected on the CT images. Then the segmentation process begins with a high density threshold that is decreased stepwise, resulting in expansion of the area of nodule candidates. This progressive region growing approach is terminated when subsequent thresholds provide either a diminished gradient strength of the nodule contour or significant changes of nodule shape from the compact form. The shape criterion added to the algorithm can effectively prevent high density surrounding structures (e.g., blood vessels) from being falsely segmented as nodule, which occurs frequently when only the gradient strength criterion is applied. This has been demonstrated by examples given in the Results section. The algorithm's accuracy has been compared with that of the radiologist's manual segmentation, and no statistically significant difference has been found between the nodule areas delineated by the radiologist and those obtained by the multi-criterion algorithm. The improved nodule boundary allows for more accurate assessment of nodule size and hence nodule growth over a short time period, and for better characterization of nodule edges. This information is useful in determining the malignancy status of a nodule at an early stage and thus provides significant guidance for further clinical management. (C) 1999 American Association of Physicists in Medicine.
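
The stepwise thresholding with gradient and compactness stopping rules can be summarized by a skeleton like the following (an illustration only; the threshold schedule, compactness bound, and seed handling are assumptions rather than the published parameters):

    # Progressive region growing by lowering a density threshold, stopping
    # when the boundary gradient weakens or the shape loses compactness.
    # Illustrative skeleton, not the published algorithm.
    import numpy as np
    from scipy import ndimage

    def segment_nodule(roi, seed, thresholds, compactness_min=0.4):
        grad = ndimage.gaussian_gradient_magnitude(roi.astype(float), sigma=1.0)
        best_mask, prev_edge_strength = None, -np.inf
        for t in sorted(thresholds, reverse=True):       # high -> low density
            labeled, _ = ndimage.label(roi >= t)
            if labeled[seed] == 0:                        # nodule not above this threshold yet
                continue
            mask = labeled == labeled[seed]
            edge = mask ^ ndimage.binary_erosion(mask)
            perimeter = edge.sum()
            if perimeter == 0:
                continue
            edge_strength = grad[edge].mean()
            compactness = 4.0 * np.pi * mask.sum() / (perimeter ** 2)
            if edge_strength < prev_edge_strength or compactness < compactness_min:
                break                                     # weaker boundary or non-compact shape
            best_mask, prev_edge_strength = mask, edge_strength
        return best_mask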

  444.   Levienaise-Obadia, B, and Gee, A, "Adaptive segmentation of ultrasound images," IMAGE AND VISION COMPUTING, vol. 17, pp. 583-588, 1999.

Abstract:   This article describes a novel approach to the semi-automatic segmentation of ultrasound images. Assisted segmentation is particularly attractive when processing many slices through a 3D data set, and even though fully automatic segmentation would be ideal, this is currently not feasible given the quality of ultrasound images. The algorithm developed in this article is based on the active contour paradigm, with several important modifications. The contour is attracted to boundaries described locally by statistical models: this allows for the fact that the definition of what constitutes a boundary may vary around the boundary's length. The statistical models are trained on-the-fly by observing boundaries accepted by the operator. In this way, operator intervention in a particular slice is sensibly exploited to reduce the need for intervention in subsequent slices. The resulting algorithm provides fast, reliable and verifiable segmentation of in vivo ultrasound images. (C) 1999 Elsevier Science B.V. All rights reserved.

  445.   Peterfreund, N, "Robust tracking of position and velocity with Kalman snakes," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 21, pp. 564-569, 1999.

Abstract:   A new Kalman-filter based active contour model is proposed for tracking of nonrigid objects in combined spatio-velocity space. The model employs measurements of gradient-based image potential and of optical-flow along the contour as system measurements. In order to improve robustness to image clutter and to occlusions, an optical-flow based detection mechanism is proposed. The method detects and rejects spurious measurements which are not consistent with previous estimation of image motion.
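
The combined position-velocity filtering can be illustrated, for a single contour node, by a constant-velocity Kalman step with a Mahalanobis gate that discards measurements inconsistent with the predicted motion (a generic sketch; the noise levels and gate threshold are assumptions, not the model's published parameters):

    # Constant-velocity Kalman step for one contour node in (x, y, vx, vy).
    # Generic sketch; noise covariances and gate are illustrative only.
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0],      # state transition
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.eye(4)                     # measure position (image potential) and velocity (optical flow)
    Q = 0.01 * np.eye(4)              # process noise
    R = np.diag([1.0, 1.0, 4.0, 4.0]) # measurement noise

    def kalman_step(x, P, z):
        x_pred = F @ x                            # predict
        P_pred = F @ P @ F.T + Q
        innov = z - H @ x_pred
        S = H @ P_pred @ H.T + R
        if innov @ np.linalg.solve(S, innov) > 16.0:
            return x_pred, P_pred                 # reject spurious measurement (clutter/occlusion)
        K = P_pred @ H.T @ np.linalg.inv(S)       # update
        x_new = x_pred + K @ innov
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new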

  446.   Mayer, H, "Automatic object extraction from aerial imagery - A survey focusing on buildings," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 74, pp. 138-149, 1999.

Abstract:   This paper surveys the state-of-the-art automatic object extraction techniques from aerial imagery. It focuses on building extraction approaches, which present the majority of the work in this area. After proposing well-defined criteria for their assessment, characteristic approaches are selected and assessed, based on their models and strategies. The assessment gives rise to a combined model and strategy covering the current knowledge in the field. The model comprises: the derivation of characteristic properties from the function of objects; three-dimensional geometry and material properties; scales and levels of abstraction/aggregation; local and global context. The strategy consists of grouping, focusing on different scales, context-based control and generation of evidence from structures of parts, and fusion of data and algorithms. Many ideas which have not been explored in depth lead to promising directions for further research. (C) 1999 Academic Press.

  447.   Malassiotis, S, and Strintzis, MG, "Tracking the left ventricle in echocardiographic images by learning heart dynamics," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 18, pp. 282-290, 1999.

Abstract:   In this paper a temporal learning-filtering procedure is applied to refine the left ventricle (LV) boundary detected by an active-contour model. Instead of making prior assumptions about the LV shape or its motion, this information is incrementally gathered directly from the images and is exploited to achieve more coherent segmentation. A Hough transform technique is used to find an initial approximation of the object boundary at the first frame of the sequence. Then, an active-contour model is used in a coarse-to-fine framework, for the estimation of a noisy LV boundary. The PCA transform is applied to form a reduced ordered orthonormal basis of the LV deformations based on a sequence of noisy boundary observations. Then this basis is used to constrain the motion of the active contour in subsequent frames, and thus provide more coherent identification. Results of epicardial boundary identification in E-mode images are presented.

  448.   Udupa, JK, "Three-dimensional visualization and analysis methodologies: A current perspective," RADIOGRAPHICS, vol. 19, pp. 783-806, 1999.

Abstract:   Three-dimensional (3D) imaging was developed to provide both qualitative and quantitative information about an object or object system from images obtained with multiple modalities including digital radiography, computed tomography, magnetic resonance imaging, positron emission tomography, single photon emission computed tomography, and ultrasonography. Three-dimensional imaging operations may be classified under four basic headings: preprocessing, visualization, manipulation, and analysis. Preprocessing operations (volume of interest, filtering, interpolation, registration, segmentation) are aimed at extracting or improving the extraction of object information in given images. Visualization operations facilitate seeing and comprehending objects in their full dimensionality and may be either scene-based or object-based. Manipulation may be either rigid or deformable and allows alteration of object structures and of relationships between objects. Analysis operations, like visualization operations, may be either scene-based or object-based and deal with methods of quantifying object information. There are many challenges involving matters of precision, accuracy, and efficiency in 3D imaging. Nevertheless, 3D imaging is an exciting technology that promises to offer an expanding number and variety of applications.

  449.   Kuijer, JPA, Marcus, JT, Gotte, MJW, van Rossum, AC, and Heethaar, RM, "Simultaneous MRI tagging and through-plane velocity quantification: A three-dimensional myocardial motion tracking algorithm," JMRI-JOURNAL OF MAGNETIC RESONANCE IMAGING, vol. 9, pp. 409-419, 1999.

Abstract:   A tracking algorithm was developed for calculation of three-dimensional point-specific myocardial motion. The algorithm was designed for images acquired with simultaneous magnetic resonance imaging (MRI) grid tagging and through-plane velocity quantification. The tagging grid provided the in-plane motion while the velocity quantification measured the through-plane motion. In four healthy volunteers, the in vivo performance was evaluated by comparing the systolic through-plane displacement with the displacement of tagging-grid intersections in long-axis images. The correlation coefficient was 0.93 (P < 0.001, N = 183). A t-test for paired samples revealed a small underestimation of the through-plane displacement by 0.04 +/- 0.09 cm (mean +/- SD, P < 0.001) on an average displacement of 0.77 +/- 0.23 cm toward the apex. The authors conclude that three-dimensional point-specific motion tracking based on simultaneous tagging and velocity quantification is competitive with other methods such as tagging in mutually orthogonal image planes or quantification of three orthogonal velocity components. (C) 1999 Wiley-Liss, Inc.

  450.   Shufelt, JA, "Performance evaluation and analysis of monocular building extraction from aerial imagery," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 21, pp. 311-326, 1999.

Abstract:   Research in monocular building extraction from aerial imagery has neglected performance evaluation in three areas: unbiased metrics for quantifying detection and delineation performance, an evaluation methodology for applying these metrics to a representative body of test imagery, and an approach for understanding the impact of image and scene content on building extraction algorithms. This paper addresses these areas with an end-to-end performance evaluation of four existing monocular building extraction systems, using image space and object space-based metrics on 83 test images of 18 sites. This analysis is supplemented by an examination of the effects of image obliquity and object complexity on system performance, as well as a case study on the effects of edge fragmentation. This widely applicable performance evaluation approach highlights the consequences of various traditional assumptions about camera geometry, image content, and scene structure, and demonstrates the utility of rigorous photogrammetric object space modeling and primitive-based representations for building extraction.

  451.   Schnabel, JA, and Arridge, SR, "Active shape focusing," IMAGE AND VISION COMPUTING, vol. 17, pp. 419-428, 1999.

Abstract:   This paper presents a framework for hierarchical shape description which enables quantitative and qualitative shape studies at multiple levels of image detail. It allows the global object shape to be captured at higher image scales and focused down to finer details at decreasing levels of image scale. A multi-scale active contour model, whose energy function is regularized with respect to underlying geometric image structure in a natural scale setting, is developed for the purpose of implicit shape extraction or regularization with respect to scale. The resulting set of shapes is formulated and visualized as a multi-scale shape stack for the investigation of shape changes across scales. We demonstrate the functionality of this framework by applying it to a set of true fractal structures, and to 3D brain MRI. The framework is shown to be capable of recovering the fractal dimension of the fractal shapes directly from their embedding image context. The equivalent measure on the medical images and its potential for medical shape analysis is discussed. (C) 1999 Elsevier Science B.V. All rights reserved.

  452.   Morris, DT, and Donnison, C, "Identifying the neuroretinal rim boundary using dynamic contours," IMAGE AND VISION COMPUTING, vol. 17, pp. 169-174, 1999.

Abstract:   The neuroretinal rim forms the outer boundary of the optic nerve head: that region of the retina where blood vessels and nerve fibres pass out of the eye. It is normally a circular structure, but is known to change shape due to nerve damage in glaucoma. Its shape can therefore be used in the diagnosis and assessment of the treatment of this disease. Automatically finding the boundary would be useful as it would allow reliable quantitative shape measurements to be made. However, it is a difficult problem as the boundary is ill defined and partially obscured by blood vessels. In this paper we present an algorithm that successfully identifies the boundary using dynamic contours (snakes). The success of the algorithm is very dependent on preprocessing the image to enhance the contrast between the retina and the optic nerve head. We therefore describe the preprocessing in some detail. The algorithm has been tested on numerous images and found to be successful, as judged by an optometrist, in every case. (C) 1999 Elsevier Science B.V. All rights reserved.
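
One plausible form of the contrast-enhancing preprocessing emphasized above (the authors' exact pipeline is not reproduced here) is adaptive histogram equalization of a single colour channel before the snake is run; the file name and parameters are hypothetical.

    # Hypothetical preprocessing sketch: enhance optic-nerve-head contrast
    # with adaptive histogram equalization before fitting the snake.
    from skimage import exposure, io

    fundus = io.imread("fundus.png")              # hypothetical retinal image
    green = fundus[..., 1] / 255.0                # a single channel; green is commonly used for retinal contrast
    enhanced = exposure.equalize_adapthist(green, clip_limit=0.02)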

  453.   Horritt, MS, "A statistical active contour model for SAR image segmentation," IMAGE AND VISION COMPUTING, vol. 17, pp. 213-224, 1999.

Abstract:   A statistical active contour model is developed for segmenting synthetic aperture radar (SAR) images into regions of homogeneous speckle statistics. The technique measures both the local tone and texture along the contour so that no smoothing across segment boundaries occurs. A smooth contour is favoured by the inclusion of a curvature constraint, whose weight is determined analytically by considering the model energy balance. The algorithm spawns smaller snakes to represent multiply connected regions. The algorithm is capable of segmenting noisy SAR imagery whilst accurately depicting (to within 1 pixel) segment boundaries. (C) 1999 Elsevier Science B.V. All rights reserved.

 
2000

  454.   Araabi, BN, Kehtarnavaz, N, McKinney, T, Hillman, G, and Wursig, B, "A string matching computer-assisted system for dolphin photoidentification," ANNALS OF BIOMEDICAL ENGINEERING, vol. 28, pp. 1269-1279, 2000.

Abstract:   This paper presents a syntactic/semantic string representation scheme as well as a string matching method as part of a computer-assisted system to identify dolphins from photographs of their dorsal fins. A low-level string representation is constructed from the curvature function of a dolphin's fin trailing edge, consisting of positive and negative curvature primitives. A high-level string representation is then built over the low-level string via merging appropriate groupings of primitives in order to have a representation less sensitive to curvature fluctuations or noise. A family of syntactic/semantic distance measures between two strings is introduced. A composite distance measure is then defined and used as a dissimilarity measure for database search, highlighting both the syntax (structure or sequence) and semantic (attribute or feature) differences. The syntax consists of an ordered sequence of significant protrusions and intrusions on the edge, while the semantics consist of seven attributes extracted from the edge and its curvature function. The matching results are reported for a database of 624 images corresponding to 164 individual dolphins. The identification results indicate that the developed string matching method performs better than the previous matching methods including dorsal ratio, curvature, and curve matching. The developed computer-assisted system can help marine mammalogists in their identification of dolphins, since it allows them to examine only a handful of candidate images instead of the currently used manual searching of the entire database. (C) 2000 Biomedical Engineering Society. [S0090-6964(00)00510-5].
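
The low-level string construction can be sketched as follows: smooth the trailing-edge curve, compute its signed curvature, and run-length encode the curvature sign into positive/negative primitives (an illustration only; the smoothing scale, the merging of runs, and the single attribute shown are assumptions rather than the published scheme).

    # Turn a fin trailing-edge contour into a string of '+'/'-' curvature
    # primitives with one simple attribute per primitive; illustrative only.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def curvature_string(x, y, sigma=5.0):
        xs = gaussian_filter1d(x.astype(float), sigma)   # smooth the open curve
        ys = gaussian_filter1d(y.astype(float), sigma)
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
        signs = np.sign(kappa)
        symbols, attrs, start = [], [], 0
        for i in range(1, len(signs) + 1):               # run-length encode the sign
            if i == len(signs) or signs[i] != signs[start]:
                symbols.append('+' if signs[start] > 0 else '-')
                attrs.append(np.abs(kappa[start:i]).sum())   # one crude semantic attribute
                start = i
        return ''.join(symbols), attrs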

  455.   Froumentin, M, Labrosse, F, and Willis, P, "A vector-based representation for image warping," COMPUTER GRAPHICS FORUM, vol. 19, pp. C419-+, 2000.

Abstract:   A method for image analysis, representation and re-synthesis is introduced. Unlike other schemes it is not pixel based but rather represents a picture as vector data, from which an altered version of the original image can be rendered. Representing an image as vector data allows performing operations such as zooming, retouching or colourising, avoiding common problems associated with pixel image manipulation. This paper brings together methods from the areas of computer vision, image compositing and image based rendering to prove that this type of image representation is a step towards accurate and efficient image manipulation.

  456.   Sarti, A, de Solorzano, CO, Lockett, S, and Malladi, R, "A geometric model for 3-D confocal image analysis," IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, vol. 47, pp. 1600-1609, 2000.

Abstract:   In this paper, we use partial-differential-equation-based filtering as a preprocessing and postprocessing strategy for computer-aided cytology. We wish to accurately extract and classify the shapes of nuclei from confocal microscopy images, which is a prerequisite to an accurate quantitative intranuclear (genotypic and phenotypic) and internuclear (tissue structure) analysis of tissue and cultured specimens. First, we study the use of a geometry-driven edge-preserving image smoothing mechanism before nuclear segmentation. We show how this filter outperforms other widely-used filters in that it provides higher edge fidelity. Then we apply the same filter, with a different initial condition, to smooth nuclear surfaces and obtain sub-pixel accuracy. Finally we use another instance of the geometrical filter to correct for misinterpretations of the nuclear surface by the segmentation algorithm. Our pre-filtering and post-filtering nicely complement our initial segmentation strategy, in that they provide substantial and measurable improvement in the definition of the nuclear surfaces.

  457.   Fu, Y, Erdem, AT, and Tekalp, AM, "Tracking visible boundary of objects using occlusion adaptive motion snake," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 2051-2060, 2000.

Abstract:   We propose a novel technique for tracking the visible boundary of a video object in the presence of occlusion. Starting with an initial contour that is interactively specified by the user and may be automatically refined by using intraenergy terms, the proposed technique employs piecewise contour prediction using local motion and color information on both sides of the contour segment, and contour snapping using scale-invariant intraframe and interframe energy terms. The piecewise (segmented) nature of the contour prediction scheme and modeling of the motion on both sides of each contour segment enable accurate determination of whether and where the tracked boundary is occluded by another object. The proposed snake energy terms are associated with contour segments (as opposed to node points) and they are scale/resolution independent to allow multi-resolution contour tracking without the need to retune the weights of the energy terms at each resolution level. This facilitates contour prediction at coarse resolution and snapping at fine resolution with high accuracy. Experimental results are provided to illustrate the performance of the proposed occlusion detection algorithm and the novel snake energy terms that enable visible boundary tracking in the presence of occlusion.

  458.   Vanegas, O, Tokuda, K, and Kitamura, T, "Lip location normalized training for visual speech recognition," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E83D, pp. 1969-1977, 2000.

Abstract:   This paper describes a method to normalize the lip position for improving the performance of a visual-information-based speech recognition system. Basically, there are two types of information useful in speech recognition processes; the first one is the speech signal itself and the second one is the visual information from the lips in motion. This paper tries to solve some problems caused by using images from the lips in motion, such as the effect produced by the variation of the lip location. The proposed lip location normalization method is based on a search algorithm of the lip position in which the location normalization is integrated into the model training. Experiments of speaker-independent isolated word recognition were carried out on the Tulips1 and M2VTS databases. Experiments showed a recognition rate of 74.5% and an error reduction rate of 35.7% for ten-digit word recognition on the M2VTS database.

  459.   Furukawa, T, Gu, J, Lee, WS, and Magnenat-Thalmann, N, "3D clothes modeling from photo cloned human body," VIRTUAL WORLDS, LECTURE NOTES IN ARTIFICIAL INTELLIGENCE, vol. 1834, pp. 159-170, 2000.

Abstract:   An important advantage of virtual reality technology is that real 3D objects including humans can be edited in the virtual world. In this paper, we present a technique for 3D clothes modeling based on a photo cloned human body. Photo cloning is an efficient 3D human body modeling method using a generic body model and photographs. A part segmentation technique for 3D color objects is applied for the clothes modeling, which uses multi-dimensional mixture Gaussian fitting. Firstly, we construct a 6D point set representing both the geometric and color information. Next, the mixture Gaussians are fitted to the point set by using the EM algorithm in order to determine the clusters. This approximation gives probabilities for each point. Finally, the probabilities determine the segmented part models corresponding to the clothes models. An advantage of this method is that the clustering is unsupervised learning without any prior knowledge as well as integrating geometric and color data in multi-dimensional space.

  460.   Ding, ZH, and Friedman, MH, "Quantification of 3-D coronary arterial motion using clinical biplane cineangiograms," INTERNATIONAL JOURNAL OF CARDIAC IMAGING, vol. 16, pp. 331-346, 2000.

Abstract:   Speculation that the motion of the coronary arteries might be involved in the pathogenesis of coronary atherosclerosis has generated growing interest in the study of this motion. Accordingly, a system has been developed to quantify 3-D coronary arterial motion using clinical biplane cineangiograms. Exploiting the temporal continuity of sequential angiographic images, a template matching technique is designed to track the non-uniform frame-to-frame motion of coronary arteries without assuming that the vessels experience uniform axial strain. The implementation of the system is automated by a coarse-to-fine matching process, thus improving the efficiency and objectivity of motion analysis. The system has been validated and employed to characterize the in vivo motion dynamics of human coronary arteries; illustrative results show that this system is a promising tool for routine clinical and laboratory analysis of coronary arterial motion.

  461.   Wu, RY, Ling, KV, and Ng, WS, "Automatic prostate boundary recognition in sonographic images using feature model and genetic algorithm," JOURNAL OF ULTRASOUND IN MEDICINE, vol. 19, pp. 771-782, 2000.

Abstract:   This paper describes the development of a model based boundary recognition system for transrectal prostate ultrasonographic images. It consists of two techniques: boundary modeling and boundary searching with model constraints. To achieve higher specificity of the model, a method called feature modeling is derived from the existing point distribution modeling method. To improve the robustness of the searching technique, the genetic algorithm is used. An incremental genetic algorithm with crowding replacement and a binary-string chromosome type was found experimentally to give good search results. It was shown that the system could recognize the boundary with considerable accuracy and consistency within a few minutes in transrectal ultrasonographic images taken from the approximate middle position of the prostate.

  462.   Leavers, VF, "Use of the two-dimensional Radon Transform to generate a taxonomy of shape for the characterization of abrasive powder particles," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 1411-1423, 2000.

Abstract:   A novel image processing technique for the extraction of parameters characteristic of the shape and angularity of abrasive powder particles is proposed. The image data are not analyzed directly. Information concerning angularity and shape is extracted from the parametric transformation of the 2D binarized edge map. The transformation process used, the Radon Transform, is one-to-many; that is, each image point generates in transform space the parameters of all the possible curves on which it may lie, and the resulting distribution is an accumulation of that evidence. Once the image data are segmented, the technique has the potential to deliver a comprehensive numerical description of the shape and angularity of the particles under investigation without the need for further interaction by the operator. The parameters obtained are arranged into a taxonomy according to their usefulness in categorizing the shapes under inspection. The technique is novel in that it offers an analytical definition of a corner and its apex and it automatically selects only those protrusions coincident with the convex hull of the shape and, hence, those most likely to contribute to the process of abrasion. The advantages and potential pitfalls of using the technique are illustrated and discussed using real image data.

  463.   Chen, CM, and Lu, HHS, "An adaptive snake model for ultrasound image segmentation: Modified trimmed mean filter, ramp integration and adaptive weighting parameters," ULTRASONIC IMAGING, vol. 22, pp. 214-236, 2000.

Abstract:   The snake model is a widely-used approach to finding the boundary of the object of interest in an ultrasound image. However, due to the speckles, the weak edges and the tissue-related textures in an ultrasound image, conventional snake models usually cannot obtain the desired boundary satisfactorily. In this paper, we propose a new adaptive snake model for ultrasound image segmentation. The proposed snake model is composed of three major techniques, namely, the modified trimmed mean (MTM) filtering, ramp integration and adaptive weighting parameters. With the advantages of the mean and median filters, the MTM filter is employed to alleviate the speckle interference in the segmentation process. The weak edge enhancement by ramp integration attempts to capture the slowly varying edges, which are hard to capture by conventional snake models. The adaptive weighting parameter allows the weighting of each energy term to change adaptively during the deformation process. The proposed snake model has been verified on phantom and clinical ultrasound images. The experimental results showed that the proposed snake model achieves a reasonable performance with an initial contour placed 10 to 20 pixels away from the desired boundary. The mean minimal distances from the derived boundary to the desired boundary have been shown to be less than 3.5 (for CNR greater than or equal to 0.5) and 2.5 pixels, respectively, for the phantom and ultrasound images.
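
For reference, adaptive snake models of this kind minimize a weighted variant of the classical snake energy; a generic form (not necessarily the exact formulation of this paper, whose weights change during deformation) is

    E(\mathbf{v}) = \int_0^1 \left[ \alpha(s)\,\lvert \mathbf{v}'(s) \rvert^2 + \beta(s)\,\lvert \mathbf{v}''(s) \rvert^2 + \gamma(s)\, E_{\mathrm{ext}}(\mathbf{v}(s)) \right] ds,

where \mathbf{v}(s) is the contour, \alpha and \beta weight tension and rigidity, and \gamma weights the image-derived (external) energy, here presumably computed from the MTM-filtered, ramp-integrated image; "adaptive weighting" means these coefficients are updated as the contour deforms rather than held fixed.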

  464.   Zaritsky, R, Peterfreund, N, and Shimkin, N, "Velocity-Guided tracking of deformable contours in three dimensional space," COMPUTER VISION - ECCV 2000, PT I, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1842, pp. 251-266, 2000.

Abstract:   This paper presents a 3D active contour model for boundary tracking, motion analysis and position prediction of non-rigid objects, which applies stereo vision and velocity control to the class of deformable contour models, known as snakes. The proposed contour evolves in three dimensional space in reaction to a 3D potential function, which is derived by projecting the contour onto the 2D stereo images. The potential function is augmented by a velocity term, which is related to the three dimensional velocity field along the contour, and is used to guide the contour displacement between subsequent images. This leads to improved spatio-temporal tracking performance, which is demonstrated through experimental results with real and synthetic images. Good tracking performance is obtained with as little as one iteration per frame, which provides a considerable advantage for real time operation.

  465.   Harari, D, Furst, M, Kiryati, N, Caspi, A, and Davidson, M, "Computer-based assessment of body image distortion in anorexia nervosa patients," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 766-775, 2000.

Abstract:   A computer-based method for the assessment of body image distortions in anorexia nervosa and other eating-disordered patients is presented. At the core of the method is a realistic pictorial simulation of lifelike weight-changes, applied to a real source image of the patient. The patients, using a graphical user interface, adjust their body shapes until they meet their self-perceived appearance. Measuring the extent of virtual fattening or slimming of a body with respect to its real shape and size allows direct, quantitative evaluation of the cognitive distortion in body image. In a preliminary experiment involving 20 anorexia-nervosa patients, 70% of the subjects chose an image with simulated visual weight gain of about 20% as their "real" body image. None of them recognized the original body image, thus demonstrating the quality of the transformed images. The method presented can be applied in the research, diagnosis, evaluation and treatment of eating disorders.

  466.   Baert, SAM, Niessen, WJ, Meijering, EHW, Frangi, AF, and Viergever, MA, "Guide wire tracking during endovascular interventions," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 727-734, 2000.

Abstract:   A method is presented to extract and track the position of a guide wire during endovascular interventions under X-ray fluoroscopy. The method can be used to improve guide wire visualization in the low quality fluoroscopy images. A two-step procedure is utilized to track the guide wire in subsequent frames. First a rough estimate of the displacement is obtained using a template matching procedure. Subsequently, the position of the guide wire is determined by fitting the guide wire to a feature image in which line-like structures are enhanced. In this optimization step the influence of the scale at which the feature is calculated and the additional value of using directional information is investigated. The method is applied both on the original and subtraction images. Using the proper parameter settings, the guide wire could successfully be tracked based on the original images, in 141 out of 146 frames from 5 image sequences.

  467.   Jang, DS, Jang, SW, and Choi, HI, "Structured Kalman filter for tracking partially occluded moving objects," BIOLOGICALLY MOTIVATED COMPUTER VISION, PROCEEDING, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1811, pp. 248-257, 2000.

Abstract:   Moving object tracking is one of the most important techniques in motion analysis and understanding, and it poses many difficult problems. Estimating and tracking moving objects is especially difficult when the background and the moving objects vary dynamically. The Kalman filter has been used to estimate motion information and to use that information to predict the appearance of targets in succeeding frames. In such complex environments, targets may disappear totally or partially due to occlusion by other objects. In this paper, we propose another version of the Kalman filter, called the structured Kalman filter, which can successfully estimate motion information under deteriorating conditions such as occlusion. Experimental results show that the suggested approach is very effective in reliably estimating and tracking non-rigid moving objects.
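
As background for this and the other Kalman-filter-based trackers cited below, the sketch that follows shows the standard predict/update recursion such variants build on; the constant-velocity state model and all variable names are illustrative assumptions, not the structured formulation proposed in the paper.

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One standard Kalman cycle: predict with the motion model F,
        then update with the measurement z observed through H."""
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Illustrative constant-velocity model for a 2-D point target (assumption).
    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)    # only position is observed
    Q, R = 0.01 * np.eye(4), 1.0 * np.eye(2)             # process / measurement noise
    x, P = np.zeros(4), np.eye(4)
    for z in ([1.0, 0.5], [2.1, 1.1], [3.0, 1.4]):       # measured target positions
        x, P = kalman_step(x, P, np.array(z), F, H, Q, R)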

  468.   Gatzoulis, L, Anderson, T, Pye, SD, O'Donnell, R, McLean, CC, and McDicken, WN, "Scanning techniques for three-dimensional forward-viewing intravascular ultrasound imaging," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 26, pp. 1461-1474, 2000.

Abstract:   Intravascular ultrasound (US) imaging is a useful tool for assessing arterial disease and aiding treatment procedures. Forward-viewing intravascular US imaging could be of particular use in severely stenosed or totally occluded arteries, where the current side-viewing intravascular US systems are limited by their inability to access the site of interest. In this study, five 3-D forward-viewing intravascular scanning patterns were investigated. The work was carried out using scaled-up vessel phantoms constructed from tissue-mimicking material and a PC-controlled scanning and acquisition system. The scanning patterns were examined and evaluated with regard to the image quality of dense and sparse data sets, the accuracy of quantitative measurements of lumen dimensions and the potential for clinical use. The relative merits and drawbacks of the different patterns are discussed and a preferred scanning pattern is recommended. (C) 2001 World Federation for Ultrasound in Medicine & Biology.

  469.   Murino, V, and Trucco, A, "Three-dimensional image generation and processing in underwater acoustic vision," PROCEEDINGS OF THE IEEE, vol. 88, pp. 1903-1946, 2000.

Abstract:   Underwater exploration is becoming more and more important for many applications involving physical, biological, geological, archaeological, and industrial issues. Unfortunately, only a small percentage of potential resources has been exploited under the sea. The inherent structureless environment and the difficulties implied by the nature of the propagating medium have placed limitations on the sensing and the understanding of the underwater world. Typically, acoustic imaging systems are widely utilized for both large- and small-scale underwater investigations, as they can more easily achieve short and large visibility ranges, though at the expense of a coarse resolution and a poor visual quality. This paper aims at surveying the up-to-date advances in acoustic acquisition systems and data processing techniques, especially focusing on three-dimensional (3-D) short-range imaging for scene reconstruction and understanding. In fact, the advent of smarter and more efficient imaging systems has allowed the generation of good-quality high-resolution images and the related design of proper techniques for underwater scene understanding. The term acoustic vision is introduced to generally describe all data processing (especially image processing) methods devoted to the interpretation of a scene. Since acoustics is also used for medical applications, a short overview of the related systems for biomedical acoustic image formation is provided. The final goal of the paper is to establish the state of the art of the techniques and algorithms for acoustic image generation and processing, providing technical details and results for the most promising techniques, and pointing out the potential capabilities of this technology for underwater scene understanding.

  470.   Zeng, ZH, and Ma, SD, "Real-time face tracking under partial occlusion and illumination change," ADVANCES IN MULTIMODAL INTERFACES - ICMI 2000, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1948, pp. 135-142, 2000.

Abstract:   In this paper, we present an approach which tracks human faces robustly in real-time applications by taking advantage of both region matching and an active contour model. Region matching with motion prediction robustly locates the approximate position of the target, then the active contour model detects the local variation of the target's boundary, which is insensitive to illumination changes, and the results from the active contour model guide the updating of the template for successive tracking. In this way, the system can tolerate changes in both pose and illumination. To reduce the influence of local error due to partial occlusion and weak edge strength, we use a priori knowledge of head shape to re-initialize the curve of the object every few frames. To realize real-time tracking, we adopt region matching with adaptive matching density and modify the greedy algorithm to make its implementation more effective. The proposed technique is applied to track the head of a person doing Taiji exercise in live video sequences. The system demonstrates promising performance, and the tracking time per frame is about 40 ms on a Pentium II 400 MHz PC.

  471.   Wang, R, Gao, W, and Ma, JY, "An approach to robust and fast locating lip motion," ADVANCES IN MULTIMODAL INTERFACES - ICMI 2000, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1948, pp. 332-339, 2000.

Abstract:   In this paper, we present a novel approach to robust and fast locating of lip motion. Firstly, the Fisher transform with constraints is presented to enhance the lip region in a face image. Secondly, two distribution characteristics of the lip in human face space are proposed to increase the accuracy and real-time performance of lip locating. Experiments with 2000 images show that this approach can satisfy requirements not only in real-time performance but also in reliability and accuracy.

  472.   Tabb, K, Davey, N, Adams, R, and George, S, "Analysis of human motion using snakes and neural networks," ARTICULATED MOTION AND DEFORMABLE OBJECTS, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1899, pp. 48-57, 2000.

Abstract:   A novel technique is described for analysing human movement in outdoor scenes. Following initial detection of the humans using active contour models, the contours are then re-represented as normalised axis crossover vectors. These vectors are then fed into a neural network which determines the typicality of a given human shape, allowing for a given human's motion deformation to be analysed. Experiments are described which investigate the success of the technique being presented.

  473.   Schenk, A, Prause, G, and Peitgen, HO, "Efficient semiautomatic segmentation of 3D objects in medical images," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 186-195, 2000.

Abstract:   We present a fast and accurate tool for semiautomatic segmentation of volumetric medical images based on the live wire algorithm, shape-based interpolation and a new optimization method. While the user-steered live wire algorithm represents an efficient, precise and reproducible method for interactive segmentation of selected two-dimensional images, the shape-based interpolation allows the automatic approximation of contours on slices between user-defined boundaries. The combination of both methods leads to accurate segmentations with significantly reduced user interaction time. Moreover, the subsequent automated optimization of the interpolated object contours results in better segmentation quality, or can be used to extend the distances between user-segmented images and further reduce interaction time. Experiments were carried out on hepatic computed tomography scans from three different clinics. The results of the segmentation of liver parenchyma have shown that the user interaction time can be reduced by more than 60% by the combination of shape-based interpolation and our optimization method, with volume deviations in the magnitude of inter-user differences.
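
The user-steered live wire step can be sketched as a minimum-cost path search on a pixel cost image (Dijkstra's algorithm between two user-placed seeds); the cost definition and the toy image below are illustrative assumptions rather than the implementation described in the paper.

    import heapq
    import numpy as np

    def live_wire_path(cost, start, end):
        """Minimum-cost 8-connected path between two seed pixels on a cost image."""
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = cost[start]
        heap = [(cost[start], start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == end:
                break
            if d > dist[r, c]:
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                        nd = d + cost[nr, nc]
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
        path, node = [end], end            # walk back from the end seed
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

    # Toy cost image (assumption): one cheap row stands in for a strong boundary.
    cost = np.ones((20, 20)); cost[10, :] = 0.1
    print(live_wire_path(cost, (10, 0), (10, 19))[:5])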

  474.   Chen, T, and Metaxas, D, "Image segmentation based on the integration of Markov Random Fields and deformable models," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 256-265, 2000.

Abstract:   This paper proposes a new methodology for image segmentation based on the integration of deformable and Markov Random Field models. Our method makes use of Markov Random Field theory to build a Gibbs prior model of medical images with arbitrary initial parameters to estimate the boundary of organs with a low signal-to-noise ratio (SNR). Then we use a deformable model to fit the estimated boundary. The result of the deformable model fit is used to update the Gibbs prior model parameters, such as the gradient threshold of a boundary. Based on the updated parameters we restart the Gibbs prior models. By iteratively integrating these processes we achieve an automated segmentation of the initial images. By careful choice of the method used for the Gibbs prior models, and based on the above integration with the deformable model, our segmentation solution runs in close to real time. Results of the method are presented for several examples, including some MRI images with a significant amount of noise.

  475.   Moretti, B, Fadili, JM, Ruan, S, Bloyet, D, and Mazoyer, B, "Phantom-based performance evaluation: Application to brain segmentation from magnetic resonance images," MEDICAL IMAGE ANALYSIS, vol. 4, pp. 303-316, 2000.

Abstract:   This paper presents a new technique for assessing the accuracy of segmentation algorithms, applied to the performance evaluation of brain editing and brain tissue segmentation algorithms for magnetic resonance images. We propose performance evaluation criteria derived from the use of the realistic digital brain phantom Brainweb. This 'ground truth' allows us to build distance-based discrepancy features between the edited brain or the segmented brain tissues (such as cerebro-spinal fluid, grey matter and white matter) and the phantom model, taken as a reference. Furthermore, segmentation errors can be spatially determined, and ranged in terms of their distance to the reference. The brain editing method used is the combination of two segmentation techniques. The first is based on binary mathematical morphology and a region growing approach. It represents the initialization step, the results of which are then refined with the second method, using an active contour model. The brain tissue segmentation used is based on a Markov random field model. Segmentation results are shown on the phantom for each method, and on real magnetic resonance images for the editing step; performance is evaluated by the new distance-based technique and corroborates the effective refinement of the segmentation using active contours. The criteria described here can supersede biased visual inspection in order to compare, evaluate and validate any segmentation algorithm. Moreover, provided a 'ground truth' is given, we are able to determine quantitatively to what extent a segmentation algorithm is sensitive to internal parameters, noise, artefacts or distortions. (C) 2000 Elsevier Science B.V. All rights reserved.
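
A distance-based discrepancy of the kind described can be sketched with a Euclidean distance transform: every voxel on which the segmentation and the phantom reference disagree is scored by its distance to the reference boundary. The binary masks and the summary statistic below are assumptions for illustration, not the authors' exact criteria.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, binary_erosion

    def boundary_distance_errors(segmentation, reference):
        """Distance (in voxels) from each mislabelled voxel to the reference boundary."""
        seg = segmentation.astype(bool)
        ref = reference.astype(bool)
        # Reference boundary: reference voxels that touch the background.
        boundary = ref & ~binary_erosion(ref)
        # Distance of every voxel to the nearest reference-boundary voxel.
        dist_to_boundary = distance_transform_edt(~boundary)
        errors = seg ^ ref                   # voxels where the two labelings disagree
        return dist_to_boundary[errors]      # one distance per erroneous voxel

    # Toy 3-D masks (assumption): larger values indicate errors far from the
    # reference surface, which can then be binned or summarized.
    seg = np.zeros((32, 32, 32), bool); seg[8:24, 8:24, 8:26] = True
    ref = np.zeros((32, 32, 32), bool); ref[8:24, 8:24, 8:24] = True
    print(boundary_distance_errors(seg, ref).mean())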

  476.   Rifai, H, Bloch, I, Hutchinson, S, Wiart, J, and Garnero, L, "Segmentation of the skull in MRI volumes using deformable model and taking the partial volume effect into account," MEDICAL IMAGE ANALYSIS, vol. 4, pp. 219-233, 2000.

Abstract:   Segmentation of the skull in medical imagery is an important stage in applications that require the construction of realistic models of the head. Such models are used, for example, to simulate the behavior of electro-magnetic fields in the head and to model the electrical activity of the cortex in EEG and MEG data. In this paper, we present a new approach for segmenting regions of bone in MRI volumes using deformable models. Our method takes into account the partial volume effects that occur with MRI data, thus permitting a precise segmentation of these bone regions. At each iteration of the propagation of the model, partial volume is estimated in a narrow band around the deformable model. Our segmentation method begins with a pre-segmentation stage, in which a preliminary segmentation of the skull is constructed using a region-growing method. The surface that bounds the pre-segmented skull region offers an automatic 3D initialization of the deformable model. This surface is then propagated (in 3D) in the direction of its normal. This propagation is achieved using the level set method, thus permitting changes to occur in the topology of the surface as it evolves, an essential capability for our problem. The speed at which the surface evolves is a function of the estimated partial volume. This provides sub-voxel accuracy in the resulting segmentation. (C) 2000 Elsevier Science B.V. All rights reserved.

  477.   Westin, CF, Richolt, J, Moharir, V, and Kikinis, R, "Affine adaptive filtering of CT data," MEDICAL IMAGE ANALYSIS, vol. 4, pp. 161-177, 2000.

Abstract:   A novel method for resampling and enhancing image data using multidimensional adaptive filters is presented. The underlying issue that this paper addresses is segmentation of image structures that are close in size to the voxel geometry. Adaptive filtering is used both to reduce the effects of partial volume averaging, by resampling the data to a lattice with higher sample density, and to reduce the image noise level. Resampling is achieved by constructing filter sets that have subpixel offsets relative to the original sampling lattice. The filters are also frequency corrected for anisotropic voxel dimensions. The shift and the voxel dimensions are described by an affine transform, which provides a model for tuning the filter frequency functions. The method has been evaluated on CT data where the voxels are in general non-cubic. The in-plane resolution in CT image volumes is often higher by a factor of 3-10 than the through-plane resolution. The method clearly shows an improvement over conventional resampling techniques such as cubic spline interpolation and sinc interpolation. (C) 2000 Elsevier Science B.V. All rights reserved.

  478.   Ozanian, TO, and Phillips, R, "Image analysis for computer-assisted internal fixation of hip fractures," MEDICAL IMAGE ANALYSIS, vol. 4, pp. 137-159, 2000.

Abstract:   This paper focuses on the task of automatic feature detection for intra-operative drilling trajectory planning for computer-assisted internal fixation of hip fractures. The features of interest are the lateral cortex line of the femoral shaft, the femoral neck centre and the femoral head centre, the latter being the most challenging of all. Since the object is known, the detection process is regarded as a localisation task rather than a recognition one. Simple anatomical relationships between bone parts provide a naturally hierarchical approach to searching, allowing refinement of image-derived information based on a priori constraints. Use of knowledge and an unconventional "divide-and-conquer" approach produce more reliable and faster results than the standard global image processing routine. Analysis of summed 1D grey level profiles is used as a main segmentation tool to carry out the above strategy. (C) 2000 Elsevier Science B.V. All rights reserved.

  479.   McInerney, T, and Terzopoulos, D, "T-snakes: Topology adaptive snakes," MEDICAL IMAGE ANALYSIS, vol. 4, pp. 73-91, 2000.

Abstract:   We present a new class of deformable contours (snakes) and apply them to the segmentation of medical images. Our snakes are defined in terms of an affine cell image decomposition (ACID). The 'snakes in ACID' framework significantly extends conventional snakes, enabling topological flexibility among other features. The resulting topology adaptive snakes, or 'T-snakes', can be used to segment some of the most complex-shaped biological structures from medical images in an efficient and highly automated manner. (C) 2000 Elsevier Science B.V. All rights reserved.

  480.   Westin, CF, Lorigo, LM, Faugeras, O, Grimson, WEL, Dawson, S, Norbash, A, and Kikinis, R, "Segmentation by adaptive geodesic active contours," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 266-275, 2000.

Abstract:   This paper introduces the use of spatially adaptive components into the geodesic active contour segmentation method for application to volumetric medical images. These components are derived from local structure descriptors and are used both in regularization of the segmentation and in stabilization of the image-based vector field which attracts the contours to anatomical structures in the images. They are further used to incorporate prior knowledge about spatial location of the structures of interest. These components can potentially decrease the sensitivity to parameter settings inside the contour evolution system while increasing robustness to image noise. We show segmentation results on blood vessels in magnetic resonance angiography data and bone in computed tomography data.

  481.   Boykov, Y, and Jolly, MP, "Interactive organ segmentation using graph cuts," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 276-286, 2000.

Abstract:   An N-dimensional image is divided into "object" and "background" segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be identified a priori as object or background seeds, providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient, then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.
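
The construction can be illustrated on a tiny 1-D "image": neighboring pixels are linked by capacities that fall off with the intensity gradient, seeds are tied to the two terminals with effectively infinite capacity, and a standard min-cut gives the object/background split. The weighting function and seed choice below are assumptions for illustration, not the paper's exact energy.

    import math
    import networkx as nx

    intensities = [10, 12, 11, 60, 62, 61, 58]   # toy 1-D image (assumption)
    object_seeds, background_seeds = {4}, {0}    # user-marked pixels (assumption)
    sigma, big = 10.0, 1e9

    G = nx.DiGraph()
    for p, q in zip(range(len(intensities) - 1), range(1, len(intensities))):
        # n-links: cheap to cut across a strong gradient, expensive in smooth regions
        w = math.exp(-((intensities[p] - intensities[q]) ** 2) / (2 * sigma ** 2))
        G.add_edge(p, q, capacity=w)
        G.add_edge(q, p, capacity=w)
    for p in range(len(intensities)):
        # t-links: hard constraints for seeds only (a likelihood term could be added)
        G.add_edge("obj", p, capacity=big if p in object_seeds else 0.0)
        G.add_edge(p, "bkg", capacity=big if p in background_seeds else 0.0)

    cut_value, (obj_side, bkg_side) = nx.minimum_cut(G, "obj", "bkg")
    print(sorted(x for x in obj_side if x != "obj"))   # pixels labelled as object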

  482.   Yao, JH, and Taylor, R, "Tetrahedral mesh modeling of density data for anatomical atlases and intensity-based registration," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 531-540, 2000.

Abstract:   In this paper, we present the first phase of our effort to build a bone density atlas. We adopted a tetrahedral mesh structure to represent anatomical structures. We propose an efficient and automatic algorithm to construct the tetrahedral mesh from contours in CT images corresponding to the outer bone surfaces and boundaries between compact bone, spongy bone, and medullary cavity. We approximate bone density variations by means of continuous density functions in each tetrahedron of the mesh. Currently, our density functions are second degree polynomial functions expressed in terms of barycentric coordinates associated with each tetrahedron. We apply our density model to efficiently generate Digitally Reconstructed Radiographs. These results are immediately applicable as means of speeding up 2D-3D and 3D-3D intensity based registration and will be incorporated into our future work on construction of atlases and deformable intensity-based registration.

  483.   Hernandez-Hoyos, M, Anwander, A, Orkisz, M, Roux, JP, Douek, P, and Magnin, IE, "A deformable vessel model with single point initialization for segmentation, quantification and visualization of blood vessels in 3D MRA," MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1935, pp. 735-745, 2000.

Abstract:   We deal with image segmentation applied to three-dimensional (3D) analysis of vascular morphology in magnetic resonance angiography (MRA) images. The main goal of our work is to develop a fast and reliable method for stenosis quantification. The first step towards this purpose is the extraction of the vessel axis by an expansible skeleton method. Vessel boundaries are then detected in the planes locally orthogonal to the centerline using an improved active contour. Finally, area measurements based on the resulting contours allow the calculation of stenosis parameters. The expansible nature of the skeleton associated with a single point initialization of the active contour allows overcoming some limitations of traditional deformable models. As a result, the algorithm performs well even for severe stenosis and significant vessel curvatures. Experimental results are presented in 3D phantom images as well as in real images of patients.

  484.   Gomes, J, and Faugeras, O, "Level sets and distance functions," COMPUTER VISION - ECCV 2000, PT I, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1842, pp. 588-602, 2000.

Abstract:   This paper is concerned with the simulation of the Partial Differential Equation (PDE) driven evolution of a closed surface by means of an implicit representation. In most applications, the natural choice for the implicit representation is the signed distance function to the closed surface. Osher and Sethian propose to evolve the distance function with a Hamilton-Jacobi equation. Unfortunately the solution to this equation is not a distance function. As a consequence, the practical application of the level set method is plagued with such questions as: When do we have to "reinitialize" the distance function? How do we "reinitialize" the distance function? These reveal a disagreement between the theory and its implementation. This paper proposes an alternative to the use of Hamilton-Jacobi equations which eliminates this contradiction: in our method the implicit representation always remains a distance function by construction, and the implementation does not differ from the theory anymore. This is achieved through the introduction of a new equation. Besides its theoretical advantages, the proposed method also has several practical advantages which we demonstrate in three applications: (i) the segmentation of the human cortex surfaces from MRI images using two coupled surfaces [27], (ii) the construction of a hierarchy of Euclidean skeletons of a 3D surface, (iii) the reconstruction of the surface of 3D objects through stereo [13].
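
For reference, the setting discussed is the standard level set evolution of a closed surface S(t) embedded as the zero set of an implicit function \phi, together with the signed-distance property that the proposed method preserves by construction:

    \frac{\partial \phi}{\partial t} + F\,\lvert \nabla \phi \rvert = 0,
    \qquad S(t) = \{\, \mathbf{x} : \phi(\mathbf{x}, t) = 0 \,\},
    \qquad \lvert \nabla \phi \rvert = 1 \ \text{(signed distance function)}.

Evolving \phi with the Hamilton-Jacobi equation above does not keep \lvert \nabla \phi \rvert = 1, which is why periodic reinitialization is normally required; the new evolution equation referred to in the abstract is designed so that \phi remains a distance function throughout.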

  485.   Weerasinghe, C, Ji, L, and Yan, H, "A new method for ROI extraction from motion affected MR images based on suppression of artifacts in the image background," SIGNAL PROCESSING, vol. 80, pp. 867-881, 2000.

Abstract:   Patient motion during a magnetic resonance imaging (MRI) examination causes ghost artifacts and blurring in the image. Object boundary extraction from such a degraded image is a challenging task, especially if the motion function of the object is unknown. Although there are many algorithms presently available for solving segmentation tasks, they can be easily misled by the ghost artifacts and blurring in the background of the image. Therefore, we propose a two-step background clearing algorithm, in order to facilitate the object boundary extraction. The first step involves selection of the least motion affected views, using an entropy minimization criterion for suppression of motion induced blur. The second step involves cancellation of the remaining ghost artifacts, using a fuzzy model representing the image background region. Both the steps involved in background clearing tend to increase the number of dark pixels in the image. The contour extraction is performed using an active contour model (snake), which was previously developed by the authors. The proposed method has been applied to phantom data affected by severe rotational motion and to spin-echo MR images, producing encouraging results. (C) 2000 Elsevier Science B.V. All rights reserved.

  486.   Tsap, LV, Goldgof, DB, Sarkar, S, and Powers, PS, "A method for increasing precision and reliability of elasticity analysis in complicated burn scar cases," INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 14, pp. 189-210, 2000.

Abstract:   In this paper we propose a method for increasing precision and reliability of elasticity analysis in complicated burn scar cases. The need for a technique that would help physicians by objectively assessing elastic properties of scars motivated our original algorithm. This algorithm successfully employed active contours for tracking and finite element models for strain analysis. However, the previous approach considered only one normal area and one abnormal area within the region of interest, and scar shapes which were somewhat simplified. Most burn scars have rather complicated shapes and may include multiple regions with different elastic properties. Hence, we need a method capable of adequately addressing these characteristics. The new method can split the region into more than two localities with different material properties, select and quantify abnormal areas, and apply different forces if it is necessary for a better shape description of the scar. The method also demonstrates the application of scale and mesh refinement techniques in this important domain. It is accomplished by increasing the number of Finite Element Method (FEM) areas as well as the number of elements within the area. The method is successfully applied to elastic materials and real burn scar cases. We demonstrate all of the proposed techniques and investigate the behavior of the elasticity function in a 3-D space. Recovered properties of elastic materials are compared with those obtained by a conventional mechanics-based approach. Scar ratings achieved with the method are correlated against the judgments of physicians.

  487.   Jang, DS, and Choi, HI, "Active models for tracking moving objects," PATTERN RECOGNITION, vol. 33, pp. 1135-1146, 2000.

Abstract:   In this paper, we propose a model-based tracking algorithm which can extract trajectory information of a target object by detecting and tracking a moving object from a sequence of images. The algorithm constructs a model from the detected moving object and matches the model with successive image frames to track the target object. We use an active model which characterizes regional and structural features of a target object such as shape, texture, color, and edgeness. Our active model can adapt itself dynamically to an image sequence so that it can track a non-rigid moving object. Such an adaptation is made under the framework of energy minimization. We design an energy function so that the function can embody structural attributes of a target as well as its spectral attributes. We applied a Kalman filter to predict motion information. The predicted motion information was used very efficiently to reduce the search space in the matching process. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  488.   Bronkorsta, PJH, Reinders, MJT, Hendriks, EA, Grimbergen, J, Heethaar, RM, and Brankenhoff, GJ, "On-line detection of red blood cell shape using deformable templates," PATTERN RECOGNITION LETTERS, vol. 21, pp. 413-424, 2000.

Abstract:   For the purpose of automating a clinical diagnostic apparatus to quantify the deformability of human red blood cells, we present an automated image analysis procedure for on-line detection of the cell shape based upon the method of parametric deformable templates. (C) 2000 Elsevier Science B.V. All rights reserved.

  489.   Lanterman, AD, Grenander, U, and Miller, MI, "Bayesian segmentation via asymptotic partition functions," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 337-347, 2000.

Abstract:   Asymptotic approximations to the partition function of Gaussian random fields are derived. Textures are characterized via Gaussian random fields induced by stochastic difference equations determined by finitely supported, stationary, linear difference operators, adjusted to be nonstationary at the boundaries. It is shown that as the scale of the underlying shape increases, the log-normalizer converges to the integral of the log-spectrum of the operator inducing the random field. Fitting the covariance of the fields amounts to fitting the parameters of the spectrum of the differential operator-induced random field model. Matrix analysis techniques are proposed for handling textures with variable orientation. Examples of texture parameters estimated from training data via asymptotic maximum-likelihood are shown. Isotropic models involving powers of the Laplacian and directional models involving partial derivative mixtures are explored. Parameters are estimated for mitochondria and actin-myocin complexes in electron micrographs and clutter in forward-looking infrared images. Deformable template models are used to infer the shape of mitochondria in electron micrographs, with the asymptotic approximation allowing easy recomputation of the partition function as inference proceeds.

  490.   Gomes, J, and Faugeras, O, "Reconciling distance functions and level sets," JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, vol. 11, pp. 209-223, 2000.

Abstract:   This paper is concerned with the simulation of the partial differential equation driven evolution of a closed surface by means of an implicit representation. In most applications, the natural choice for the implicit representation is the signed distance function to the closed surface. Osher and Sethian have proposed to evolve the distance function with a Hamilton-Jacobi equation. Unfortunately the solution to this equation is not a distance function. As a consequence, the practical application of the level set method is plagued with such questions as: When do we have to reinitialize the distance function? How do we reinitialize the distance function? These reveal a disagreement between the theory and its implementation. This paper proposes an alternative to the use of Hamilton-Jacobi equations which eliminates this contradiction: in our method the implicit representation always remains a distance function by construction, and the implementation does not differ from the theory anymore. This is achieved through the introduction of a new equation. Besides its theoretical advantages, the proposed method also has several practical advantages which we demonstrate in three applications: (i) the segmentation of the human cortex surfaces from MRI images using two coupled surfaces (X. Zeng, et al., in Proceedings of the International Conference on Computer Vision and Pattern Recognition, June 1998), (ii) the construction of a hierarchy of Euclidean skeletons of a 3D surface, (iii) the reconstruction of the surface of 3D objects through stereo (O. Faugeras and R. Keriven, Lecture Notes in Computer Science, Vol. 1252, pp. 272-283). (C) 2000 Academic Press.

  491.   Shah, J, "Riemannian drums, anisotropic curve evolution, and segmentation," JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, vol. 11, pp. 142-153, 2000.

Abstract:   The method of curve evolution is a popular method for recovering shape boundaries. However, isotropic metrics have always been used to induce the flow of the curve, and potential steady states tend to be difficult to determine numerically, especially in noisy or low-contrast situations. Initial curves shrink past the steady state and soon vanish. In this paper, anisotropic metrics are considered to remedy the situation by taking the orientation of the feature gradient into account. The problem of shape recovery or segmentation is formulated as the problem of finding minimum cuts of a Riemannian manifold. Approximate methods, namely anisotropic geodesic flows and the solution of an eigenvalue problem, are discussed. (C) 2000 Academic Press.

  492.   Chan, TE, Sandberg, BY, and Vese, LA, "Active contours without edges for vector-valued images," JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, vol. 11, pp. 130-141, 2000.

Abstract:   In this paper, we propose an active contour algorithm for object detection in vector-valued images (such as RGB or multispectral). The model is an extension of the scalar Chan-Vese algorithm to the vector-valued case [1]. The model minimizes a Mumford-Shah functional over the length of the contour, plus the sum of the fitting error over each component of the vector-valued image. Like the Chan-Vese model, our vector-valued model can detect edges both with or without gradient. We show examples where our model detects vector-valued objects which are undetectable in any scalar representation. For instance, objects with different missing parts in different channels are completely detected (such as occlusion). Also, in color images, objects which are invisible in each channel or in intensity can be detected by our algorithm. Finally, the model is robust with respect to noise, requiring no a priori denoising step. (C) 2000 Academic Press.
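
Up to notation, the functional being minimized is the multichannel extension of the Chan-Vese energy; with the fitting errors averaged over the N channels as the abstract describes (the paper's weights and normalization may differ), it can be written as

    E(c^{+}, c^{-}, C) = \mu\, \mathrm{Length}(C)
      + \frac{1}{N} \sum_{i=1}^{N} \lambda_i^{+} \int_{\mathrm{inside}(C)} \lvert u_i(x) - c_i^{+} \rvert^2 \, dx
      + \frac{1}{N} \sum_{i=1}^{N} \lambda_i^{-} \int_{\mathrm{outside}(C)} \lvert u_i(x) - c_i^{-} \rvert^2 \, dx,

where u_i is the i-th channel of the image, c_i^{+} and c_i^{-} are the average channel values inside and outside the contour C, and \lambda_i^{\pm} are per-channel weights. Because the fitting terms compare region averages rather than gradients, objects whose edges carry no gradient in a given channel can still be detected.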

  493.   Lepage, R, Rouhana, RG, St-Onge, B, Noumeir, R, and Desjardins, R, "Cellular neural network for automated detection of geological lineaments on radarsat images," IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 38, pp. 1224-1233, 2000.

Abstract:   The analysis of natural linear structures, termed "lineaments," in satellite images provides important information to the geologist. In the satellite imaging process, important features of the observed tridimensional scene, including geological lineaments, are mapped into the resulting 2-D image as sharp radiation variations or edge elements (edgels). Edgels are detected by a first-order differentiation operator and are linked together with those in the vicinity on the basis of orientation continuity. Lineaments are mapped into remotely sensed satellite images as long and continuous quasilinear features and can be described as a connected sequence of edgels whose direction may change gradually along the sequence. Parts of the same lineament can be occluded by geomorphological features and must be linked together, a major drawback with local and small-neighborhood detectors. We propose a cellular neural network (CNN) architecture to offer a large directional neighborhood to the lineament detection algorithm. The CNN uses a large circular neighborhood coupled with a directionally induced gradient field to link together edgels with similar and continuous orientation. Missing edgels are restored if a surrounding lineament is detected.

  494.   Oliver, N, Pentland, A, and Berard, F, "LAFTER: a real-time face and lips tracker with facial expression recognition," PATTERN RECOGNITION, vol. 33, pp. 1369-1382, 2000.

Abstract:   This paper describes an active-camera real-time system for tracking, shape description, and classification of the human face and mouth expressions using only a PC or equivalent computer. The system is based on use of 2-D blob features, which are spatially compact clusters of pixels that are similar in terms of low-level image properties. Patterns of behavior (e.g., facial expressions and head movements) can be classified in real-time using hidden Markov models (HMMs). The system has been tested on hundreds of users and has demonstrated extremely reliable and accurate performance. Typical facial expression classification accuracies are near 100%. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  495.   Toklu, C, Tekalp, AM, and Erdem, AT, "Semi-automatic video object segmentation in the presence of occlusion," IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 10, pp. 624-629, 2000.

Abstract:   We describe a semi-automatic approach for segmenting a video sequence into spatio-temporal video objects in the presence of occlusion. The motion and shape of each video object are represented by a 2-D mesh. Assuming that the boundary of an object of interest is interactively marked on some keyframes, the proposed method finds the boundary of the object in all other frames automatically by tracking the 2-D mesh representation of the object in both forward and backward directions. A key contribution of the proposed method is automatic detection of covered and uncovered regions at each frame, and assignment of pixels in the uncovered regions to the object or background based on color and motion similarity. Experimental results are presented on two MPEG-4 test sequences and the resulting segmentations are evaluated both visually and quantitatively.

  496.   Sarti, A, Malladi, R, and Sethian, JA, "Subjective surfaces: A method for completing missing boundaries," PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, vol. 97, pp. 6258-6263, 2000.

Abstract:   We present a model and algorithm for segmentation of images with missing boundaries. In many situations, the human visual system fills in missing gaps in edges and boundaries, building and completing information that is not present. This presents a considerable challenge in computer vision, since most algorithms attempt to exploit existing data. Completion models, which postulate how to construct missing data, are popular but are often trained and specific to particular images. In this paper, we take the following perspective: we consider a reference point within an image as given and then develop an algorithm that tries to build missing information on the basis of the given point of view and the available information as boundary data to the algorithm. We test the algorithm on some standard images, including the classical triangle of Kanizsa and low signal-to-noise ratio medical images.

  497.   Imelinska, C, Downes, MS, and Yuan, W, "Semi-automated color segmentation of anatomical tissue," COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 24, pp. 173-180, 2000.

Abstract:   We propose a semi-automated region-based color segmentation algorithm to extract anatomical structures, including soft tissues, in the color anatomy slices of the Visible Human data. Our approach is based on repeatedly dividing an image into regions using Voronoi diagrams and classifying the regions based on experimental classification statistics. The user has the option of reclassifying regions in order to improve the final boundary. Our results indicate that the algorithm can find accurate outlines in a small number of iterations and that manual interaction can markedly improve the outline. This approach can be extended to 3D color segmentation. (C) 2000 Published by Elsevier Science Ltd. All rights reserved.

  498.   Kim, JS, Koh, KC, and Cho, HS, "An active contour model with shape regulation scheme," ADVANCED ROBOTICS, vol. 14, pp. 495-514, 2000.

Abstract:   This paper presents an active method for locating target objects in images, which is aimed at improving the performance of detecting object boundaries by enhancing the behavioral characteristics of an active contour. The proposed active contour model simulates a mechanical system consisting of two main parts: the first is a rigid fixture, called the 'core', specifying the expected shape of target boundaries, while the second is an elastic rod attached to the rigid fixture. The elastic rod deforms or moves relative to the rigid core according to the classical laws of the mechanical system. When the initial contour is applied to image data, it is attracted near the dominant image features, but tries to keep its home shape and simultaneously makes the deformation smooth if a deformation is more natural for force equilibrium. This mechanism significantly improves the performance of detecting object boundaries in the presence of some disturbing image features. The active contour is scale invariant, thereby significantly relieving the difficulty in selecting proper values for the model parameters. The values for the model parameters can be selected to make the contour have the desired behaviors around the equilibrium position through the analysis of the vibration mode of the mechanical system. The performance of the proposed method is validated through a series of experiments, which include detection of heavily degraded objects, tracking of objects under non-rigid motion and comparisons with the original snake models.

  499.   Egmont-Petersen, M, Schreiner, U, Tromp, SC, Lehmann, TM, Slaaf, DW, and Arts, T, "Detection of leukocytes in contact with the vessel wall from in vivo microscope recordings using a neural network," IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, vol. 47, pp. 941-951, 2000.

Abstract:   Leukocytes play an important role in host defense, as they may travel from the blood stream into the tissue in reaction to inflammatory stimuli. The leukocyte-vessel wall interactions are studied in postcapillary vessels by intravital video microscopy during in vivo animal experiments. Sequences of video images are obtained and digitized with a frame grabber. A method for automatic detection and characterization of leukocytes in the video images is developed. Individual leukocytes are detected using a neural network that is trained with synthetic leukocyte images generated using a novel stochastic model. This model makes it feasible to generate images of leukocytes with different shapes and sizes under various lighting conditions. Experiments indicate that neural networks trained with the synthetic leukocyte images perform better than networks trained with images of manually detected leukocytes. The best performing neural network trained with synthetic leukocyte images resulted in an 18% larger area under the ROC curve than the best performing neural network trained with manually detected leukocytes.

  500.   Pardo, XM, and Cabello, D, "Biomedical active segmentation guided by edge saliency," PATTERN RECOGNITION LETTERS, vol. 21, pp. 559-572, 2000.

Abstract:   Deformable models are very popular approaches in biomedical image segmentation. Classical snake models are edge-oriented and work well if the target objects have distinct gradient values. This is not always true in biomedical imagery, which makes the model very dependent on initial conditions. In this work we propose an edge-based potential aimed at the elimination of local minima due to undesired edges. The new approach integrates knowledge about the features of the desired boundaries apart from gradient strength and uses a new method to eliminate local minima, which makes the segmentation less sensitive to initial contours. (C) 2000 Elsevier Science B.V. All rights reserved.

  501.   Chung, R, and Ho, CK, "3-D reconstruction from tomographic data using 2-D active contours," COMPUTERS AND BIOMEDICAL RESEARCH, vol. 33, pp. 186-210, 2000.

Abstract:   Reconstructing three-dimensional (3-D) shapes of structures like internal organs from tomographic data is an important problem in medical imaging. Various forms of the deformable surface model have been proposed to tackle it, but they are either computationally expensive or limited to tubular shapes. In this paper a 3-D reconstruction mechanism that requires only 2-D deformations is proposed. Advantages of the proposed model include that it is conformable to any 3-D shape, efficient, and highly parallelizable. Most importantly, it requires from the user an initial 2-D contour on only one of the tomograph slices to start with. Experimental results are shown to illustrate the performance of the model. (C) 2000 Academic Press.

  502.   Iannizzotto, G, and Vita, L, "Fast and accurate edge-based segmentation with no contour smoothing in 2-D real images," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 1232-1237, 2000.

Abstract:   In this paper we propose an edge-based segmentation algorithm built on a new type of active contour which is fast, has a low computational complexity and does not introduce unwanted smoothing on the retrieved contours. The contours are always returned as closed chains of points, resulting in a very useful base for subsequent shape representation techniques.

  503.   Wink, O, Niessen, WJ, and Viergever, MA, "Fast delineation and visualization of vessels in 3-D angiographic images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 337-346, 2000.

Abstract:   A method is presented which aids the clinician in obtaining quantitative measures and a three-dimensional (3-D) representation of vessels from 3-D angiographic data with a minimum of user interaction. Based on two user defined starting points, an iterative procedure tracks the central vessel axis. During the tracking process, the minimum diameter and a surface rendering of the vessels are computed, allowing for interactive inspection of the vasculature. Applications of the method to CTA, contrast enhanced (CE)-MRA and phase contrast (PC)-MRA images of the abdomen are shown. In all applications, a long stretch of vessels with varying width is tracked, delineated, and visualized in less than 10 s on a standard clinical workstation.

  504.   Chen, SJ, and Carroll, JD, "3-D reconstruction of coronary arterial tree to optimize angiographic visualization," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 318-336, 2000.

Abstract:   Due to vessel overlap and foreshortening, multiple projections are necessary to adequately evaluate the coronary tree with arteriography. Catheter-based interventions can only be optimally performed when these visualization problems are successfully solved. The traditional method provides multiple selected views in which overlap and foreshortening are subjectively minimized based on two-dimensional (2-D) projections. A pair of images acquired from a routine angiographic study at arbitrary orientation using a single-plane imaging system were chosen for three-dimensional (3-D) reconstruction. After the arterial segment of interest (e.g., a single coronary stenosis or bifurcation lesion) was selected, a set of gantry angulations minimizing segment foreshortening was calculated. Multiple computer-generated projection images with minimized segment foreshortening were then used to choose views with minimal overlapped vessels relative to the segment of interest. The optimized views could then be utilized to guide subsequent angiographic acquisition and interpretation. Over 800 cases of coronary arterial trees have been reconstructed, of which more than 40 cases were performed in-room during cardiac catheterization. The accuracy of 3-D length measurement was confirmed to be within an average root-mean-square (rms) 3.5% error using eight different pairs of angiograms of an intracoronary guidewire of 105-mm length with eight radiopaque markers at 15-mm interdistance. The accuracy of similarity between the additional computer-generated projections versus the actual acquired views was demonstrated with average rms errors of 3.09 mm and 3.13 mm in 20 LCA and 20 RCA cases, respectively. The projections of the reconstructed patient-specific 3-D coronary tree model can be utilized for planning optimal clinical views: minimal overlap and foreshortening. The assessment of lesion length and diameter narrowing can be optimized in both interventional cases and studies of disease progression and regression.

  505.   Suter, D, and Chen, F, "Left ventricular motion reconstruction based on elastic vector splines," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 295-305, 2000.

Abstract:   In medical imaging it is common to reconstruct dense motion estimates, from sparse measurements of that motion, using some form of elastic spline (thin-plate spline, snakes and other deformable models, etc.). Usually the elastic spline uses only bending energy (second-order smoothness constraint) or stretching energy (first-order smoothness constraint), or a combination of the two. These elastic splines belong to a family of elastic vector splines called the Laplacian splines. This spline family is derived from an energy minimization functional, which is composed of multiple-order smoothness constraints. These splines can be explicitly tuned to vary the smoothness of the solution according to the deformation in the modeled material/tissue. In this context, it is natural to question which members of the family will reconstruct the motion more accurately. We compare different members of this spline family to assess how well these splines reconstruct human cardiac motion. We find that the commonly used splines (containing first-order and/or second-order smoothness terms only) are not the most accurate for modeling human cardiac motion.
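
For orientation, a hedged sketch of the kind of multiple-order smoothness functional referred to above (the actual orders and weights used by Suter and Chen are not reproduced here) can be written in LaTeX as

    J(\mathbf{f}) = \sum_{k} \|\mathbf{f}(\mathbf{x}_k) - \mathbf{d}_k\|^2
                  + \lambda \sum_{m=1}^{M} \alpha_m \int_{\Omega} \|D^m \mathbf{f}(\mathbf{x})\|^2 \, d\mathbf{x},

where d_k are the sparse motion measurements, D^m collects the m-th order partial derivatives of the displacement field f, and keeping only m = 1 (stretching) or m = 2 (bending) recovers the commonly used first- and second-order splines; other weightings give the remaining members of the family.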

  506.   Szekely, G, Brechbuhler, C, Dual, J, Enzler, R, Hug, J, Hutter, R, Ironmonger, N, Kauer, M, Meier, V, Niederer, P, Rhomberg, A, Schmid, P, Schweitzer, G, Thaler, M, Vuskovic, V, Troster, G, Haller, U, and Bajka, M, "Virtual reality-based simulation of endoscopic surgery," PRESENCE-TELEOPERATORS AND VIRTUAL ENVIRONMENTS, vol. 9, pp. 310-333, 2000.

Abstract:   Virtual reality (VR)-based surgical simulator systems offer a very elegant approach to enriching and enhancing traditional training in endoscopic surgery. However, while a number of VR simulator systems have been proposed and realized in the past few years, most of these systems are far from being able to provide a reasonably realistic surgical environment. We explore the current limits for realism and the approaches to reaching and surpassing those limits by describing and analyzing the most important components of VR-based endoscopic simulators. The feasibility of the proposed techniques is demonstrated on a modular prototype system that implements the basic algorithms for VR training in gynaecologic laparoscopy.

  507.   Tillett, R, McFarlane, N, and Lines, J, "Estimating dimensions of free-swimming fish using 3D point distribution models," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 79, pp. 123-141, 2000.

Abstract:   Monitoring the growth of farmed fish is an important task which is currently difficult to carry out. An underwater stereo image analysis technique offers the potential for estimating key dimensions of free-swimming fish, from which the fish mass can be estimated. This paper describes the development of a three-dimensional point distribution model to capture the typical shape and variability of salmon viewed from the side. The model was fitted to stereo images of test fish by minimizing an energy function, which was based on probability distributions. The minimization was an iterated two-step method in which edges were selected for magnitude, direction, and proximity to the model, and the model was then fitted to the edges. A search strategy for locating the edges in 3D was devised. The model is tested on two image sets. In the first set 19 of the 26 fish are located in spite of their variable appearance and the presence of neighboring fish. In the second set the measurements made on 11 images of fish are compared with manual measurements of the fish dimensions and show an average error in length estimation of 5%. (C) 2000 Academic Press.

  508.   Erlandsson, K, Visvikis, D, Waddington, WA, and Jarritt, P, "Truncation reduction in fan-beam transmission scanning using the radon transform consistency conditions," IEEE TRANSACTIONS ON NUCLEAR SCIENCE, vol. 47, pp. 989-993, 2000.

Abstract:   Transmission scanning is needed for accurate attenuation correction in cardiac Single Photon Emission Tomography (SPET). Simultaneous emission and transmission imaging can be done using a scintillation camera with a fan-beam collimator and a line source at the focal point. The transmission data will be truncated, however, which may lead to inaccuracy in the reconstructed emission values. We have developed two different methods for augmentation of truncated transmission data, based on the Radon transform consistency conditions. Our results show that the uniformity in the myocardium can be improved with these methods, as compared to using the truncated data directly in the reconstruction.

  509.   Zhong, Y, Jain, AK, and Dubuisson-Jolly, MP, "Object tracking using deformable templates," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 544-549, 2000.

Abstract:   We propose a novel method for object tracking using prototype-based deformable template models. To track an object in an image sequence, we use a criterion which combines two terms: the frame-to-frame deviations of the object shape and the fidelity of the modeled shape to the input image. The deformable template model utilizes the prior shape information which is extracted from the previous frames along with a systematic shape deformation scheme to model the object shape in a new frame. The following image information is used in the tracking process: 1) edge and gradient information: the object boundary consists of pixels with large image gradient, 2) region consistency: the same object region possesses consistent color and texture throughout the sequence, and 3) interframe motion: the boundary of a moving object is characterized by large interframe motion. The tracking proceeds by optimizing an objective function which combines both the shape deformation and the fidelity of the modeled shape to the current image (in terms of gradient, texture, and interframe motion). The inherent structure in the deformable template, together with region, motion, and image gradient cues, makes the proposed algorithm relatively insensitive to the adverse effects of weak image features and moderate amounts of occlusion.

  510.   Ma, WY, and Manjunath, BS, "EdgeFlow: A technique for boundary detection and image segmentation," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 1375-1388, 2000.

Abstract:   A novel boundary detection scheme based on "edge flow" is proposed in this paper. This scheme utilizes a predictive coding model to identify the direction of change in color and texture at each image location at a given scale, and constructs an edge flow vector. By propagating the edge flow vectors, the boundaries can be detected at image locations which encounter two opposite directions of flow in the stable state. A user defined image scale is the only significant control parameter that is needed by the algorithm. The scheme facilitates integration of color and texture into a single framework for boundary detection. Segmentation results on a large and diverse collection of natural images are provided, demonstrating the usefulness of this method for content-based image retrieval.

  511.   Haque, H, Hassanien, AE, and Nakajima, M, "Generation of missing medical slices using morphing technology," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E83D, pp. 1400-1407, 2000.

Abstract:   When the inter-slice resolution of tomographic image slices is large, it is necessary to estimate the locations and intensities of pixels which would appear in the non-existent intermediate slices. This paper presents a new method for generating the missing medical slices from two given slices. It uses the contours of organs as the control parameters for the intensity information in the physical gaps of sequential medical slices. The Snake model is used for generating the control points required for the elastic body spline (EBS) morphing algorithm. Contour information derived from this segmentation pre-process is then further processed and used as control parameters to warp the corresponding regions in both input slices into compatible shapes. In this way, the intensity information of the interpolated intermediate slices can be derived more faithfully. In comparison with the existing intensity interpolation methods, including linear interpolation, which only considers corresponding points in a small physical neighborhood, this method warps the data images into similar shapes according to contour information to provide a more meaningful correspondence relationship.

  512.   Davison, NE, Eviatar, H, and Somorjai, RL, "Snakes simplified," PATTERN RECOGNITION, vol. 33, pp. 1651-1664, 2000.

Abstract:   The snake formulation of Eviatar and Somorjai has the advantages of being both conceptually simple and rapidly convergent. We extend this formulation in two ways, by exploring additional energy terms whose interpretation is transparent and by using a simple minimization technique. The usefulness of the simplified model is illustrated using artificial images as well as images obtained with MRI, optical microscopy and ultrasound. (C) 2000 Published by Elsevier Science Ltd on behalf of Pattern Recognition Society.
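
As an illustration of how simple such a minimization can be, the following is a generic greedy discrete-snake sketch in Python (a textbook-style scheme, not the specific energy terms or minimizer of Davison et al.; the continuity/curvature weights and the external energy image are assumptions):

    import numpy as np

    def greedy_snake(points, ext_energy, alpha=0.5, beta=0.5, n_iter=100):
        """Generic greedy discrete snake (illustrative sketch only).

        points     : (N, 2) array of (row, col) vertices of a closed contour
        ext_energy : 2-D array, low where the contour should settle
                     (e.g. the negated gradient magnitude of the image)
        alpha, beta: weights of the continuity and curvature terms
        """
        pts = points.astype(float).copy()
        offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        n = len(pts)
        for _ in range(n_iter):
            moved = False
            # Average spacing, used to keep the vertices evenly distributed.
            mean_spacing = np.mean(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))
            for i in range(n):
                prev_p, next_p = pts[i - 1], pts[(i + 1) % n]
                best_e, best_p = np.inf, pts[i].copy()
                for dr, dc in offsets:        # try the 3x3 neighbourhood
                    cand = pts[i] + np.array([dr, dc], dtype=float)
                    r, c = int(round(cand[0])), int(round(cand[1]))
                    if not (0 <= r < ext_energy.shape[0] and 0 <= c < ext_energy.shape[1]):
                        continue
                    e_cont = (np.linalg.norm(cand - prev_p) - mean_spacing) ** 2
                    e_curv = np.linalg.norm(prev_p - 2.0 * cand + next_p) ** 2
                    e = alpha * e_cont + beta * e_curv + ext_energy[r, c]
                    if e < best_e:
                        best_e, best_p = e, cand
                if not np.allclose(best_p, pts[i]):
                    pts[i], moved = best_p, True
            if not moved:                     # converged: no vertex moved
                break
        return pts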

  513.   Germond, L, Dojat, M, Taylor, C, and Garbay, C, "A cooperative framework for segmentation of MRI brain scans," ARTIFICIAL INTELLIGENCE IN MEDICINE, vol. 20, pp. 77-93, 2000.

Abstract:   Automatic segmentation of MRI brain scans is a complex task for two main reasons: the large variability of the human brain anatomy, which limits the use of general knowledge and, inherent to MRI acquisition, the artifacts present in the images that are difficult to process. To tackle these difficulties, we propose to mix, in a cooperative framework, several types of information and knowledge provided and used by complementary individual systems: presently, a multi-agent system, a deformable model and an edge detector. The outcome is a cooperative segmentation performed by a set of region and edge agents constrained automatically and dynamically by both the specific gray levels in the considered image, statistical models of the brain structures and general knowledge about MRI brain scans. Interactions between the individual systems follow three modes of cooperation: integrative, augmentative and confrontational cooperation, combined during the three steps of the segmentation process, namely the specialization of the seeded-region-growing agents, the fusion of heterogeneous information and the retroaction over slices. The described cooperative framework allows the dynamic adaptation of the segmentation process to the characteristics of each MRI brain scan. Its evaluation using realistic brain phantoms is reported. (C) 2000 Elsevier Science B.V. All rights reserved.

  514.   Shelton, CR, "Morphable Surface Models," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 38, pp. 75-91, 2000.

Abstract:   We describe a novel automatic technique for finding a dense correspondence between a pair of n-dimensional surfaces with arbitrary topologies. This method employs a different formulation than previous correspondence algorithms (such as optical flow) and includes images as a special case. We use this correspondence algorithm to build Morphable Surface Models (an extension of Morphable Models) from examples. We present a method for matching the model to new surfaces and demonstrate their use for analysis, synthesis, and clustering.

  515.   Drummond, T, and Cipolla, R, "Application of Lie algebras to visual servoing," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 37, pp. 21-41, 2000.

Abstract:   A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The target position is learned using the principle of 'teaching by showing' in which the supervisor places the robot in the correct target position and the system captures the necessary information to be able to return to that position. The sensor is placed in the end effector of the robot, the 'camera-in-hand' approach, and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour (under the weak perspective assumption), captured by use of an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to provide a novel method for integrating observed deformations of the target contour. These can be compensated with appropriate robot motion using a non-linear control structure. The local differential representation of contour deformations is extended to allow accurate integration of an extended series of small perturbations. This differs from existing approaches by virtue of the properties of the Lie algebra representation which implicitly embeds knowledge of the three-dimensional world within a two-dimensional image-based system. These techniques have been implemented using a video camera to control a 5 DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.

  516.   Hobolth, A, and Jensen, EBV, "Modelling stochastic changes in curve shape, with an application to cancer diagnostics," ADVANCES IN APPLIED PROBABILITY, vol. 32, pp. 344-362, 2000.

Abstract:   Often, the statistical analysis of the shape of a random planar curve is based on a model for a polygonal approximation to the curve. In the present paper, we instead describe the curve as a continuous stochastic deformation of a template curve. The advantage of this continuous approach is that the parameters in the model do not relate to a particular polygonal approximation. A somewhat similar approach has been used by Kent et al. (1996), who describe the limiting behaviour of a model with a first-order Markov property as the landmarks on the curve become closely spaced; see also Grenander(1993). The model studied in the present paper is an extension of this model. Our model possesses a second-order Markov property. Its geometrical characteristics are studied in some detail and an explicit expression for the covariance function is derived. The model is applied to the boundaries of profiles of cell nuclei from a benign tumour and a malignant tumour. It turns out that the model with the second-order Markov property is the most appropriate, and that it is indeed possible to distinguish between the two samples.

  517.   Tiddeman, B, Duffy, N, and Rabey, G, "Construction and visualisation of three-dimensional facial statistics," COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, vol. 63, pp. 9-20, 2000.

Abstract:   This paper presents a new method for the construction of three-dimensional (3D) probabilistic facial averages and demonstrates the potential for applications in clinical craniofacial research and patient assessment. Averages are constructed from a database of registered laser-range scans and photographic images using feature based image warping. Facial features are extracted using a template of connected contours, adapted to each subject interactively using snakes. Each subject's images are warped to the average template shape and the mean depth, colour and covariance matrix is found at each point. Statistical comparison of individuals with an average or between two averages is visualised by converting the probabilities to a coloured texture map. (C) 2000 Elsevier Science Ireland Ltd. All rights reserved.

  518.   Toklu, C, Erdem, AT, and Tekalp, AM, "Two-dimensional mesh-based mosaic representation for manipulation of video objects with occlusion," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 1617-1630, 2000.

Abstract:   We present a two-dimensional (2-D) mesh-based mosaic representation, consisting of an object mesh and a mosaic mesh for each frame and a final mosaic image, for video objects with mildly deformable motion in the presence of self and/or object-to-object (external) occlusion. Unlike classical mosaic representations where successive frames are registered using global motion models, we map the uncovered regions in the successive frames onto the mosaic reference frame using local affine models, i.e., those of the neighboring mesh patches. The proposed method to compute this mosaic representation is tightly coupled with an occlusion adaptive 2-D mesh tracking procedure, which consists of propagating the object mesh frame to frame, and updating of both object and mosaic meshes to optimize texture mapping from the mosaic to each instance of the object. The proposed representation has been applied to video object rendering and editing, including self transfiguration, synthetic transfiguration, and 2-D augmented reality in the presence of self and/or external occlusion. We also provide an algorithm to determine the minimum number of still views needed to reconstruct a replacement mosaic which is needed for synthetic transfiguration. Experimental results are provided to demonstrate both the 2-D mesh-based mosaic synthesis and two different video object editing applications on real video sequences.

  519.   Brigger, P, Hoeg, J, and Unser, M, "B-Spline snakes: A flexible tool for parametric contour detection," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 1484-1496, 2000.

Abstract:   We present a novel formulation for B-spline snakes that can be used as a tool for fast and intuitive contour outlining. We start with a theoretical argument in favor of splines in the traditional formulation by showing that the optimal, curvature-constrained snake is a cubic spline, irrespective of the form of the external energy field. Unfortunately, such regularized snakes suffer from slow convergence speed because of a large number of control points, as well as from difficulties in determining the weight factors associated to the internal energies of the curve. We therefore propose an alternative formulation in which the intrinsic scale of the spline model is adjusted a priori; this leads to a reduction of the number of parameters to be optimized and eliminates the need for internal energies (i.e., the regularization term). In other words, we are now controlling the elasticity of the spline implicitly and rather intuitively by varying the spacing between the spline knots. The theory is embedded into a multiresolution formulation demonstrating improved stability in noisy image environments. Validation results are presented, comparing the traditional snake using internal energies and the proposed approach without internal energies, showing the similar performance of the latter. Several biomedical examples of applications are included to illustrate the versatility of the method.
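
A minimal sketch of the underlying representation may help: a closed uniform cubic B-spline contour is evaluated from a handful of control points, so the spacing between knots (equivalently, the number of control points) implicitly sets the smoothness. This is a generic Python illustration, not the multiresolution optimization of Brigger et al.; the sampling density and the example control points are assumptions.

    import numpy as np

    def cubic_bspline_contour(ctrl_pts, samples_per_span=20):
        """Evaluate a closed uniform cubic B-spline curve from its control points.

        ctrl_pts         : (K, 2) array of control points (the snake parameters)
        samples_per_span : samples per knot span; fewer control points for the
                           same contour length give an implicitly smoother curve
        """
        # Standard uniform cubic B-spline basis matrix.
        M = np.array([[-1,  3, -3, 1],
                      [ 3, -6,  3, 0],
                      [-3,  0,  3, 0],
                      [ 1,  4,  1, 0]], dtype=float) / 6.0
        K = len(ctrl_pts)
        t = np.linspace(0.0, 1.0, samples_per_span, endpoint=False)
        T = np.stack([t**3, t**2, t, np.ones_like(t)], axis=1)   # (S, 4)
        curve = []
        for i in range(K):  # wrap the index for a closed contour
            P = np.array([ctrl_pts[(i + j) % K] for j in range(4)])  # (4, 2)
            curve.append(T @ M @ P)
        return np.vstack(curve)

    # Example: a rough circle outlined by only six control points.
    theta = np.linspace(0, 2 * np.pi, 6, endpoint=False)
    ctrl = np.stack([50 + 30 * np.cos(theta), 50 + 30 * np.sin(theta)], axis=1)
    contour = cubic_bspline_contour(ctrl)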

  520.   Bertalmio, M, Sapiro, G, and Randall, G, "Morphing active contours," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 733-737, 2000.

Abstract:   A method for deforming curves in a given image to a desired position in the second image is introduced in this paper. The algorithm is based on deforming the first image toward the second one via a Partial Differential Equation (PDE), while tracking the deformation of the curves of interest in the first image with an additional, coupled, PDE. The tracking is performed by projecting the velocities of the first equation into the second one. In contrast with previous PDE-based approaches, both the images and the curves on the frames/slices of interest are used for tracking. The technique can be applied to object tracking and sequential segmentation. The topology of the deforming curve can change without any special topology handling procedures added to the scheme. This permits, for example, the automatic tracking of scenes where, due to occlusions, the topology of the objects of interest changes from frame to frame. In addition, this work introduces the concept of projecting velocities to obtain systems of coupled PDEs for image analysis applications. We show examples for object tracking and segmentation of electronic microscopy.

  521.   Nikolaidis, A, and Pitas, I, "Facial feature extraction and pose determination," PATTERN RECOGNITION, vol. 33, pp. 1783-1791, 2000.

Abstract:   A combined approach for facial feature extraction and determination of gaze direction is proposed that employs some improved variations of the adaptive Hough transform for curve detection, minima analysis of feature candidates, template matching for inner facial feature localization, active contour models for inner face contour detection and projective geometry properties for accurate pose determination. The aim is to provide a sufficient set of features for further use in a face recognition or face tracking system. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  522.   Chen, CM, Lu, HHS, and Lin, YC, "An early vision-based snake model for ultrasound image segmentation," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 26, pp. 273-285, 2000.

Abstract:   Due to the speckles and the ill-defined edges of the object of interest, the classic image-segmentation techniques are usually ineffective in segmenting ultrasound (US) images. In this paper, we present a new algorithm for segmenting general US images that is composed of two major techniques, namely, the early-vision model and the discrete-snake model. By simulating human early vision, the early-vision model can capture both grey-scale and textural edges while the speckle noise is suppressed. By performing deformation only on the peaks of the distance map, the discrete-snake model promises better noise immunity and more accurate convergence. Moreover, the constraint for most conventional snake models that the initial contour needs to be located very close to the actual boundary has been relaxed substantially. The performance of the proposed snake model has been shown to be comparable to manual delineation and superior to that of the gradient vector flow (GVF) snake model. (C) 2000 World Federation for Ultrasound in Medicine & Biology.

  523.   Garrido, A, and de la Blanca, NP, "Applying deformable templates for cell image segmentation," PATTERN RECOGNITION, vol. 33, pp. 821-832, 2000.

Abstract:   This paper presents an automatic method, based on the deformable template approach, for cell image segmentation under severe noise conditions. We define a new methodology, dividing the process into three parts: (1) obtain evidence from the image about the location of the cells; (2) use this evidence to calculate an elliptical approximation of these locations; (3) refine cell boundaries using locally deforming models. We have designed a new algorithm to locate cells and propose an energy function to be used together with a stochastic deformable template model. Experimental results show that this approach for segmenting cell images is both fast and robust, and that this methodology may be used for automatic classification as part of a computer-aided medical decision making technique. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  524.   Falcao, AX, Udupa, JK, and Miyazawa, FK, "An ultra-fast user-steered image segmentation paradigm: Live wire on the fly," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 55-62, 2000.

Abstract:   We have been developing general user steered image segmentation strategies for routine use in applications involving a large number of data sets. In the past, we have presented three segmentation paradigms: live wire, live lane, and a three-dimensional (3-D) extension of the live-wire method. In this paper, we introduce an ultra-fast live-wire method, referred to as live wire on the fly, for further reducing user's time compared to the basic live-wire method. In live wire, 3-D/four-dimensional (4-D) object boundaries are segmented in a slice-by-slice fashion. To segment a two-dimensional (2-D) boundary, the user initially picks a point on the boundary and all possible minimum-cost paths from this point to all other points in the image are computed via Dijkstra's algorithm. Subsequently a live wire is displayed in real time from the initial point to any subsequent position taken by the cursor. If the cursor is close to the desired boundary, the live wire snaps on to the boundary. The cursor is then deposited and a new live-wire segment is found next. The entire 2-D boundary is specified via a set of live-wire segments in this fashion. A drawback of this method is that the speed of optimal path computation depends on image size. On modestly powered computers, for images of even modest size, some sluggishness appears in user interaction, which reduces the overall segmentation efficiency. In this work, we solve this problem by exploiting some known properties of graphs to avoid unnecessary minimum-cost path computation during segmentation. In live wire on the fly, when the user selects a point on the boundary the live-wire segment is computed and displayed in real time from the selected point to any subsequent position of the cursor in the image, even for large images and even on low-powered computers. Based on 492 tracing experiments from an actual medical application, we demonstrate that live wire on the fly is 1.3-31 times faster than live wire for actual segmentation for varying image sizes, although the pure computational part alone is found to be about 120 times faster.
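
A hedged sketch of the minimum-cost path machinery underlying live wire follows (plain Dijkstra over an 8-connected pixel graph in Python; the graph-theoretic shortcuts that make the on-the-fly variant fast, and the particular cost features used by the authors, are not reproduced here):

    import heapq
    import numpy as np

    def live_wire_paths(cost, seed):
        """Dijkstra's algorithm from a seed pixel over an 8-connected grid.

        cost : 2-D array of local costs, low on the desired boundary
               (e.g. derived from an inverted gradient magnitude)
        seed : (row, col) boundary point picked by the user
        Returns a predecessor map; following it from any cursor position
        back to the seed yields the 'live wire' segment.
        """
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        pred = {}
        dist[seed] = 0.0
        heap = [(0.0, seed)]
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if d > dist[r, c]:
                continue                      # stale heap entry
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    step = cost[rr, cc] * (1.4142 if dr and dc else 1.0)
                    nd = d + step
                    if nd < dist[rr, cc]:
                        dist[rr, cc] = nd
                        pred[(rr, cc)] = (r, c)
                        heapq.heappush(heap, (nd, (rr, cc)))
        return pred

    def trace(pred, cursor, seed):
        """Follow predecessors from the cursor back to the seed."""
        path = [cursor]
        while path[-1] != seed:
            path.append(pred[path[-1]])
        return path[::-1]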

  525.   Chen, YM, Vemuri, BC, and Wang, L, "Image denoising and segmentation via nonlinear diffusion," COMPUTERS & MATHEMATICS WITH APPLICATIONS, vol. 39, pp. 131-149, 2000.

Abstract:   Image denoising and segmentation are fundamental problems in the field of image processing and computer vision with numerous applications. In this paper, we present a nonlinear PDE-based model for image denoising and segmentation which unifies the popular model of Alvarez, Lions and Morel (ALM) for image denoising and the Caselles, Kimmel and Sapiro model of geodesic "snakes". Our model includes nonlinear diffusive as well as reactive terms and leads to quality denoising and segmentation results as depicted in the experiments presented here. We present a proof for the existence, uniqueness, and stability of the viscosity solution of this PDE-based model. The proof is in spirit similar to the proof of the ALM model; however, there are several differences which arise due to the presence of the reactive terms that require careful treatment/consideration. A fast implementation of our model is realized by embedding the model in a scale space and then achieving the solution via a dynamic system governed by a coupled system of first-order differential equations. The dynamic system finds the solution at a coarse scale and tracks it continuously to a desired fine scale. We demonstrate the smoothing and segmentation results on several real images. (C) 2000 Elsevier Science Ltd. All rights reserved.

  526.   Pollak, I, Willsky, AS, and Krim, H, "Image segmentation and edge enhancement with stabilized inverse diffusion equations," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 256-266, 2000.

Abstract:   We introduce a family of first-order multidimensional ordinary differential equations (ODE's) with discontinuous right-hand sides and demonstrate their applicability in image processing. An equation belonging to this family is an inverse diffusion everywhere except at local extrema, where some stabilization is introduced. For this reason, we call these equations "stabilized inverse diffusion equations" (SIDE's). Existence and uniqueness of solutions, as well as stability, are proven for SIDE's. A SIDE in one spatial dimension may be interpreted as a limiting case of a semi-discretized Perona-Malik equation [14], [15]. In an experimental section, SIDE's are shown to suppress noise while sharpening edges present in the input signal. Their application to image segmentation is also demonstrated.
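
For orientation only (the precise force function of Pollak et al. is not reproduced here), the one-dimensional semi-discretized Perona-Malik form from which this family departs is

    \frac{du_i}{dt} = F(u_{i+1} - u_i) - F(u_i - u_{i-1}),

where an increasing force function F gives a diffusion and a decreasing one an inverse diffusion; in a SIDE, F is taken to be decreasing away from the origin, with a modification at the origin so that local extrema are stabilized instead of blowing up.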

  527.   Lee, WS, and Magnenat-Thalmann, N, "Fast head modeling for animation," IMAGE AND VISION COMPUTING, vol. 18, pp. 355-364, 2000.

Abstract:   This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct a three-dimensional (3D) facial model for animation from two orthogonal pictures taken from front and side views, or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then fine modifications follow if range data is available. Automatic texture mapping is employed using an image composed from the two images. The reconstructed 3D face can be animated immediately with given expression parameters. Several faces obtained by applying one methodology to different input data to get a final animatable face are illustrated. (C) 2000 Elsevier Science B.V. All rights reserved.

  528.   Lengagne, R, Fua, P, and Monga, O, "3D stereo reconstruction of human faces driven by differential constraints," IMAGE AND VISION COMPUTING, vol. 18, pp. 337-343, 2000.

Abstract:   Conventional stereo algorithms often fail in accurately reconstructing a 3D object because the image data do not provide enough information about the geometry of the object. We propose a way to incorporate a priori information in a reconstruction process from a sequence of calibrated face images. A 3D mesh modeling the face is iteratively deformed in order to minimize an energy function. Differential information extracted from the object shape is used to generate an adaptive mesh. We also propose to explicitly incorporate a priori constraints related to the differential properties of the surface where the image information cannot yield an accurate shape recovery. (C) 2000 Elsevier Science B.V. All rights reserved.

  529.   Blank, M, and Kalender, WA, "Medical volume exploration: gaining insights virtually," EUROPEAN JOURNAL OF RADIOLOGY, vol. 33, pp. 161-169, 2000.

Abstract:   Since modern imaging modalities deliver huge amounts of data, which cannot be assessed easily, visualization techniques are utilized to emphasize the structures of interest. To compare them, the different visualization techniques (maximum intensity projection, multiplanar reformations, shaded surface display and volume rendering) are regressed to a common ground whereby their strengths and weaknesses can be revealed. Additionally, medical image analysis can detect anatomical objects in volumetric data sets and provides their descriptions for further use. Usually, segmentation plays a crucial role in that process. There are many segmentation methods, which can be categorized into boundary-based and content-based ones. The extraction of anatomical objects also allows their quantification. Image analysis and visualization do not squeeze more information out of a data volume, but they provide different ways to look at it. As in real life, this alone may enlarge the insight. (C) 2000 Elsevier Science Ireland Ltd. All rights reserved.

  530.   Lei, ZB, and Lin, YT, "3D shape inferencing and modeling for video retrieval," JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, vol. 11, pp. 41-57, 2000.

Abstract:   We present a geometry-based indexing approach for the retrieval of video databases. It consists of two modules: 3D object shape inferencing from video data and geometric modeling from the reconstructed shape structure. A motion-based segmentation algorithm employing feature block tracking and principal component split is used for multi-moving-object motion classification and segmentation. After segmentation, feature blocks from each individual object are used to reconstruct its motion and structure through a factorization method. The estimated shape structure and motion parameters are used to generate the implicit polynomial model for the object. The video data is retrieved using the geometric structure of objects and their spatial relationship. We generalize the 2D string to 3D to compactly encode the spatial relationship of objects. (C) 2000 Academic Press.

  531.   Shishido, O, Yoshida, N, and Umino, O, "Image processing experiments for computer-based three-dimensional reconstruction of neurones from electron micrographs from serial ultrathin sections," JOURNAL OF MICROSCOPY-OXFORD, vol. 197, pp. 224-238, 2000.

Abstract:   This study examined an image processing technique that uses a computer to reconstruct a three-dimensional image of neurones from electron micrographs of serial ultrathin sections. The major problems involved were: (a) a distortion of features in electron micrographs; (b) a significant change of cross-section features of neurones in electron micrographs of neighbouring sections; and (c) disagreement between the electron microscopic section face and the coordinate plane desired for the reconstruction. Electron micrographs of a retinal bipolar cell stained with a biotinylated tracer were used. We corrected the distortion of features by means of a warp, a widely used algorithm in morphing image processing. The change of features between neighbouring electron micrographs was minimized by filling the gaps with an interpolated image produced by a dissolve, another algorithm in morphing, as well as the warp. The distortion of the three-dimensional reconstructed image made by piling up features was corrected by making the image with a wire frame model. Furthermore, in order to estimate a closed contour of features, an active contour model, Snakes, was applied to the electron microscope features. Snakes successfully detected the contour of the target feature, but in some electron microscope images broke into the target feature.

  532.   Rabben, SI, Torp, AH, Stoylen, A, Slordahl, S, Bjornstad, K, Haugen, BO, and Angelsen, B, "Semiautomatic contour detection in ultrasound M-mode images," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 26, pp. 287-296, 2000.

Abstract:   We have developed a method for semiautomatic contour detection in M-mode images. The method combines tissue Doppler and grey-scale data. It was used to detect: 1. the left endocardium of the septum, the endocardium and epicardium of the posterior wall in 16 left ventricular short-axis M-modes, and 2. the mitral ring in 38 anatomical M-modes extracted pair-wise in 19 apical four-chamber cine-loops (healthy subjects). We validated the results by comparing the computer-generated contours with contours manually outlined by four echocardiographers. For all boundaries, the average distance between the computer-generated contours and the manual outlines was smaller than the average distance between the manual outlines. We also calculated left ventricular wall thickness and diameter at end-diastole and end-systole and lateral and septal mitral ring excursions, and found, on average, clinically negligible differences between the computer-generated indices and the same indices based on manual outlines (0.8-1.8 mm). The results were also within published normal values. In conclusion, this initial study showed that it was feasible in a robust and efficient manner to detect continuous wall boundaries in M-mode images so that tracings of left ventricular wall thickness, diameter and long axis could be derived. (C) 2000 World Federation for Ultrasound in Medicine & Biology.

  533.   MacDonald, D, Kabani, N, Avis, D, and Evans, AC, "Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI," NEUROIMAGE, vol. 12, pp. 340-356, 2000.

Abstract:   Automatic computer processing of large multidimensional images such as those produced by magnetic resonance imaging (MRI) is greatly aided by deformable models, which are used to extract, identify, and quantify specific neuroanatomic structures. A general method of deforming polyhedra is presented here, with two novel features. First, explicit prevention of self-intersecting surface geometries is provided, unlike conventional deformable models, which use regularization constraints to discourage but not necessarily prevent such behavior. Second, deformation of multiple surfaces with intersurface proximity constraints allows each surface to help guide other surfaces into place using model-based constraints such as expected thickness of an anatomic surface. These two features are used advantageously to identify automatically the total surface of the outer and inner boundaries of cerebral cortical gray matter from normal human MR images, accurately locating the depths of the sulci, even where noise and partial volume artifacts in the image obscure the visibility of sulci. The extracted surfaces are enforced to be simple two-dimensional manifolds (having the topology of a sphere), even though the data may have topological holes. This automatic 3-D cortex segmentation technique has been applied to 150 normal subjects, simultaneously extracting both the gray/white and gray/cerebrospinal fluid interface from each individual. The collection of surfaces has been used to create a spatial map of the mean and standard deviation for the location and the thickness of cortical gray matter. Three alternative criteria for defining cortical thickness at each cortical location were developed and compared. These results are shown to corroborate published postmortem and in vivo measurements of cortical thickness. (C) 2000 Academic Press.

  534.   Zhong, Y, and Jain, AK, "Object localization using color, texture and shape," PATTERN RECOGNITION, vol. 33, pp. 671-684, 2000.

Abstract:   We address the problem of localizing objects using color, texture and shape. Given a hand-drawn sketch for querying an object shape, and its color and texture, the proposed algorithm automatically searches the image database for objects which meet the query attributes. The database images do not need to be presegmented or annotated. The proposed algorithm operates in two stages. In the first stage, we use local texture and color features to find a small number of candidate images in the database, and identify regions in the candidate images which share similar texture and color as the query. To speed up the processing, the texture and color features are directly extracted from the Discrete Cosine Transform (DCT) compressed domain. In the second stage, we use a deformable template matching method to match the query shape to the image edges at the locations which possess the desired texture and color attributes. This algorithm is different from other content-based image retrieval algorithms in that: (i) no presegmentation of the database images is needed, and (ii) the color and texture features are directly extracted from the compressed images. Experimental results demonstrate the performance of the algorithm and show that substantial computational savings can be achieved by utilizing multiple image cues. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  535.   Suri, JS, Haralick, RM, and Sheehan, FH, "Greedy algorithm for error correction in automatically produced boundaries from low contrast ventriculograms," PATTERN ANALYSIS AND APPLICATIONS, vol. 3, pp. 39-60, 2000.

Abstract:   Non-homogeneous mixing of the dye with the blood in the left ventricle chamber of the heart causes poor contrast in the ventriculograms. The pixel-based classifiers [1] operating on these ventriculograms yield boundaries which are not close to ground truth boundaries as delineated by the cardiologist. They have a mean boundary error of 6.4 mm and an error of 12.5 mm in the apex zone. These errors have a systematic positional and orientational bias, the boundary being under-estimated in the apex zone. This paper discusses two calibration methods, the identical coefficient and the independent coefficient, to remove these systematic biases. From these methods, we constitute a fused algorithm which reduces the boundary error compared to either of the calibration methods. The algorithm, in a greedy way, computes which and how many vertices of the left ventricle boundary can be taken from the computed boundary of each method in order to best improve the performance. The corrected boundaries have a mean error of less than 3.5 mm with a standard deviation of 3.4 mm over the approximately 6 x 10(4) vertices in the data set of 291 studies. Our method reduces the mean boundary error by 2.9 mm over the boundary produced by the classifier. We also show that the calibration algorithm performs better in the apex zone where the dye is unable to propagate. For end diastole, the algorithm reduces the error in the apex zone by 8.5 mm over the pixel-based classifier boundaries.

  536.   Blom, AS, Pilla, JJ, Pusca, SV, Patel, HJ, Dougherty, L, Yuan, Q, Ferrari, VA, Axel, L, and Acker, MA, "Dynamic cardiomyoplasty decreases myocardial workload as assessed by tissue tagged MRI," ASAIO JOURNAL, vol. 46, pp. 556-562, 2000.

Abstract:   The effects of dynamic cardiomyoplasty (CMP) on global and regional left ventricular (LV) function in end-stage heart failure still remain unclear. MRI with tissue-tagging is a novel tool for studying intramyocardial motion and mechanics. To date, no studies have attempted to use MRI to simultaneously study global and regional cardiac function in a model of CMP. In this study, we used MRI with tissue-tagging and a custom designed MR compatible muscle stimulating/pressure monitoring system to assess long axis regional strain and displacement variations, as well as changes in global LV function in a model of dynamic cardiomyoplasty. Three dogs underwent rapid ventricular pacing (RVP; 215 BPM) for 10 weeks; after 4 weeks of RVP, a left posterior CMP was performed. After 1 year of dynamic muscle stimulation, the dogs were imaged in a 1.5 T clinical MR scanner. Unstimulated and muscle stimulated tagged long axis images were acquired. Quantitative 2-D regional image analysis was performed by dividing the hearts into three regions: apical, septal, and lateral. Maximum and minimum principal strains (lambda(1) and lambda(2)) and displacement (D) were determined and pooled for each region. MR LV pressure-volume (PV) loops were also generated. Muscle stimulation produced a leftward shift of the PV loops in two of the three dogs, and an increase in the peak LV pressure, while stroke volume remained unchanged. With stimulation, lambda(1) decreased significantly (p < 0.05) in the lateral region, whereas lambda(2) increased significantly (p < 0.05) in both the lateral and apical regions, indicating a decrease in strain resulting from stimulation. D only increased significantly (p < 0.05) in the apical region. The decrease in strain between unassisted and assisted states indicates the heart is performing less work, while maintaining stroke volume and increasing peak LV pressure. These findings demonstrate that the muscle wrap functions as an active assist, decreasing the workload of the heart, while preserving total pump performance.

  537.   Shen, DG, and Davatzikos, C, "An adaptive-focus deformable model using statistical and geometric information," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 906-913, 2000.

Abstract:   An active contour (snake) model is presented, with emphasis on medical imaging applications. There are three main novelties in the proposed model. First, an attribute vector is used to characterize the geometric structure around each point of the snake model: the deformable model then deforms in a way that seeks regions with similar attribute vectors. This is in contrast to most deformable models, which deform to nearby edges without considering geometric structure, and it was motivated by the need to establish point-correspondences that have anatomical meaning. Second, an adaptive-focus statistical model has been suggested which allows the deformation of the active contour in each stage to be influenced primarily by the most reliable matches. Third, a deformation mechanism that is robust to local minima is proposed by evaluating the snake energy function on segments of the snake at a time, instead of individual points. Various experimental results show the effectiveness of the proposed model.

  538.   Jeon, BK, Jang, JH, and Hong, KS, "Map-based road detection in spaceborne synthetic aperture radar images based on curvilinear structure extraction," OPTICAL ENGINEERING, vol. 39, pp. 2413-2421, 2000.

Abstract:   This paper presents an automatic map-based road detection algorithm for spaceborne synthetic aperture radar (SAR) images. Our goal is to find roads in a SAR image with subpixel accuracy with the help of a digital map. There are location errors of about 20 to 30 pixels between the digital map and the geocoded SAR image, and we adopt a coarse-to-fine strategy to reduce them. In the coarse matching step, we roughly find the locations of roads by a simple search using water areas or a generalized Hough transform based on digital map information. The fine matching step detects roads accurately by using the active contour model (snake). The input of the snake operation is the potential field constructed from the extracted ridges or ravines of curvilinear structures in the SAR image. Experimental results show that our algorithm detects roads with an average error of less than one pixel. (C) 2000 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(00)01309-X].

  539.   Haacke, EM, and Liang, ZP, "Challenges of imaging structure and function with MRI," IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE, vol. 19, pp. 55-62, 2000.


  540.   Mykkanen, JM, Juhola, M, and Ruotsalainen, U, "Extracting VOIs from brain PET images," INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, vol. 58, pp. 51-57, 2000.

Abstract:   A semi-automatic system for determining volumes of interest (VOI) from positron emission tomography (PET) scans of the brain is described. The VOI surface extraction is based on a user selectable threshold and three-dimensional target flood-fill. In contrast to anatomical volume detection approaches, volumes are determined from functional PET images and the obtained objects are checked against anatomical images. The developed VOI program was evaluated with brain FDOPA-PET studies where the striatum was the object. The results were comparable to the entirely manual method and the target extraction time is reduced to about one third of that of the manual method. (C) 2000 Elsevier Science Ireland Ltd. All rights reserved.
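
As a rough illustration of the threshold-plus-flood-fill idea described above (a generic 6-connected 3-D region grow in Python; the seed selection, threshold choice, and any post-processing of the actual VOI program are assumptions):

    from collections import deque
    import numpy as np

    def extract_voi(volume, seed, threshold):
        """Grow a volume of interest from a seed voxel by 6-connected flood fill,
        keeping voxels whose intensity is at or above a user-selected threshold.

        volume    : 3-D array of image intensities (e.g. a PET volume)
        seed      : (z, y, x) voxel inside the target structure
        threshold : intensity cut-off chosen by the user
        Returns a boolean mask of the extracted VOI.
        """
        mask = np.zeros(volume.shape, dtype=bool)
        if volume[seed] < threshold:
            return mask                      # seed itself is below threshold
        queue = deque([seed])
        mask[seed] = True
        nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in nbrs:
                zz, yy, xx = z + dz, y + dy, x + dx
                if (0 <= zz < volume.shape[0] and 0 <= yy < volume.shape[1]
                        and 0 <= xx < volume.shape[2]
                        and not mask[zz, yy, xx]
                        and volume[zz, yy, xx] >= threshold):
                    mask[zz, yy, xx] = True
                    queue.append((zz, yy, xx))
        return mask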

  541.   Maksimovic, R, Stankovic, S, and Milovanovic, D, "Computed tomography image analyzer: 3D reconstruction and segmentation applying active contour models - 'snakes'," INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, vol. 58, pp. 29-37, 2000.

Abstract:   Many diagnostic and therapeutic procedures depend on medical images. In order to overcome imperfections of the obtained images, which are due to the acquisition process, and to extract new information from the available images, many techniques have been developed. In this study, a new method of image segmentation and 3D reconstruction based on active contour models ('snakes') was applied in analyzing computed tomography (CT) images in patients with acute head trauma. Using this method, lesion to brain ratio (LBR) and ventricle to brain ratio (VBR) parameters, as well as a 3D reconstruction of the traumatic lesion, were obtained accurately. In our study group, 215 patients (mean age 42.4 +/- 23.5 years, 138/215 (64.2%) males) were included. Among them, 72 (33.5%) did not survive during hospitalisation in the Emergency Department. LBR correlated with the Glasgow Coma Score and the intrahospital outcome (r = -0.457 and r = 0.515, respectively). Besides, non-survivors had greater LBR values (0.042 +/- 0.034) than survivors (0.005 +/- 0.011). However, VBR did not correlate with these clinical parameters. In addition, LBR was significantly higher in the patients with other pathologic CT findings. The proposed methodology, based on extracting maximum information from available CT scans, could be a basis for further medical decision making in patients with acute head trauma. (C) 2000 Elsevier Science Ireland Ltd. All rights reserved.

  542.   Rodriguez-Sanchez, R, Garcia, JA, Fdez-Valdivia, J, and Fdez-Vidal, XR, "Origins of illusory percepts in digital images," PATTERN RECOGNITION, vol. 33, pp. 2007-2017, 2000.

Abstract:   Here we show the relation between illusory percepts and statistical regularities across scales and orientations. To this aim, the performance of a computational model for the partitioning of statistical regularities is analyzed on several tasks such as long-range boundary completion, phase-induced contour detection, as well as shape and size illusions. The system for the automatically learned partitioning of statistical regularities in 2D images is based on a sophisticated, band-pass, filtering operation, with fixed scale and orientation sensitivity. Experimental results are provided to illustrate this analysis on several examples: (i) Kanizsa-type subjective figures; (ii) phase-induced subjective contours; (iii) the Zollner illusion; and (iv) the Muller-Lyer illusion. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  543.   Yang, WF, and Smith, MR, "Using an MRI distortion transfer function to characterize the ghosts in motion-corrupted images," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 577-584, 2000.

Abstract:   Motion artefact suppression remains an active topic in MRI. In this paper, we suggest that certain nonrigid, or spatially variant, characteristics of motion of an object can be represented by extending the work of Mitsa et al. This empirical extension uses a ghost distortion transfer function (GDTF) applied to the k-space (frequency domain) data. We demonstrate the variety of ghost characteristics that can be generated from various two-dimensional (2-D) GDTF's. The distortion transfer function for periodic motion along the Z-axis can be determined from the nonoverlapped portions of the ghost and central image. It required a GDTF with the shape of a belt bandpass filter to produce an image corresponding to the ghosts of a volunteer's abdomen image corrupted by unknown respiratory motion artefacts. The preliminary results of a composite method of motion artefact suppression are presented. The artefact suppression was successful for ghost images described by a GDTF with a low-pass nature, but less successful with ghosts having a GDTF of a bandpass nature.

  544.   Yoshida, H, and Keserci, B, "Bayesian wavelet snake for computer-aided diagnosis of lung nodules," INTEGRATED COMPUTER-AIDED ENGINEERING, vol. 7, pp. 253-269, 2000.

Abstract:   An edge-guided active contour based on the wavelet transform called the Bayesian wavelet snake has been developed for identifying a closed-contour object with a fuzzy and low-contrast boundary. The wavelet snake is designed to deform its shape based on a maximum a posteriori estimate calculated by the fast wavelet transform. Our new method was applied to a computer-aided diagnosis scheme for detection of pulmonary nodules in digital chest radiographs. In this scheme, a filter based on the edge gradient was employed for enhancement of nodules, followed by creation of multiscale edges by spline wavelets for extraction of portions of the boundary of a candidate nodule. These multiscale edges are then used to "guide" the wavelet snake for estimation of the boundary of the nodule. The degree of overlap between the resulting snake and the multiscale edges was used as a feature for distinguishing nodules from false-positive detections that consist of only normal anatomic structures. The wavelet snake was combined with morphological features by means of an artificial neural network for further reduction of false detections. The performance of our scheme was evaluated by receiver operating characteristic analysis based on a publicly available database of chest radiographs.

  545.   Senasli, M, Garnero, L, Herment, A, and Mousseaux, E, "3D reconstruction of vessel lumen from very few angiograms by dynamic contours using a stochastic approach," GRAPHICAL MODELS, vol. 62, pp. 105-127, 2000.

Abstract:   3D luminal vessel geometry description and visualization are important for the diagnosis and the prognosis of heart attack and stroke. A general mathematical framework is proposed for 3D reconstruction of vessel sections from a few angiograms. Regularization is introduced by modeling the vessel boundary slices by smooth contours to get the reconstruction problem well posed. A dynamic contour approach is applied to optimize the shape of the contour according to the recorded angiograms and the internal smoothness constraints. The solution is achieved following the minimization of a nonconvex energy function assigned to the contour with a simulated annealing algorithm. Preliminary testing on noisy and truncated synthetic images produces promising results. Evaluation and validation of the method on hardware phantoms are also presented. (C) 2000 Academic Press.

  546.   Lurig, C, Kobbelt, L, and Ertl, T, "Hierarchical solutions for the deformable surface problem in visualization," GRAPHICAL MODELS, vol. 62, pp. 2-18, 2000.

Abstract:   In this paper we present a hierarchical approach for the deformable surface technique. This technique is a three-dimensional extension of the snake segmentation method. We use it in the context of visualizing three-dimensional scalar data sets. In contrast to classical indirect volume visualization methods, this reconstruction is not based on iso-values but on boundary information derived from discontinuities in the data. We propose a multilevel adaptive finite difference solver, which generates a target surface minimizing an energy functional based on an internal energy of the surface and an outer energy induced by the gradient of the volume. The method is attractive for preprocessing in numerical simulation or texture mapping. Red-green triangulation allows adaptive refinement of the mesh. Special considerations help to prevent self-interpenetration of the surfaces. We will also show some techniques that introduce the hierarchical aspect into the inhomogeneity of the partial differential equation. The approach proves to be appropriate for data sets that contain a collection of objects separated by distinct boundaries. These kinds of data sets often occur in medical and technical tomography, as we will demonstrate in a few examples. (C) 2000 Academic Press.

  547.   Paragios, N, and Deriche, R, "Geodesic active contours and level sets for the detection and tracking of moving objects," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 266-280, 2000.

Abstract:   This paper presents a new variational framework for detecting and tracking multiple moving objects in image sequences. Motion detection is performed using a statistical framework for which the observed interframe difference density function is approximated using a mixture model. This model is composed of two components, namely, the static (background) and the mobile (moving objects) one. Both components are zero-mean and obey Laplacian or Gaussian law. This statistical framework is used to provide the motion detection boundaries. Additionally, the original frame is used to provide the moving object boundaries. Then, the detection and the tracking problem are addressed in a common framework that employs a geodesic active contour objective function. This function is minimized using a gradient descent method, where a flow deforms the initial curve towards the minimum of the objective function, under the influence of internal and external image dependent forces. Using the level set formulation scheme, complex curves can be detected and tracked while topological changes for the evolving curves are naturally managed. To reduce the computational cost required by a direct implementation of the level set formulation scheme, a new approach named Hermes is proposed. Hermes exploits aspects from the well-known front propagation algorithms (Narrow Band, Fast Marching) and compares favorably to them. Very promising experimental results are provided using real video sequences.
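For reference, the curve evolution that geodesic active contour approaches of this kind implement is commonly written in level-set form (following Caselles, Kimmel, and Sapiro) as

    \frac{\partial \phi}{\partial t} = g(I)\,|\nabla \phi|\left(\operatorname{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right) + c\right) + \nabla g(I)\cdot \nabla \phi,

where \phi embeds the contour as its zero level set, g(I) is a decreasing edge-indicator function of the image I, and c is a constant inflation speed; the Hermes scheme cited above is an acceleration of exactly this type of evolution.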

  548.   Cremers, D, Schnorr, C, Weickert, J, and Schellewald, C, "Diffusion-snakes using statistical shape knowledge," ALGEBRAIC FRAMES FOR THE PERCEPTION-ACTION CYCLE, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1888, pp. 164-174, 2000.

Abstract:   We present a novel extension of the Mumford-Shah functional that makes it possible to incorporate statistical shape knowledge at the computational level of image segmentation. Our approach exhibits various favorable properties: non-local convergence, robustness against noise, and the ability to take into consideration both shape evidence in given image data and knowledge about learned shapes. In particular, the latter property distinguishes our approach from previous work on contour-evolution based image segmentation. Experimental results confirm these properties.
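For context, the Mumford-Shah functional that the diffusion snake extends is usually stated as

    E(u, C) = \int_{\Omega} (u - I)^2 \, dx + \lambda \int_{\Omega \setminus C} |\nabla u|^2 \, dx + \nu\, |C|,

where I is the observed image, u a piecewise-smooth approximation of it, C the discontinuity (contour) set, and |C| its length; the statistical shape knowledge described in the abstract enters as an additional prior on C.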

  549.   Ravi, D, "A new active contour model for shape extraction," MATHEMATICAL METHODS IN THE APPLIED SCIENCES, vol. 23, pp. 709-722, 2000.

Abstract:   We propose a new active contour model for shape extraction of objects in grey-valued two-dimensional images based on an energy-minimization formulation. The energy functional that we consider takes into account the two requirements of object isolation and smoothness of the contour. After deriving the Euler-Lagrange equations corresponding to the energy functional, we bring out some important geometric properties of a solution to these equations. The discussion on our solution method-with the help of which we try to minimize the energy functional by evolving an initial curve-also focuses on how to prescribe the initial curve fully automatically. The effectiveness of our algorithms is demonstrated with the help of experimental results. Copyright (C) 2000 John Wiley & Sons, Ltd.

  550.   Chung, DH, and Sapiro, G, "Segmenting skin lesions with partial-differential-equations-based image processing algorithms," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 763-767, 2000.

Abstract:   In this paper, a partial-differential equations (PDE)-based system for detecting the boundary of skin lesions in digital clinical skin images is presented. The image is first preprocessed via contrast-enhancement and anisotropic diffusion. If the lesion is covered by hairs, a PDE-based continuous morphological filter that removes them is used as an additional preprocessing step. Following these steps, the skin lesion is segmented either by the geodesic active contours model or the geodesic edge tracing approach. These techniques are based on computing, again via PDEs, a geodesic curve in a space defined by the image content. Examples showing the performance of the algorithm are given.

  551.   Xuan, JH, Adali, T, Wang, Y, and Siegel, E, "Automatic detection of foreign objects in computed radiography," JOURNAL OF BIOMEDICAL OPTICS, vol. 5, pp. 425-431, 2000.

Abstract:   This paper presents an effective two-step scheme for automatic object detection in computed radiography (CR) images. First, various structure elements of the morphological filters, designed by incorporating available morphological features of the objects of interest including their sizes and rough shape descriptions, are used to effectively distinguish the foreign object candidates from the complex background structures. Second, since the boundaries of the objects are the key features in reflecting object characteristics, active contour models are employed to accurately outline the morphological shapes of the suspicious foreign objects to further reduce the rate of false alarms. The actual detection scheme is accomplished by jointly using these two steps. The proposed methods are tested with a database of 50 hand-wrist computed radiographic images containing various types of foreign objects. Our experimental results demonstrate that the combined use of morphological filters and active contour models can provide an effective automatic detection of foreign objects in CR images achieving good sensitivity and specificity, and the accurate descriptions of the object morphological characteristics. (C) 2000 Society of Photo-Optical Instrumentation Engineers. [S1083-3668(00)00704-8].

  552.   Ida, T, and Sambonsugi, Y, "Self-affine mapping system and its application to object contour extraction," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 1926-1936, 2000.

Abstract:   A self-affine mapping system which has conventionally been used to produce fractal images is used to fit rough lines to contours. The self-affine map's parameters are detected by analyzing the blockwise self-similarity of a grayscale image using a simplified algorithm in fractal encoding. The phenomenon that edges attract mapping points in self-affine mapping is utilized in the proposed method. The boundary of the foreground region of an alpha mask is fitted by mapping iterations of the region. It is shown that the proposed method accurately produces not only smooth curves but also sharp corners, and has the ability to extract both distinct edges and blurred edges using the same parameter. It is also shown that even large gaps between the hand-drawn line and the contour can be fitted well by the recursive procedure of the proposed algorithm, in which the block size is progressively decreased. These features reduce the time required for drawing contours by hand.

  553.   Suri, JS, "Computer vision, pattern recognition and image processing in left ventricle segmentation: The last 50 years," PATTERN ANALYSIS AND APPLICATIONS, vol. 3, pp. 209-242, 2000.

Abstract:   In the last decade, computer vision, pattern recognition, image processing and cardiac researchers have given immense attention to cardiac image analysis and modelling. This paper surveys state-of-the-art computer vision and pattern recognition techniques for Left Ventricle (LV) segmentation and modelling during the second half of the twentieth century. The paper presents the key characteristics of successful model-based segmentation techniques for LV modelling. This survey paper concludes the following: (1) any one pattern recognition or computer vision technique is not sufficient for accurate 2D, 3D or 4D modelling of LV; (2) fitting mathematical models for LV modelling have dominated in the last 15 years; (3) knowledge extracted from the ground truth has led to very successful attempts at LV modelling; (4) spatial and temporal behaviour of LV through different imaging modalities has yielded information which has led to accurate LV modelling; and (5) not much attention has been paid to LV modelling validation.

  554.   Wang, HY, and Ghosh, B, "Geometric active deformable models in shape modeling," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 9, pp. 302-308, 2000.

Abstract:   This paper analyzes the problem of shape modeling using the principle of active geometric deformable models. While the basic modeling technique already exists in the literature, we highlight many of its drawbacks and discuss their source and steps to overcome them. We propose a new stopping criterion to address the stopping problem. We also propose to apply the level set algorithm to implement the active geometric deformable models, thereby handling topology changes automatically. To alleviate the numerical problems associated with the implementation of the level set algorithm, we propose a new adaptive multigrid narrow band algorithm. All the proposed new changes have been illustrated with experiments on synthetic images and medical images.

  555.   Park, J, and Park, SI, "Strain analysis and visualization: left ventricle of a heart," COMPUTERS & GRAPHICS-UK, vol. 24, pp. 701-714, 2000.

Abstract:   Clinical utility of computational models is crucial in the applications of medical data visualization. Previously we have developed a new class of volumetric models, whose parameters are functions, in conjunction with a physically based deformable modeling framework, and have applied the technique to estimate the left ventricular (LV) wall motion. We have successfully shown that the model parameter functions characterize the LV motion of normal and abnormal states and that no further non-trivial post-processing is required for anatomically meaningful interpretation. In an effort to evaluate the LV model, this paper presents a method and results from a strain analysis based on the nodal displacements of the deformable LV model. Furthermore, in order to visualize the local quantities on the volumetric model for an effective analysis, we also developed a methodology to assist in assessing the cardiac function utilizing principal strains, Von-Mises' yield criteria, and a smoothing filter. Each strain tensor component was in the range of values observed in other reported studies. The application of a smoothing filter on the model improved the visualization of the overall trend of each strain variation. With our platform for a comprehensive strain analysis, we have augmented a clinical utility to the deformable models with parameter functions. (C) 2000 Elsevier Science Ltd. All rights reserved.

  556.   Shin, H, Stamm, G, Hogemann, D, and Galanski, M, "Basic principles of data acquisition and data processing in the construction of high-quality virtual models," RADIOLOGE, vol. 40, pp. 304-312, 2000.

Abstract:   Creating models for virtual reality subdivides into several steps. The aim of the data acquisition is the extraction of nearly isotropic (same resolution in all three axes) data sets with low noise content. An approximate isotropy can be achieved by suitable choice of scan parameters. For raw data reconstruction, the application of high-resolution reconstruction algorithms is prohibited due to increased noise. A missing isotropy can computationally be approximated by interpolation. Further noise suppression is achieved by applying filters. Additionally, the contrast of the object for segmentation can be increased by image processing operators. The correct choice of the segmentation method and the editing tools is essential for a precise segmentation with minimal user interaction. Prior to visualization, smoothing the shape of the segmented model (shape-based or morphological interpolation, polygon reduction of wire frame model) further improves the visual appearance of the 3D model.

  557.   Loreti, P, and March, R, "Propagation of fronts in a nonlinear fourth order equation," EUROPEAN JOURNAL OF APPLIED MATHEMATICS, vol. 11, pp. 203-213, 2000.

Abstract:   We consider a geometric motion associated with the minimization of a curvature dependent functional, which is related to the Willmore functional. Such a functional arises in connection with the image segmentation problem in computer vision theory. We show by using formal asymptotics that the geometric motion can be approximated by the evolution of the zero level set of the solution of a nonlinear fourth-order equation related to the Cahn-Hilliard and Allen-Cahn equations.
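For reference, the curvature-dependent functional referred to here is, for a planar curve \Gamma, of elastica/Willmore type,

    W(\Gamma) = \int_{\Gamma} \left(\alpha + \beta\, \kappa^2\right) ds,

with \kappa the curvature of \Gamma (the surface analogue \int H^2\, dA, with H the mean curvature, is the Willmore functional); the specific weights used by the authors are not stated in the abstract.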

  558.   Kovalski, G, Beyar, R, Shofti, R, and Azhari, H, "Three-dimensional automatic quantitative analysis of intravascular ultrasound images," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 26, pp. 527-537, 2000.

Abstract:   Intravascular ultrasound (IVUS) has established itself as a useful tool for coronary assessment. The vast amount of data obtained by a single IVUS study renders manual analysis impractical for clinical use. A computerized method is needed to accelerate the process and eliminate user-dependency. In this study, a new algorithm is used to identify the lumen border and the media-adventitia border (the external elastic membrane). Setting an initial surface on the IVUS catheter perimeter and using active contour principles, the surface inflates until virtual force equilibrium defined by the surface geometry and image features is reached. The method extracts these features in three dimensions (3-D). Eight IVUS procedures were performed using an automatic pullback device. Using the ECG signal for synchronization, sets of images covering the entire studied region and corresponding to the same cardiac phase were sampled. Lumen and media-adventitia border contours were traced manually and compared to the automatic results obtained by the suggested method. Linear regression results for vessel area enclosed by the lumen and media-adventitia border indicate high correlation between manual vs. automatic tracings (y = 1.07x - 0.38; r = 0.98; SD = 0.112 mm(2); n = 88). These results indicate that the suggested algorithm may potentially provide a clinical tool for accurate lumen and plaque assessment. (C) 2000 World Federation for Ultrasound in Medicine & Biology.

  559.   Magnenat-Thalmann, N, and Cordier, F, "Construction of a human topological model from medical data," IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, vol. 4, pp. 137-143, 2000.

Abstract:   Medical imaging can provide data for useful views of the interior details of human anatomy. In addition to visualization, which in general has been the primary reason for obtaining these data, many other uses are possible. These include modeling of different elements and their inter-relationships (topological modeling), simulation of physical processes, analysis of movements, and validation of models. Here, we describe some of the modeling issues from medical imaging. The issues are particularly related to topological modeling of different anatomical elements: bones, muscles, articulations, etc. A three-dimensional topological modeler is presented with which anatomists and other users can build a topological database containing structural, topological, and mechanical information of anatomical elements.

  560.   Viblis, MK, and Kyriakopoulos, KJ, "Gesture recognition: The gesture segmentation problem," JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, vol. 28, pp. 151-158, 2000.

Abstract:   The gesture segmentation problem is introduced as the first step towards visual gesture recognition, i.e., the detection, analysis, and recognition of gestures from sequences of real images. Our gesture segmentation scheme is composed of two steps: accurate gesture contour tracking in space domain, and continuous tracking in time domain. Experimental results and implementation issues are presented.

  561.   Urayama, S, Matsuda, T, Sugimoto, N, Mizuta, S, Yamada, N, and Uyama, C, "Detailed motion analysis of the left ventricular myocardium using an MR tagging method with a dense grid," MAGNETIC RESONANCE IN MEDICINE, vol. 44, pp. 73-82, 2000.

Abstract:   Detailed analysis of myocardial deformation through a whole cardiac cycle was accomplished using a tagging method with a high-density grid. Four sets of tagged images with a 4-mm-spacing grid were measured by generating four tagging pulses arranged at regular intervals in the cardiac cycle. Through each set of images, tag intersections were tracked semi-automatically. The estimated motions of tag intersections were concatenated so that sequential positions of myocardium were connected through a whole cardiac cycle. In vitro evaluation of the precision of this technique showed that the mean error of tracked 4-mm tag intersections was less than 0.47 +/- 0.17 mm, even on the quite low-contrast images, and the concatenation error caused by double concatenation was comparable to the interpolation error in the subendocardial area obtained with 8-mm tag intersection motion. The small difference between the two mean distance curves of the in vivo evaluation indicated that the method is useful for analyzing heart wall abnormalities. (C) 2000 Wiley-Liss, Inc.

  562.   Samson, C, Blanc-Feraud, L, Aubert, G, and Zerubia, J, "A variational model for image classification and restoration," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 460-472, 2000.

Abstract:   Herein, we present a variational model devoted to image classification coupled with an edge-preserving regularization process. The discrete nature of classification (i.e., to attribute a label to each pixel) has led to the development of many probabilistic image classification models, but rarely to variational ones. In the last decade, the variational approach has proven its efficiency in the field of edge-preserving restoration. In this paper, we add a classification capability which contributes to providing images composed of homogeneous regions with regularized boundaries, a region being defined as a set of pixels belonging to the same class. The soundness of our model is based on the works developed on the phase transition theory in mechanics. The proposed algorithm is fast, easy to implement, and efficient. We compare our results on both synthetic and satellite images with the ones obtained by a stochastic model using a Potts regularization.

  563.   Vemuri, BC, and Guo, YL, "Snake pedals: Compact and versatile geometric models with physics-based control," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, pp. 445-459, 2000.

Abstract:   In this paper, we introduce a novel geometric shape modeling scheme which allows for representation of global and local shape characteristics of an object. Geometric models are traditionally well-suited for representing global shapes without local detail. However, we propose a powerful geometric shape modeling scheme which allows for the representation of global shapes with local detail and permits model shaping as well as topological changes via physics-based control. The proposed modeling scheme consists of representing shapes by pedal curves and surfaces-pedal curves/surfaces are the loci of the foot of perpendiculars to the tangents of a fixed curve/surface from a fixed point called the pedal point. By varying the location of the pedal point, one can synthesize a large class of shapes which exhibit both local and global deformations. We introduce physics-based control for shaping these geometric models by letting the pedal point vary and use a snake to represent the position of this varying pedal point. The model dubbed as a "snake pedal" allows for interactive manipulation via forces applied to the snake. We develop a fast numerical iterative algorithm for shape recovery from image data using this geometric shape modeling scheme. The algorithm involves the Levenberg-Marquardt (LM) method in the outer loop for solving the global parameters and the Alternating Direction Implicit (ADI) method in the inner loop for solving the local parameters of the model. The combination of the global and local scheme leads to an efficient numerical solution to the model fitting problem. We demonstrate the applicability of this modeling scheme via examples of shape synthesis and shape estimation from real image data.
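For reference, the pedal construction described above can be written explicitly: given a generator curve c(t) with unit normal N(t) and a pedal point p, the pedal curve is the locus of the foot of the perpendicular dropped from p onto the tangent line,

    x(t) = p + \big[(c(t) - p)\cdot N(t)\big]\, N(t).

In the cited model the pedal point p is itself represented by a snake, so moving the snake under image forces deforms the entire pedal curve or surface.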

  564.   Shih, WSV, Lin, WC, and Chen, CT, "Volumetric morphologic deformation method for intersubject image registration," INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, vol. 11, pp. 117-124, 2000.

Abstract:   An automated image processing method is proposed for anatomic standardization that can elastically map one subject's magnetic resonance image (MRI) to a standard reference MRI to enable intersubject and cross-group studies. In this method, linear transformations based on bicommissural stereotaxy are first applied to grossly align the input image to the reference image. Then, the candidate corresponding regions in the input image are identified based on the contour information from the presegmented reference image. Next, an active contour model is employed to refine the contour description of the input image. Based on the contour correspondence established in these previous steps, a nonlinear transformation is determined using the proposed weighted local reference coordinate systems to warp the input image. In this method, geometric correspondence established based on contour matching is used to control the warping and the actual image values corresponding to registered coordinates need not be similar. We tested this algorithm on various synthetic and real images for intersubject registration of MRIs. (C) 2000 John Wiley & Sons, Inc.

  565.   Laptev, I, Mayer, H, Lindeberg, T, Eckstein, W, Steger, C, and Baumgartner, A, "Automatic extraction of roads from aerial images based on scale space and snakes," MACHINE VISION AND APPLICATIONS, vol. 12, pp. 23-31, 2000.

Abstract:   We propose a new approach for automatic road extraction from aerial imagery with a model and a strategy mainly based on the multi-scale detection of roads in combination with geometry-constrained edge extraction using snakes. A main advantage of our approach is that it allows, for the first time, the bridging of shadows and partially occluded areas using the heavily disturbed evidence in the image. Additionally, it has only a few parameters to be adjusted. The road network is constructed after extracting crossings with varying shape and topology. We show the feasibility of the approach not only by presenting reasonable results but also by evaluating them quantitatively based on ground truth.

  566.   Grace, AE, Pycock, D, Tillotson, HT, and Snaith, MS, "Active shape from stereo for highway inspection," MACHINE VISION AND APPLICATIONS, vol. 12, pp. 7-15, 2000.

Abstract:   This paper describes an unsupervised algorithm for estimating the 3D profile of potholes in the highway surface, using structured illumination. Structured light is used to accelerate computation and to simplify the estimation of range. A low-resolution edge map is generated so that further processing may be focused on relevant regions of interest. Edge points in each region of interest are used to initialise open, active contour models, which are propagated and refined, via a pyramid, to a higher resolution. At each resolution, internal and external constraints are applied to a snake; the internal constraint is a smoothness function and the external one is a maximum-likelihood estimate of the grey-level response at the edge of each light stripe. Results of a provisional evaluation study indicate that this automated procedure provides estimates of pothole dimension suitable for use in a first, screening, assessment of highway condition.

  567.   Frost, AR, Tillett, RD, and Welch, SK, "The development and evaluation of image analysis procedures for guiding a livestock monitoring sensor placement robot," COMPUTERS AND ELECTRONICS IN AGRICULTURE, vol. 28, pp. 229-242, 2000.

Abstract:   The overall objective of the work described here is to develop a robotic system capable of holding a sensor in contact with any one of a set of pre-determined positions on the body of a loosely constrained live animal. This paper is concerned with generating sets of coordinates corresponding to the target points on the animal's body. The problem was approached using image analysis. Models were established to predict the positions of arbitrary points on the body of a pig from the positions of features in the image of the periphery of the pig, which could be measured automatically. From measurements of the movements of pigs in a feeding stall it was shown that the resultant error in the predicted position of an arbitrary point on the pig's body was comparable to that which could be expected from a human operator. The approach of using image analysis to guide a livestock monitoring sensor placement robot shows considerable promise and is worthy of further investigation. Future work should concentrate on establishing the generality of target point prediction models. (C) 2000 Elsevier Science B.V. All rights reserved.

  568.   Haber, I, Metaxas, DN, and Axel, L, "Using tagged MRI to reconstruct a 3D heartbeat," COMPUTING IN SCIENCE & ENGINEERING, vol. 2, pp. 18-30, 2000.

Abstract:   Magnetic resonance imaging tissue tagging is a decade-old method that lets scientists follow the motion of a beating heart. The method described here reconstructs 3D motion from multiple 2D MRI images to find new information about the right ventricle.

  569.   Tannenbaum, A, "On the eye tracking problem: a challenge for robust control," INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, vol. 10, pp. 875-888, 2000.

Abstract:   Eye tracking is one of the key problems in controlled active vision. Because of modelling uncertainty and noise in the signals, it becomes a challenging problem for robust control. In this paper, we outline some of the key issues involved as well as some possible solutions. We will need to make contact with techniques from machine vision and multi-scale image processing in carrying out this task. In particular, we will sketch some of the necessary methods from computer vision and image processing including optical flow, active contours ('snakes'), and geometric driven flows. The paper will thus have a tutorial flavor as well. Copyright (C) 2000 John Wiley & Sons, Ltd.

  570.   Fan, LX, Santago, P, Jiang, H, and Herrington, DM, "Ultrasound measurement of brachial flow-mediated vasodilator response," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 621-631, 2000.

Abstract:   Brachial artery flow-mediated vasodilation is increasingly used as a measure of endothelial function. High resolution ultrasound provides a noninvasive method to observe this flow-mediated vasodilation by monitoring the diameter of the artery over time following a transient flow stimulus. Since hundreds of ultrasound images are required to continuously monitor brachial diameter for the 2-3 min during which the vasodilator response occurs, an automated diameter estimation is desirable. However, vascular ultrasound images suffer from structural noise caused by the constructive and destructive interference of the backscattered signals, and the true boundaries of interest that define the diameter are frequently obscured by the multiple-layer structure of the vessel wall. These problems make automated diameter estimation strategies based on the detection of the vessel wall boundary difficult. We obtain a robust automated measurement of the vasodilator response by automatically locating the artery using a variable window method, which gives both the lumen center and width. The vessel wall boundary is detected by a global constraint deformable model, which is insensitive to the structural noise in the boundary area. The ambiguity between the desired boundary and other undesired boundaries is resolved by a spatiotemporal strategy. Our method provides excellent reproducibility both for interreader and intrareader analyses of percent change in diameter, and has been successfully used in analyzing over 4000 brachial flow-mediated vasodilation scans from several medical centers in the United States.

  571.   Chiueh, TC, Mitra, T, Neogi, A, and Yang, CK, "Zodiac: A history-based interactive video authoring system," MULTIMEDIA SYSTEMS, vol. 8, pp. 201-211, 2000.

Abstract:   Easy-to-use audio/video authoring tools play a crucial role in moving multimedia software from research curiosity to mainstream applications. However, research in multimedia authoring systems has rarely been documented in the literature. This paper describes the design and implementation of an interactive video authoring system called Zodiac, which employs an innovative edit history abstraction to support several unique editing features not found in existing commercial and research video editing systems. Zodiac provides users a conceptually clean and semantically powerful branching history model of edit operations to organize the authoring process, and to navigate among versions of authored documents. In addition, by analyzing the edit history, Zodiac is able to reliably detect a composed video stream's shot and scene boundaries, which facilitates interactive video browsing. Zodiac also features a video object annotation capability that allows users to associate annotations to moving objects in a video sequence. The annotations themselves could be text, image, audio, or video. Zodiac is built on top of MMFS, a file system specifically designed for interactive multimedia development environments, and implements an internal buffer manager that supports transparent lossless compression/decompression. Shot/scene detection, video object annotation, and buffer management all exploit the edit history information for performance optimization.

  572.   Brejl, M, and Sonka, M, "Object localization and border detection criteria design in edge-based image segmentation: Automated learning from examples," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 973-985, 2000.

Abstract:   This paper provides methodology for fully automated model-based image segmentation. All information necessary to perform image segmentation is automatically derived from a training set that is presented in a form of segmentation examples. The training set is used to construct two models representing the objects: a shape model and a border appearance model. A two-step approach to image segmentation is reported. In the first step, an approximate location of the object of interest is determined. In the second step, accurate border segmentation is performed. The shape-variant Hough transform method was developed that provides robust object localization automatically. It finds objects of arbitrary shape, rotation, or scaling and can handle object variability. The border appearance model was developed to automatically design cost functions that can be used in the segmentation criteria of edge based segmentation methods. Our method was tested in five different segmentation tasks that included 489 objects to be segmented. The final segmentation was compared to manually defined borders with good results [rms errors in pixels: 1.2 (cerebellum), 1.1 (corpus callosum), 1.5 (vertebrae), 1.4 (epicardial), and 1.6 (endocardial) borders]. Two major problems of the state-of-the-art edge based image segmentation algorithms were addressed: strong dependency on a close-to-target initialization, and necessity for manual redesign of segmentation criteria whenever a new segmentation problem is encountered.

  573.   Davies, ER, "Low-level vision requirements," ELECTRONICS & COMMUNICATION ENGINEERING JOURNAL, vol. 12, pp. 197-210, 2000.

Abstract:   This paper aims to help those with some experience of vision to obtain a more in-depth understanding of the problems of low-level vision. As it is not possible to cover everything in a paper of this length, a carefully chosen series of cases and case studies is presented. Relevant principles are brought out and a set of important ground rules is presented by way of summary.

  574.   Treece, GM, Prager, RW, Gee, AH, and Berman, L, "Surface interpolation from sparse cross sections using region correspondence," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 1106-1114, 2000.

Abstract:   The ability to estimate a surface from a set of cross sections allows calculation of the enclosed volume and the display of the surface in three dimensions. This process has increasingly been used to derive useful information from medical data. However, extracting the cross sections (segmenting) can be very difficult, and automatic segmentation methods are not sufficiently robust to handle all situations. Hence, it is an advantage if the surface reconstruction algorithm can work effectively on a small number of cross sections. In addition, cross sections of medical data are often quite complex. Shape-based interpolation is a simple and elegant solution to this problem, although it has known limitations when handling complex shapes. In this paper, the shape-based interpolation paradigm is extended to interpolate a surface through sparse, complex cross sections, providing a significant improvement over our previously published maximal disc-guided interpolation. The performance of this algorithm is demonstrated on various types of medical data (X-ray computed tomography, magnetic resonance imaging and three-dimensional ultrasound). Although the correspondence problem in general remains unsolved, it is demonstrated that correct surfaces can be estimated from a limited amount of real data, through the use of region rather than object correspondence.

  575.   Shiffman, S, Rubin, GD, and Napel, S, "Medical image segmentation using analysis of isolable-contour maps," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, pp. 1064-1074, 2000.

Abstract:   A common challenge for automated segmentation techniques is differentiation between images of close objects that have similar intensities, whose boundaries are often blurred due to partial-volume effects. We propose a novel approach to segmentation of two-dimensional images, which addresses this challenge. Our method, which we call intrinsic shape for segmentation (ISeg), analyzes isolabel-contour maps to identify coherent regions that correspond to major objects. ISeg generates an isolabel-contour map for an image by multilevel thresholding with a fine partition of the intensity range. ISeg detects object boundaries by comparing the shape of neighboring isolabel contours from the map. ISeg requires only little effort from users; it does not require construction of shape models of target objects. In a formal validation with computed-tomography angiography data, we showed that ISeg was more robust than conventional thresholding, and that ISeg's results were comparable to results of manual tracing.

  576.   Sanchez, PJ, Zapata, J, and Ruiz, R, "An active contour model algorithm for tracking endocardiac boundaries in echocardiographic sequences," CRITICAL REVIEWS IN BIOMEDICAL ENGINEERING, vol. 28, pp. 487-492, 2000.

Abstract:   The use of active contour models to track the boundaries of anatomic structures in medical images is a technique that has attracted a great number of efforts during the last decade. Segmentation techniques based on deformable active contours were proposed first by Kass et al.(1) Because of the problems that appear when using these models, some solutions have been introduced, such as the balloon force(2) or the Gradient Vector Flow force (GVF), derived from the Gradient Vector Flow vector field.(3) Results obtained with these forces in the task of tracking endocardial boundaries in echocardiographic sequences were not adequate. We have designed a new external force called hybrid force, which, by combining both forces, joins the main features of each one.
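For context (this citation concerns exactly the external forces discussed on the Gradient Vector Flow page), the GVF field v(x, y) = (u(x, y), v(x, y)) referenced here is, following Xu and Prince, the minimizer of

    \mathcal{E}(\mathbf{v}) = \iint \mu\left(u_x^2 + u_y^2 + v_x^2 + v_y^2\right) + |\nabla f|^2\, |\mathbf{v} - \nabla f|^2 \, dx\, dy,

where f is an edge map of the image and \mu a regularization parameter. The abstract does not say how the balloon and GVF forces are combined in the proposed hybrid force; a weighted sum of the two external force terms is one plausible reading, but the exact form is given only in the paper itself.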

  577.   Positano, V, Mammoliti, R, Santarelli, MF, Landini, L, and Benassi, A, "Nonlinear analysis of carotid artery echographic images," IEE PROCEEDINGS-SCIENCE MEASUREMENT AND TECHNOLOGY, vol. 147, pp. 327-332, 2000.

Abstract:   Nonlinear analysis is applied to identifying complex spatial patterns in echographic images of normal and pathologic carotid arteries. Complexity and entropy measures of normal and atherosclerotic plaques are evaluated to characterise the space-temporal evolution of biological patterns. They are: correlation dimension, Lyapunov exponent and Kolmogorov entropy. The application of principal component analysis to such measures clusters data according to different atherosclerosis severity degrees, which are confirmed by histologic analysis.

  578.   Samson, C, Blanc-Feraud, L, Aubert, G, and Zerubia, J, "A level set model for image classification," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 40, pp. 187-197, 2000.

Abstract:   We present a supervised classification model based on a variational approach. This model is devoted to finding an optimal partition composed of homogeneous classes with regular interfaces. The originality of the proposed approach concerns the definition of a partition by the use of level sets. Each set of regions and boundaries associated with a class is defined by a unique level set function. We use as many level sets as different classes and all these level sets are moving together thanks to forces which interact in order to get an optimal partition. We show how these forces can be defined through the minimization of a unique functional. The coupled Partial Differential Equations (PDE) related to the minimization of the functional are considered through a dynamical scheme. Given an initial interface set (zero level set), the different terms of the PDEs govern the motion of interfaces such that, at convergence, we get an optimal partition as defined above. Each interface is guided by internal forces (regularity of the interface), and external ones (data term, no vacuum, no regions overlapping). Several experiments were conducted on both synthetic and real images.

  579.   Krucker, JF, Meyer, CR, LeCarpentier, GL, Fowlkes, JB, and Carson, PL, "3D spatial compounding of ultrasound images using image-based nonrigid registration," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 26, pp. 1475-1488, 2000.

Abstract:   Medical ultrasound images are often distorted enough to significantly limit resolution during compounding (i.e., summation of images from multiple views). A new, volumetric image registration technique has been used successfully to enable high spatial resolution in three-dimensional (3D) spatial compounding of ultrasound images. Volumetric ultrasound data were acquired by scanning a linear matrix array probe in the elevational direction in a focal lesion phantom and in a breast in vitro. To obtain partly uncorrelated views, the volume of interest was scanned at five different transducer tilt angles separated by 4 degrees to 6 degrees. Pairs of separate views were registered by an automatic procedure based on a mutual information metric, using global full affine and thin-plate spline warping transformations. Registration accuracy was analyzed automatically in the phantom data, and manually in vivo, yielding average registration errors of 0.31 mm and 0.65 mm, respectively. In the vicinity of the warping control points, registrations obtained with warping transformations were significantly more accurate than full affine registrations. Compounded images displayed the expected reduction in speckle noise and increase in contrast-to-noise ratio (CNR), as well as better delineation of connective tissues and reduced shadowing. Compounding also revealed some apparent low contrast lobulations that were not visible in the single-sweep images. Given expected algorithmic and hardware enhancements, nonrigid, image-based registration shows great promise for reducing tissue motion and refraction artifacts in 3D spatial compounding. (C) 2001 World Federation for Ultrasound in Medicine & Biology.

  580.   Lo Presti, L, D'Amato, G, and Sambuelli, L, "Two-dimensional random adaptive sampling for image scanning," IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 38, pp. 2608-2616, 2000.

Abstract:   In this paper, an efficient sampling algorithm for image scanning is proposed, suitable to represent "interesting" objects, defined as a set of spatially close measured values that springs out from a background noise (as in applied geophysics in the process of anomaly detection). This method generates a map of pixels randomly distributed in the plane and able to cover the whole image with a reduced number of points with respect to a regular scanning. Simulation results show that a saving factor of about 50% is obtained without information loss. This result can also be proved by using a simplified model of the sampling mechanism. The algorithm is able to detect the presence of an object emerging from a low energy background and to adapt the sampling interval to the shape of the detected object. In this way, all of the interesting objects are well represented and can be adequately reconstructed, while the coarse sampling in the background produces an imperfect reconstruction. Simulation results show that the method is feasible with good performances and moderate complexity.

  581.   Falcao, AX, and Udupa, JK, "A 3D generalization of user-steered live-wire segmentation," MEDICAL IMAGE ANALYSIS, vol. 4, pp. 389-402, 2000.

Abstract:   We have been developing user-steered image segmentation methods for situations which require considerable human assistance in object definition. In the past, we have presented two paradigms, referred to as live-wire and live-lane, for segmenting 2D/3D/4D object boundaries in a slice-by-slice fashion, and demonstrated that live-wire and live-lane are more repeatable, with a statistical significance level of P < 0.03, and are 1.5-2.5 times faster, with a statistical significance level of P < 0.02, than manual tracing. In this paper, we introduce a 3D generalization of the live-wire approach for segmenting 3D/4D object boundaries which further reduces the time spent by the user in segmentation. In a 2D live-wire, given a slice, for two specified points (pixel vertices) on the boundary of the object, the best boundary segment is the minimum-cost path between the two points, described as a set of oriented pixel edges. This segment is found via Dijkstra's algorithm as the user anchors the first point and moves the cursor to indicate the second point. A complete 2D boundary is identified as a set of consecutive boundary segments forming a "closed", "connected", "oriented" contour. The strategy of the 3D extension is that, first, users specify contours via live-wiring on a few slices that are orthogonal to the natural slices of the original scene. If these slices are selected strategically, then we have a sufficient number of points on the 3D boundary of the object to subsequently trace optimum boundary segments automatically in all natural slices of the 3D scene. A 3D object boundary may define multiple 2D boundaries per slice. The points on each 2D boundary form an ordered set such that when the best boundary segment is computed between each pair of consecutive points, a closed, connected, oriented boundary results. The ordered set of points on each 2D boundary is found from the way the users select the orthogonal slices. Based on several validation studies involving segmentation of the bones of the foot in MR images, we found that the 3D extension of live-wire is more repeatable, with a statistical significance level of P < 0.0001, and 2-6 times faster, with a statistical significance level of P < 0.01, than the 2D live-wire method, and 3-15 times faster than manual tracing. (C) 2000 Elsevier Science B.V. All rights reserved.
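As an illustration of the graph-search step that live-wire methods rest on (a minimal sketch only: the per-pixel cost below is made up, whereas the cited method works on an oriented pixel-edge graph with trained cost terms), Dijkstra's algorithm on a 4-connected grid can be written as follows in Python:

import heapq
import numpy as np

def livewire_path(cost, start, goal):
    """Minimum-cost 4-connected path between two pixels via Dijkstra.

    cost : 2D array of per-pixel local costs (e.g. inverse gradient
           magnitude), a stand-in for the oriented pixel-edge costs
           used by live-wire; start, goal : (row, col) tuples.
    """
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk back from goal to start to recover the boundary segment
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy example: the cheap path follows the low-cost "edge" along row 2
cost = np.ones((5, 5)); cost[2, :] = 0.1
print(livewire_path(cost, (2, 0), (2, 4)))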

  582.   Audette, MA, Ferrie, FP, and Peters, TM, "An algorithmic overview of surface registration techniques for medical imaging," MEDICAL IMAGE ANALYSIS, vol. 4, pp. 201-217, 2000.

Abstract:   This paper presents a literature survey of automatic 3D surface registration techniques emphasizing the mathematical and algorithmic underpinnings of the subject. The relevance of surface registration to medical imaging is that there is much useful anatomical information in the form of collected surface points which originate from complementary modalities and which must be reconciled. Surface registration can be roughly partitioned into three issues: choice of transformation, elaboration of surface representation and similarity criterion, and matching and global optimization. The first issue concerns the assumptions made about the nature of relationships between the two modalities, e.g. whether a rigid-body assumption applies, and if not, what type and how general a relation optimally maps one modality onto the other. The second issue determines what type of information we extract from the 3D surfaces, which typically characterizes their local or global shape, and how we organize this information into a representation of the surface which will lead to improved efficiency and robustness in the last stage. The last issue pertains to how we exploit this information to estimate the transformation which best aligns local primitives in a globally consistent manner or which maximizes a measure of the similarity in global shape of two surfaces. Within this framework, this paper discusses in detail each surface registration issue and reviews the state-of-the-art among existing techniques. (C) 2000 Elsevier Science B.V. All rights reserved.

 
2001

  583.   Ferrant, M, Nabavi, A, Macq, B, Jolesz, FA, Kikinis, R, and Warfield, SK, "Registration of 3-D intraoperative MR images of the brain using a finite-element biomechanical model," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 1384-1397, 2001.

Abstract:   We present a new algorithm for the nonrigid registration of three-dimensional magnetic resonance (MR) intraoperative image sequences showing brain shift. The algorithm tracks key surfaces of objects (cortical surface and the lateral ventricles) in the image sequence using a deformable surface matching algorithm. The volumetric deformation field of the objects is then inferred from the displacements at the boundary surfaces using a linear elastic biomechanical finite-element model. Two experiments on synthetic image sequences are presented, as well as an initial experiment on intraoperative MR images showing brain shift. The results of the registration algorithm show a good correlation of the internal brain structures after deformation, and a good capability of measuring surface as well as subsurface shift. We measured distances between landmarks in the deformed initial image and the corresponding landmarks in the target scan. Cortical surface shifts of up to 10 mm and subsurface shifts of up to 6 mm were recovered with an accuracy of 1 mm or less and 3 mm or less, respectively.

  584.   Choi, WP, Lam, KM, and Siu, WC, "An adaptive active contour model for highly irregular boundaries," PATTERN RECOGNITION, vol. 34, pp. 323-331, 2001.

Abstract:   Snake is an active contour model for representing image contours. In this paper, we propose an efficient active contour model which can represent highly irregular boundaries. The algorithm includes an adaptive force along the contour, and adjusts the number of points for the snake according to the desired boundary. A better stopping criterion based on the area of a closed contour is devised. Furthermore, in this method, a contour can break automatically to represent the contours of multiple objects. Experiments show that this method can extract object boundaries accurately and efficiently. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  585.   Germain, O, and Refregier, P, "Edge location in SAR images: Performance of the likelihood ratio filter and accuracy improvement with an active contour approach," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 72-78, 2001.

Abstract:   The likelihood ratio edge detector is an efficient filter for the segmentation of synthetic aperture radar (SAR) images. We show that this filter provides a biased location of the edge when the window does not have the same orientation as the edge. A phenomenological model is proposed to characterize this bias. We then introduce an efficient technique to refine edge location: the statistical active contour. The combination of these two methods makes it possible to achieve accurate and regularized edge location.

  586.   Ojala, T, Nappi, J, and Nevalainen, O, "Accurate segmentation of the breast region from digitized mammograms," COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 25, pp. 47-59, 2001.

Abstract:   The segmentation of a digital mammogram into the breast region and the background is a necessary prerequisite in computer-assisted diagnosis of mammograms. By the exclusion of the background region, the accuracy of the analysis is increased and the running-time is decreased. The algorithm which segments the breast region from the background should be fully automated and give correct results for all kinds of digitized mammograms, including low-quality images. In this paper we present such an algorithm based on histogram thresholding, morphological filtering and contour modeling. Quantitative test results indicate that the computed boundary follows the estimated boundary accurately. (C) 2000 Elsevier Science Ltd. All rights reserved.

  587.   Fornefett, M, Rohr, K, and Stiehl, HS, "Radial basis functions with compact support for elastic registration of medical images," IMAGE AND VISION COMPUTING, vol. 19, pp. 87-96, 2001.

Abstract:   Common elastic registration schemes based on landmarks and radial basis functions (RBFs) such as thin-plate splines or multiquadrics are global. Here, we introduce radial basis functions with compact support for elastic registration of medical images which have improved locality, i.e., which allow elastic deformations to be constrained to image parts where required. We give the theoretical background of these basis functions and compare them with other basis functions w.r.t. locality, solvability, and efficiency. A detailed comparison with the Gaussian as well as conditions for preserving topology is given. An important property of the used RBFs (Wendland's psi-functions) is that they are positive definite. Therefore, in comparison to the use of the truncated Gaussian, the solvability of the resulting system of equations is always guaranteed. We demonstrate the applicability of our approach for synthetic as well as for 2D and 3D tomographic images. (C) 2001 Elsevier Science B.V. All rights reserved.
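For reference, a representative member of the compactly supported Wendland family used in this work is

    \psi_{3,1}(r) = (1 - r)_+^4\, (4r + 1),

which is C^2, positive definite on R^3 (and hence on R^2), and identically zero for r >= 1; rescaling r by a support radius confines the induced elastic deformation to a neighborhood of each landmark.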

  588.   Chan, TF, and Vese, LA, "Active contours without edges," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 266-277, 2001.

Abstract:   In this paper, we propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a "mean-curvature flow"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We will give a numerical algorithm using finite differences. Finally, we will present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.
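For reference, the energy minimized by this model (the Chan-Vese functional) is

    F(c_1, c_2, C) = \mu\, \mathrm{Length}(C) + \nu\, \mathrm{Area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |I - c_1|^2\, dx + \lambda_2 \int_{\mathrm{outside}(C)} |I - c_2|^2\, dx,

where c_1 and c_2 are the average intensities inside and outside the evolving contour C; no image gradient appears, which is why the model can recover boundaries that are not defined by edges.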

  589.   Kim, W, Lee, CY, and Lee, JJ, "Tracking moving object using Snake's jump based on image flow," MECHATRONICS, vol. 11, pp. 199-226, 2001.

Abstract:   An active contour model, Snake, was developed as a useful segmenting and tracking tool for rigid or non-rigid (i.e., deformable) objects by Kass in 1987. Snake is designed on the basis of Snake energies. Segmenting and tracking can be executed successfully by the process of energy minimization. The ability to contract is an important process for segmenting objects from images, but the contraction forces of Kass' Snake are dependent on the object's form. In this research, a new contraction energy, independent of the object's form, is proposed for the better segmentation of objects. Kass' Snake can be applied to the case of small changes between images because its solutions can be achieved on the basis of a variational approach. If a somewhat fast moving object exists in successive images, Kass' Snake will not operate well because the moving object may have large differences in its position or form between successive images. Snake's nodes may fall into local minima in their motion to the new positions of the target object in the next image. When the motion is too large to apply image flow energy to tracking, a jump mode is proposed for solving the problem. The vector used to make Snake's nodes jump to the new location can be obtained by processing the image flow. The effectiveness of the proposed Snake is confirmed by some simulations. (C) 2000 Published by Elsevier Science Ltd.
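For context, the Kass snake energy that this modification starts from is

    E_{\mathrm{snake}} = \int_0^1 \tfrac{1}{2}\left(\alpha\, |\mathbf{x}'(s)|^2 + \beta\, |\mathbf{x}''(s)|^2\right) + E_{\mathrm{ext}}\big(\mathbf{x}(s)\big)\, ds,

where \mathbf{x}(s) is the parameterized contour, \alpha and \beta weight tension and rigidity, and E_{\mathrm{ext}} is derived from the image; the new contraction and image-flow energies described in the abstract are additional energy terms of this sort.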

  590.   Dubuisson-Jolly, MP, and Gupta, A, "Tracking deformable templates using a shortest path algorithm," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 81, pp. 26-45, 2001.

Abstract:   This paper proposes a new technique to track deformable templates. We extend the typical graph algorithms that have been used for active contour recovery to incorporate shape information. The advantage of graph algorithms is that they are guaranteed to find the global minimum of the energy function. The difficulty with their traditional use for active contours is that they consider only two pixels at a time when recovering the contour, making it impossible to enforce shape constraints. We define the deformable template as a polygonal contour, demonstrate the proper mapping between the image, the contour, and a graph, and show how to apply Dijkstra's algorithm to track contours in image sequences. Examples are shown for deforming contours, articulated objects, and smooth contours being tracked in simple and complicated backgrounds. We also provide an analysis of the computational requirements. (C) 2001 Academic Press.

  591.   Park, H, Schoepflin, T, and Kim, Y, "Active contour model with gradient directional information: Directional snake," IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 11, pp. 252-256, 2001.

Abstract:   Active contours or snakes are an effective edge-based method in segmenting an object of interest. However, the segmented boundary of a moving object in one video frame may lie far from the same moving object in the next frame due to its rapid motion, causing the snake to converge on the wrong edges. To guide the snake toward the appropriate edges, we have added gradient-directional information to the external image force to create a "directional snake." Thus, in minimizing the snake energy, the new method considers both the gradient strength and gradient direction of the image. Experimental results demonstrate that the directional snake can provide a better segmentation than the conventional method in certain situations, e.g., when there are multiple edge candidates in the neighborhood with different directions. The directional snake is significant because it provides a framework to incorporate directional information in digital video segmentation.
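The abstract does not give the exact form of the directional term; one hedged reading (an illustration only, not necessarily the authors' formulation) is an external energy modulated by the agreement between the image gradient direction and the expected boundary orientation, for example

    E_{\mathrm{ext}}(\mathbf{x}) = -\,|\nabla I(\mathbf{x})|\, \max\!\big(0, \cos \theta(\mathbf{x})\big),

where \theta is the angle between \nabla I and the outward normal of the evolving contour, so that edges of the wrong polarity contribute no attraction.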

  592.   Gotte, MJW, van Rossum, AC, Twisk, JWR, Kuijer, JPA, Marcus, JT, and Visser, CA, "Quantification of regional contractile function after infarction: Strain analysis superior to wall thickening analysis in discriminating infarct from remote myocardium," JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY, vol. 37, pp. 808-817, 2001.

Abstract:   OBJECTIVES Using two-dimensional wall thickening (WT) (expressed as percentage) and strain analysis, regional contractile myocardial function was quantified and compared in 13 control subjects and 13 patients with a first myocardial infarction (MI). The findings in the patient group were related to global ventricular function and infarct size. BACKGROUND In patients with coronary artery disease, regions with dysfunctional myocardium cannot be differentiated easily from regions with normal function by planar WT analysis. Physiologic factors, in combination with limitations of conventional imaging techniques, affect the calculation of WT. Quantitative assessment of contractile function by magnetic resonance (MR) tissue tagging and strain analysis may be less affected by these factors. METHODS Two-dimensional regional WT and strain were calculated in three short-axis MR cine and tagged images, respectively. Left ventricular volumes and ejection fraction (EF) were obtained from a series of contiguous short-axis cine images. RESULTS In patients with infarct-related ventricles, WT and strain analysis both revealed reduced myocardial function, as compared with control subjects (p < 0.005 and p < 0.001, respectively). However, WT analysis yielded no significant regional differences in function between infarct-related and remote myocardium (p = 0.064), whereas strain analysis did (p < 0.005). For detecting dysfunctional myocardium of electrocardiographically and angiographically defined infarct areas, WT analysis had a sensitivity of 69% and a specificity of 92%, whereas strain analysis demonstrated a sensitivity of 92% and a specificity of 99%. The EF correlated with WT (r = 0.76, p < 0.005) and strain (r = 0.89, p < 0.001). CONCLUSIONS Two-dimensional strain analysis is more accurate than planar WT analysis in discriminating dysfunctional from functional myocardium, and it provides a strong correlation between regional myocardial and global ventricular function. (J Am Coll Cardiol 2001;37:808-17) (C) 2001 by the American College of Cardiology.

  593.   Abu-Gharbieh, R, Hamarneh, G, Gustavsson, T, and Kaminski, CF, "Flame front tracking by laser induced fluorescence spectroscopy and advanced image analysis," OPTICS EXPRESS, vol. 8, pp. 278-287, 2001.

Abstract:   This paper presents advanced image analysis methods for extracting information from high speed Planar Laser Induced Fluorescence (PLIF) data obtained from turbulent flames. The application of non-linear anisotropic diffusion filtering and of Active Contour Models (Snakes) is described to isolate flame boundaries. In a subsequent step, the detected flame boundaries are tracked in time using a frequency domain contour interpolation scheme. The implementations of the methods are described and possible applications of the techniques are discussed. (C) 2001 Optical Society of America.

  594.   Inglis, IM, and Gray, AJ, "An evaluation of semiautomatic approaches to contour segmentation applied to fungal hyphae," BIOMETRICS, vol. 57, pp. 232-239, 2001.

Abstract:   Semiautomatic image analysis techniques are particularly useful in biological applications, which commonly generate very complex images, and offer considerable flexibility. However, systematic study of such methods is lacking; most research develops fully automatic algorithms. This paper describes a study to evaluate several different semiautomatic or computer-assisted approaches to contour segmentation within the context of segmenting degraded images of fungal hyphae. Four different types of contour segmentation method, with varying degrees and types of user input, are outlined and applied to hyphal images. The methods are evaluated both quantitatively and qualitatively by comparing results obtained by several test subjects segmenting simulated images qualitatively similar to the hyphal images of interest. An active contour model approach, using control points, emerges as the method to be preferred to three more traditional approaches. Feedback from the image provider indicates that any of the methods described has something useful to offer for segmentation of hyphae.

  595.   Mignotte, M, and Meunier, J, "A multiscale optimization approach for the dynamic contour-based boundary detection issue," COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 25, pp. 265-275, 2001.

Abstract:   We present a new multiscale approach for deformable contour optimization. The method relies on a multigrid minimization method and a coarse-to-fine relaxation algorithm. This approach consists of minimizing a cascade of optimization problems of reduced and increasing complexity instead of considering the minimization problem on the full and original configuration space. Contrary to classical multiresolution algorithms, no reduction of image is applied. The family of energy functions is derived from the original (full resolution) objective function, ensuring that the same function is handled at each scale and that the energy decreases at each step of the deformable contour minimization process. The efficiency and speed of this multiscale optimization strategy are demonstrated in the difficult context of the minimization of a region-based contour energy function ensuring the boundary detection of anatomical structures in ultrasound medical imagery. In this context, the proposed multiscale segmentation method is compared to other classical region-based segmentation approaches such as Maximum Likelihood or Markov Random Field-based segmentation techniques. We also extend this multiscale segmentation strategy to active contour models using a classical edge-based likelihood approach. Finally, time and performance analysis of this approach, compared to the (commonly used) dynamic programming-based optimization procedure, is given and confirms the accuracy and speed of the proposed method. (C) 2001 Elsevier Science Ltd. All rights reserved.
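
One way such a coarse-to-fine relaxation without image reduction might be realized is sketched below: a greedy contour relaxation whose search radius shrinks from coarse to fine while the energy map is always evaluated at full resolution. The names, the greedy window search, and the radius schedule are illustrative assumptions, not the authors' multigrid scheme.

```python
import numpy as np

def greedy_relax(contour, energy_map, radius):
    """One greedy pass: move each contour point (an N x 2 integer array of
    (row, col) positions) to the lowest-energy pixel inside a
    (2*radius+1)^2 window around it."""
    h, w = energy_map.shape
    new_contour = contour.copy()
    for i, (y, x) in enumerate(contour):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = energy_map[y0:y1, x0:x1]
        dy, dx = np.unravel_index(np.argmin(window), window.shape)
        new_contour[i] = (y0 + dy, x0 + dx)
    return new_contour

def coarse_to_fine_relax(contour, energy_map, radii=(8, 4, 2, 1)):
    """Minimize a cascade of searches of decreasing radius; the energy map
    itself is never downsampled (no image reduction)."""
    for r in radii:
        contour = greedy_relax(contour, energy_map, r)
    return contour
```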

  596.   Frangi, AF, Niessen, WJ, and Viergever, MA, "Three-dimensional modeling for functional analysis of cardiac images: A review," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 2-25, 2001.

Abstract:   Three-dimensional (3-D) imaging of the heart is a rapidly developing area of research in medical imaging. Advances in hardware and methods for fast spatio-temporal cardiac imaging are extending the frontiers of clinical diagnosis and research on cardiovascular diseases. In the last few years, many approaches have been proposed to analyze images and extract parameters of cardiac shape and function from a variety of cardiac imaging modalities. In particular, techniques based on spatio-temporal geometric models have received considerable attention. This paper surveys the literature of two decades of research on cardiac modeling. The contribution of the paper is three-fold: 1) to serve as a tutorial of the field for both clinicians and technologists, 2) to provide an extensive account of modeling techniques in a comprehensive and systematic manner, and 3) to critically review these approaches in terms of their performance and degree of clinical evaluation with respect to the final goal of cardiac functional analysis. From this review it is concluded that whereas 3-D model-based approaches have the capability to improve the diagnostic value of cardiac images, issues such as robustness, 3-D interaction, computational complexity and clinical validation still require significant attention.

  597.   Yabuki, N, Matsuda, Y, Ota, M, Sumi, Y, Fukui, Y, and Miki, S, "Improvement of active net model for region detection in an image," IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, vol. E84A, pp. 720-726, 2001.

Abstract:   Processes in image recognition include target detection and shape extraction. Active Net has been proposed as one of the methods for such processing. It treats target detection in an image as an energy optimization problem. In this paper, a problem of the conventional Active Net is presented and a new Active Net is proposed. The new net improves the ability to detect a target. Finally, the validity of the proposed net is confirmed by experimental results.

  598.   Heller, EN, Staib, LH, Dione, DP, Constable, RT, Shi, CQX, Duncan, JS, and Sinusas, AJ, "A new method for quantification of spatial and temporal parameters of endocardial motion: Evaluation of experimental infarction using magnetic resonance imaging," CANADIAN JOURNAL OF CARDIOLOGY, vol. 17, pp. 309-318, 2001.

Abstract:   BACKGROUND: With the development of high-resolution myocardial imaging there has evolved a need for automated techniques that can accurately quantify regional function. OBJECTIVE: To develop a new method for quantification of spatial and temporal parameters of endocardial motion. DESIGN: Magnetic resonance images were analyzed using a unique, shape-based approach that tracks endocardial surface motion at defined points through the cardiac cycle by minimizing the bending energy. SETTING: Animal instrumentation was performed in the Nuclear Cardiology Experimental Research Laboratory at Yale University, New Haven, Connecticut. Magnetic resonance imaging was performed at the Yale New Haven Hospital Center. ANIMALS: Eight mongrel canines were used. INTERVENTIONS: Electrocardiograph gated, gradient-echo magnetic resonance images were obtained before and after occlusion of the left anterior descending coronary artery. Thirty-two points along automatically defined endocardial contours were tracked. Average displacements and cumulative path lengths were computed from end-diastole for each point over the entire cardiac cycle. The average cumulative path length was computed for each of four quarters of systole for the normal, border and infarct zones. Shape-based parameters of systolic motion were compared with the centreline approach. Infarct zone was defined by postmortem histochemical staining. MAIN RESULTS: Displacement and cumulative path length over the cardiac cycle decreased significantly in the infarct and border zones (P<0.05), but did not change in the normal zone (P was not significant). Temporal changes in motion were observed in all zones. Displacement measured using the shape based algorithm was more consistent than cumulative path length when compared with systolic motion measured using the centreline method. CONCLUSIONS: An automated, shape-based approach permits quantitative evaluation of both spatial and temporal parameters of regional endocardial motion from high-resolution electrocardiograph gated images. Analysis of endocardial motion and cumulative motion over the entire cardiac cycle discriminated infarcted from normal and border regions.

  599.   De Solorzano, CO, Malladi, R, Lelievre, SA, and Lockett, SJ, "Segmentation of nuclei and cells using membrane related protein markers," JOURNAL OF MICROSCOPY-OXFORD, vol. 201, pp. 404-415, 2001.

Abstract:   Segmenting individual cell nuclei from microscope images normally involves volume labelling of the nuclei with a DNA stain. However, this method often fails when the nuclei are tightly clustered in the tissue, because there is little evidence from the images on where the borders of the nuclei are. In this paper we present a method which solves this limitation and furthermore enables segmentation of whole cells. Instead of using volume stains, we used stains that specifically label the surface of nuclei or cells: lamins for the nuclear envelope and alpha-6 or beta-1 integrins for the cellular surface. The segmentation is performed by identifying unique seeds for each nucleus/cell and expanding the boundaries of the seeds until they reach the limits of the nucleus/cell, as delimited by the lamin or integrin staining, using gradient-curvature flow techniques. We tested the algorithm using computer-generated objects to evaluate its robustness against noise and applied it to cells in culture and to tissue specimens. In all the cases that we present the algorithm gave accurate results.

  600.   Little, JJ, and Shi, P, "Structural lines, TINs, and DEMs," ALGORITHMICA, vol. 30, pp. 243-263, 2001.

Abstract:   The standard method of building compact triangulated surface approximations to terrain surfaces (TINs) from dense digital elevation models (DEMs) adds points to an initial sparse triangulation or removes points from a dense initial mesh. Instead, we find structural lines to act as the initial skeleton of the triangulation. These lines are based on local curvature of the surface, not on the flow of water. We build TINs from DEMs with points and structural lines. These experiments show that initializing the TIN with structural lines at the correct scale creates a TIN with fewer points given a particular approximation error. Structural lines are especially effective for small numbers of points and correspondingly rougher approximations.

  601.   Shearer, K, Wong, KD, and Venkatesh, S, "Combining multiple tracking algorithms for improved general performance," PATTERN RECOGNITION, vol. 34, pp. 1257-1269, 2001.

Abstract:   Automated tracking of objects through a sequence of images has remained one of the difficult problems in computer vision. Numerous algorithms and techniques have been proposed for this task. Some algorithms perform well in restricted environments, such as tracking using stationary cameras, but a general solution is not currently available. A frequent problem is that when an algorithm is refined for one application, it becomes unsuitable for other applications. This paper proposes a general tracking system based on a different approach. Rather than refine one algorithm for a specific tracking task, two tracking algorithms are employed, and used to correct each other during the tracking task. By choosing the two algorithms such that they have complementary failure modes, a robust algorithm is created without increased specialisation. (C) 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  602.   Zahalka, A, and Fenster, A, "An automated segmentation method for three-dimensional carotid ultrasound images," PHYSICS IN MEDICINE AND BIOLOGY, vol. 46, pp. 1321-1342, 2001.

Abstract:   We have developed an automated segmentation method for three-dimensional vascular ultrasound images. The method consists of two steps: an automated initial contour identification, followed by application of a geometrically deformable model (GDM). The formation of the initial contours requires the input of a single seed point by the user, and was shown to be insensitive to the placement of the seed within a structure. The GDM minimizes contour energy, providing a smoothed final result. It requires only three simple parameters, all with easily selectable values. The algorithm is fast, performing segmentation on a 336 x 352 x 200 volume in 25 s when running on a 100 MHz 9500 Power Macintosh prototype. The segmentation algorithm was tested on stenosed vessel phantoms with known geometry, and the segmentation of the cross-sectional areas was found to be within 3% of the true area. The algorithm was also applied to two sets of patient carotid images, one acquired with a mechanical scanner and the other with a freehand scanning system, with good results on both.

  603.   Pardo, XM, Carreira, MJ, Mosquera, A, and Cabello, D, "A snake for CT image segmentation integrating region and edge information," IMAGE AND VISION COMPUTING, vol. 19, pp. 461-475, 2001.

Abstract:   The 3D representation and solid modeling of knee bone structures taken from computed tomography (CT) scans are necessary processes in many medical applications. The construction of the 3D model is generally carried out by stacking the contours obtained from a 2D segmentation of each CT slice, so the quality of the 3D model strongly depends on the precision of this segmentation process. In this work we present a deformable contour method for the problem of automatically delineating the external bone (tibia and fibula) contours from a set of CT scan images. We have introduced a new region potential term and an edge focusing strategy that diminish the problems that the classical snake method presents when it is applied to the segmentation of CT images. We introduce knowledge about the location of the object of interest and knowledge about the behavior of edges in scale space, in order to enhance edge information. We also introduce region information aimed at complementing edge information. The novelty is that the new region potential does not rely on prior knowledge about image statistics; the desired features are derived from the segmentation in the previous slice of the 3D sequence. Finally, we show examples of 3D reconstruction demonstrating the validity of our model. The performance of our method was visually and quantitatively validated by experts. (C) 2001 Elsevier Science B.V. All rights reserved.

  604.   Francois, ARJ, and Medioni, GG, "Interactive 3D model extraction from a single image," IMAGE AND VISION COMPUTING, vol. 19, pp. 317-328, 2001.

Abstract:   We present a system at the junction between Computer Vision and Computer Graphics, to produce a three-dimensional (3D) model of an object as observed in a single image, with a minimum of high-level interaction from a user. The input to our system is a single image. First, the user points, coarsely, at image features (edges) that are subsequently automatically and reproducibly extracted in real-time. The user then performs a high level labeling of the curves (e.g. limb edge, cross-section) and specifies relations between edges (e.g. symmetry, surface or part). NURBS are used as working representation of image edges. The objects described by the user specified, qualitative relationships are then reconstructed either as a set of connected parts modeled as Generalized Cylinders, or as a set of 3D surfaces for 3D bilateral symmetric objects. In both cases, the texture is also extracted from the image. Our system runs in realtime on a PC. (C) 2001 Elsevier Science B.V. All rights reserved.

  605.   Doucette, P, Agouris, P, Stefanidis, A, and Musavi, M, "Self-organised clustering for road extraction in classified imagery," ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, vol. 55, pp. 347-358, 2001.

Abstract:   The extraction of road networks from digital imagery is a fundamental image analysis operation. Common problems encountered in automated road extraction include high sensitivity to typical scene clutter in high-resolution imagery, and inefficiency to meaningfully exploit multispectral imagery (MSI). With a ground sample distance (GSD) of less than 2 m per pixel, roads can be broadly described as elongated regions. We propose an approach of elongated region-based analysis for 2D road extraction from high-resolution imagery, which is suitable for MSI, and is insensitive to conventional edge definition. A self-organising road map (SORM) algorithm is presented, inspired by a specialised variation of Kohonen's self-organising map (SOM) neural network algorithm. A spectrally classified high-resolution image is assumed to be the input for our analysis. Our approach proceeds by performing spatial cluster analysis as a mid-level processing technique. This allows us to improve tolerance to road clutter in high-resolution images, and to minimise the effect on road extraction of common classification errors. This approach is designed in consideration of the emerging trend towards high-resolution multispectral sensors. Preliminary results demonstrate robust road extraction ability due to the non-local approach, when presented with noisy input. (C) 2001 Elsevier Science B.V. All rights reserved.

  606.   Ballerini, L, "Genetic snakes for color images segmentation," APPLICATIONS OF EVOLUTIONARY COMPUTING, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 2037, pp. 268-277, 2001.

Abstract:   The world of meat faces a permanent need for new methods of meat quality evaluation. Recent advances in the area of computer and video processing have created new ways to monitor quality in the food industry. In this paper we propose a segmentation method to separate connective tissue from meat. We propose the use of Genetic Snakes, that is, active contour models, also known as snakes, with an energy minimization procedure based on Genetic Algorithms (GA). Genetic Snakes have been proposed to overcome some limits of classical snakes, such as initialization, the existence of multiple minima, and the selection of elasticity parameters, and have been successfully applied to both medical and radar images. We extend the formulation of Genetic Snakes in two ways, by exploring additional internal and external energy terms and by applying them to color images. We employ a modified version of the image energy which considers the gradient of the three RGB (red, green and blue) color components. Experimental results on synthetic images as well as on meat images are reported. Images used in this work are color camera photographs of beef meat.
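
A minimal sketch, under the assumption that the colour image energy simply accumulates the gradient magnitudes of the R, G and B channels; the exact weighting used by the authors is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def rgb_gradient_energy(rgb):
    """Colour image energy: negative sum of per-channel gradient magnitudes.

    rgb : H x W x 3 float array.  Low (attractive) values sit on strong
    colour edges in any of the R, G or B channels.
    """
    energy = np.zeros(rgb.shape[:2])
    for c in range(3):
        gy, gx = np.gradient(rgb[..., c].astype(float))
        energy -= np.sqrt(gx**2 + gy**2)
    return energy
```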

  607.   Tao, CV, Chapman, MA, and Chaplin, BA, "Automated processing of mobile mapping image sequences," ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, vol. 55, pp. 330-346, 2001.

Abstract:   Automated approaches to image sequence processing using mobile mapping imagery have been under investigation in the Department of Geomatics Engineering at The University of Calgary. This paper presents an overview of several methods developed for the VISAT (TM) mobile mapping system at The University of Calgary. Following a brief overview of mobile mapping technology, an analysis of mobile mapping image sequences from the viewpoint of visual motion theory is provided. Particular attention is paid to image and object domain constraints that can be exploited in the processing of mobile mapping image sequences. Several key methods to automated processing of mobile mapping image sequences are then described. These methods can be grouped into two categories, namely, information extraction and image-based trajectory determination (bridging). (C) 2001 Elsevier Science B.V. All rights reserved.

  608.   Chen, YM, and Bose, P, "On the incorporation of time-delay regularization into curvature-based diffusion," JOURNAL OF MATHEMATICAL IMAGING AND VISION, vol. 14, pp. 149-164, 2001.

Abstract:   A new anisotropic nonlinear diffusion model incorporating time-delay regularization into curvature-based diffusion is proposed for image restoration and edge detection. A detailed mathematical analysis of the proposed model in the form of the proof of existence, uniqueness and stability of the "viscosity" solution of the model is presented. Furthermore, implementation issues and computational methods for the proposed model are also discussed in detail. The results obtained from testing our denoising and edge detection algorithm on several synthetic and real images showed the effectiveness of the proposed model in preserving sharp edges and fine structures while removing noise.
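
For orientation, the sketch below shows one explicit step of plain Perona-Malik-style edge-stopping diffusion, the kind of nonlinear-diffusion baseline this model builds on; the paper's time-delay regularization and curvature-based formulation are not reproduced, and the step size and conductance function are illustrative assumptions.

```python
import numpy as np

def perona_malik_step(u, dt=0.1, kappa=10.0):
    """One explicit iteration of edge-stopping (Perona-Malik) diffusion."""
    p = np.pad(u, 1, mode="edge")
    # Differences to the four neighbours (replicated border).
    dN = p[:-2, 1:-1] - u
    dS = p[2:, 1:-1] - u
    dW = p[1:-1, :-2] - u
    dE = p[1:-1, 2:] - u
    # Edge-stopping conductance: small across strong gradients.
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
```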

  609.   Shen, DG, Herskovits, EH, and Davatzikos, C, "An adaptive-focus statistical shape model for segmentation and shape modeling of 3-D brain structures," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 257-270, 2001.

Abstract:   This paper presents a deformable model for automatically segmenting brain structures from volumetric magnetic resonance (MR) images and obtaining point correspondences, using geometric and statistical information in a hierarchical scheme. Geometric information is embedded into the model via a set of affine-invariant attribute vectors, each of which characterizes the geometric structure around a point of the model from a local to a global scale. The attribute vectors, in conjunction with the deformation mechanism of the model, guarantee that the model not only deforms to nearby edges, as is customary in most deformable surface models, but also that it determines point correspondences based on geometric similarity at different scales. The proposed model is adaptive in that it initially focuses on the most reliable structures of interest, and gradually shifts focus to other structures as those become closer to their respective targets and, therefore, more reliable. The proposed techniques have been used to segment boundaries of the ventricles, the caudate nucleus, and the lenticular nucleus from volumetric MR images.

  610.   Sclaroff, S, and Liu, LF, "Deformable shape detection and description via model-based region grouping," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 23, pp. 475-489, 2001.

Abstract:   A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions obtained via any region segmentation algorithm, e.g., texture, color, or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.

  611.   Bajaj, CL, and Xu, GL, "Regular algebraic curve segments (III) - applications in interactive design and data fitting," COMPUTER AIDED GEOMETRIC DESIGN, vol. 18, pp. 149-173, 2001.

Abstract:   In this paper (part three of the trilogy) we use low degree G(1) and G(2) continuous regular algebraic spline curves defined within parallelograms, to interpolate an ordered set of data points in the plane. We explicitly characterize curve families whose members have the required interpolating properties and possess a minimal number of inflection points. The regular algebraic spline curves considered here have many attractive features: They are easy to construct. There exist convenient geometric control handles to locally modify the shape of the curve. The error of the approximation is controllable. Since the spline curve is always inside the parallelogram, the error of the fit is bounded by the size of the parallelogram. The spline curve can be rapidly displayed, even though the algebraic curve segments are implicitly defined. (C) 2001 Elsevier Science B.V. All rights reserved.

  612.   Lehmann, TM, Bredno, J, Metzler, V, Brook, G, and Nacimiento, W, "Computer-assisted quantification of axo-somatic boutons at the cell membrane of motoneurons," IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, vol. 48, pp. 706-717, 2001.

Abstract:   This paper presents a system for computer-assisted quantification of axo-somatic boutons at motoneuron cell-surface membranes. Different immunohistochemical stains can be used to prepare tissue of the spinal cord. Based on micrographs displaying single neurons, a finite element balloon model has been applied to determine the exact location of the cell membrane. A synaptic profile is extracted next to the cell membrane and normalized with reference to the intracellular brightness. Furthermore, a manually selected reference cell is used to normalize settings of the microscope as well as variations in histochemical processing for each stain. Thereafter, staining, homogeneity, and allocation of boutons are determined automatically from the synaptic profiles. The system is evaluated by applying the coefficient of variation (C-v) to repeated measurements of a quantity. Based on 1856 motoneuronal images acquired from four animals with three stains, 93% of the images are analyzed correctly. The others were rejected, based on process protocols. Using only rabbit anti-synaptophysin as primary antibody, the correctness increases above 96%. C-v values are below 3%, 5%, and 6% for all measures with respect to stochastic optimization, cell positioning, and a large range of microscope settings, respectively. A sample size of about 100 is required to validate a significant reduction of staining in motoneurons below a hemi-section (Wilcoxon rank-sum test, alpha = 0.05, beta = 0.9). Our system yields statistically robust results from light micrographs. In future, it is hoped that this system will substitute for the expensive and time-consuming analysis of spinal cord injury at the ultra-structural level, such as by manual interpretation of nonoverlapping electron micrographs.

  613.   Asano, T, Chen, DZ, Katoh, N, and Tokuyama, T, "Efficient algorithms for optimization-based image segmentation," INTERNATIONAL JOURNAL OF COMPUTATIONAL GEOMETRY & APPLICATIONS, vol. 11, pp. 145-166, 2001.

Abstract:   Separating an object in an image from its background is a central problem (called segmentation) in pattern recognition and computer vision. In this paper, we study the computational complexity of the segmentation problem, assuming that the sought object forms a connected region in an intensity image. We show that the optimization problem of separating a connected region in a grid of M x N pixels is NP-hard under the interclass variance, a criterion that is often used in discriminant analysis. More importantly, we consider the basic case in which the object is bounded by two x-monotone curves (i.e., the object itself is x-monotone), and present polynomial-time algorithms for computing the optimal segmentation. Our main algorithm for exact optimal segmentation by two x-monotone curves runs in O(N^4) time; this algorithm is based on several techniques such as a parametric optimization formulation, a hand-probing algorithm for the convex hull of an unknown planar point set, and dynamic programming using fast matrix searching. Our efficient approximation scheme obtains an epsilon-approximate solution in O(epsilon^-1 N^2 log L) time, where epsilon is any fixed constant with 1 > epsilon > 0, and L is the total sum of the absolute values of the brightness levels of the image.

  614.   Firbank, MJ, Harrison, RM, Williams, ED, and Coulthard, A, "Measuring extraocular muscle volume using dynamic contours," MAGNETIC RESONANCE IMAGING, vol. 19, pp. 257-265, 2001.

Abstract:   The effect of medical treatment on extraocular muscle enlargement in thyroid associated ophthalmopathy (TAO) may be monitored by measuring the change in volume of the extraocular muscles on serial orbital MRI examinations. In theory, 3D image sets offer the opportunity to minimise errors due to poor repositioning and partial volume effects. This study describes an automated technique for estimating extraocular muscle volumes from 3D datasets. Operator input is minimal and the technique is robust. Verification of the technique on both simulated and real datasets is described. For simulated image sets, both automated segmentation and manual outlining produced estimates of volume which were on average 4% less than 'true' volume. For real patient data, extraocular muscle volumes measured by the automated technique were 1.6% (SD 13%) less than volumes measured by manual outlining. Coefficient of variation for repeat outlining of the same image dataset for the automated technique was 1.0%, compared with 4% for manual outlining. The manual technique took an experienced operator approximately 20 min to perform, compared to 7 min for the automated technique. The automated method is therefore rapid, reproducible and at least as accurate as other available methods. (C) 2001 Elsevier Science Inc. All rights reserved.

  615.   Lie, WN, and Chuang, CH, "Fast and accurate snake model for object contour detection," ELECTRONICS LETTERS, vol. 37, pp. 624-626, 2001.

Abstract:   A new scheme in which a snake model is used for object contour detection is proposed. By developing a no-search movement scheme, accepting the effective gradient vector flow field as the contracting force, and adjusting the weighting parameters automatically, an algorithm that is fast, less sensitive to initial contour conditions and accurate in approaching concave parts of an object boundary is obtained.
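
Because the contracting force here is the gradient vector flow (GVF) field, a compact sketch of the standard iterative GVF computation is given below: the edge-map gradient is diffused by the PDE u_t = mu*Laplacian(u) - (u - f_x)*(f_x^2 + f_y^2), and the analogous equation for v. The explicit scheme, step sizes and function names are illustrative, not taken from the cited letter.

```python
import numpy as np

def gradient_vector_flow(edge_map, mu=0.2, dt=0.25, iters=200):
    """Compute a gradient vector flow field (u, v) from an edge map f by
    iterating u_t = mu*Laplacian(u) - (u - f_x)*(f_x^2 + f_y^2), and the
    analogous equation for v (explicit scheme, illustrative step sizes)."""
    f = edge_map.astype(float)
    fy, fx = np.gradient(f)
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()

    def laplacian(a):
        p = np.pad(a, 1, mode="edge")
        return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a

    for _ in range(iters):
        u = u + dt * (mu * laplacian(u) - (u - fx) * mag2)
        v = v + dt * (mu * laplacian(v) - (v - fy) * mag2)
    return u, v
```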

  616.   Desolneux, A, Moisan, L, and Morel, JM, "Edge detection by Helmholtz principle," JOURNAL OF MATHEMATICAL IMAGING AND VISION, vol. 14, pp. 271-284, 2001.

Abstract:   We apply to edge detection a recently introduced method for computing geometric structures in a digital image, without any a priori information. According to a basic principle of perception due to Helmholtz, an observed geometric structure is perceptually "meaningful" if its number of occurrences would be very small in a random situation: in this context, geometric structures are characterized as large deviations from randomness. This leads us to define and compute edges and boundaries (closed edges) in an image by a parameter-free method. Maximal detectable boundaries and edges are defined, computed, and the results compared with the ones obtained by classical algorithms.

  617.   Cohen, LD, "Multiple contour finding and perceptual grouping using minimal paths," JOURNAL OF MATHEMATICAL IMAGING AND VISION, vol. 14, pp. 225-236, 2001.

Abstract:   We address the problem of finding a set of contour curves in an image. We consider the problem of perceptual grouping and contour completion, where the data is a set of points in the image. A new method to find complete curves from a set of contours or edge points is presented. Our approach is based on a previous work on finding contours as minimal paths between two end points using the fast marching algorithm (L. D. Cohen and R. Kimmel, International Journal of Computer Vision, Vol. 24, No. 1, pp. 57-78, 1997). Given a set of key points, we find the pairs of points that have to be linked and the paths that join them. We use the saddle points of the minimal action map. The paths are obtained by backpropagation from the saddle points to both points of each pair. In a second part, we propose a scheme that does not need key points for initialization. A set of key points is automatically selected from a larger set of admissible points. At the same time, saddle points between pairs of key points are extracted. Next, paths are drawn on the image and give the minimal paths between selected pairs of points. The set of minimal paths completes the initial set of contours and allows them to be closed. We illustrate the capability of our approach to close contours with examples on various images of sets of edge points of shapes with missing contours.
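
As a rough, discrete stand-in for the fast-marching minimal-path machinery described above (not the authors' algorithm), the sketch below extracts a minimal-cost path between two pixels of a potential image with Dijkstra's algorithm; the names and the 4-connected neighbourhood are assumptions.

```python
import heapq
import numpy as np

def minimal_path(potential, start, end):
    """Minimal-cost 4-connected path between two pixels of a potential map
    (Dijkstra's algorithm; a discrete stand-in for fast marching)."""
    h, w = potential.shape
    dist = np.full((h, w), np.inf)
    parent = {}
    dist[start] = potential[start]
    heap = [(dist[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + potential[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    parent[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # Backtrack from the end point to recover the path.
    path, node = [end], end
    while node != start:
        node = parent[node]
        path.append(node)
    return path[::-1]
```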

  618.   Coleman, TF, Li, YY, and Mariano, A, "Segmentation of pulmonary nodule images using 1-norm minimization," COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, vol. 19, pp. 243-272, 2001.

Abstract:   Total variation minimization (in the 1-norm) has edge preserving and enhancing properties which make it suitable for image segmentation. We present Image Simplification, a new formulation and algorithm for image segmentation. We illustrate the edge enhancing properties of 1-norm total variation minimization in a discrete setting by giving exact solutions to the problem for piecewise constant functions in the presence of noise. In this case, edges can be exactly recovered if the noise is sufficiently small. After optimization, segmentation is completed using edge detection. We find that our image segmentation approach yields good results when applied to the segmentation of pulmonary nodules.

  619.   Park, JY, McInerney, T, Terzopoulos, D, and Kim, MH, "A non-self-intersecting adaptive deformable surface for complex boundary extraction from volumetric images," COMPUTERS & GRAPHICS-UK, vol. 25, pp. 421-440, 2001.

Abstract:   This paper proposes a non-self-intersecting multiscale deformable surface model with an adaptive remeshing capability. The model is specifically designed to extract the three-dimensional boundaries of topologically simple but geometrically complex anatomical structures, especially those with deep concavities such as the brain, from volumetric medical images. The model successfully addresses three significant problems of conventional deformable models when dealing with such structures-sensitivity to model initialization, difficulties in dealing with severe object concavities, and model self-intersection. The first problem is addressed using a multiscale scheme, which extracts the boundaries of objects in a coarse-to-fine fashion by applying a multiscale deformable surface model to a multiresolution volume image pyramid. The second problem is addressed with adaptive remeshing, which progressively resamples the triangulated deformable surface model both globally and locally, matching its resolution to the levels of the volume image pyramid. Finally, the third problem is solved by including a non-self-intersection force among the customary internal and external forces in a physics-based model formulation. Our deformable surface model is more efficient, much less sensitive to initialization and spurious image features, more proficient in extracting boundary concavities, and not susceptible to self-intersections compared to most other models of its type. This paper presents results of applying our new deformable surface model to the extraction of a spherical surface with concavities from a computer-generated volume image and a brain cortical surface from a real MR volume image. (C) 2001 Elsevier Science Ltd. All rights reserved.

  620.   Lin, IJ, and Kung, SY, "Extraction of video objects via surface optimization and Voronoi order," JOURNAL OF VLSI SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, vol. 29, pp. 23-39, 2001.

Abstract:   We implement a video object segmentation system that integrates the novel concept of Voronoi Order with existing surface optimization techniques to support the MPEG-4 functionality of object-addressable video content in the form of video objects. The major enabling technologies for the MPEG-4 standard are systems that compute video object segmentation, i.e., the extraction of video objects from a given video sequence. Our surface optimization formulation describes the video object segmentation problem in the form of an energy function that integrates many visual processing techniques. By optimizing this surface, we balance visual information against predictions of models with a priori information and extract video objects from a video sequence. Since the global optimization of such an energy function is still an open problem, we use Voronoi Order to decompose our formulation into a tractable optimization via dynamic programming within an iterative framework. In conclusion, we show the results of the system on the MPEG-4 test sequences, introduce a novel objective measure, and compare results against those that are hand-segmented by the MPEG-4 committee.

  621.   Ladak, HM, Thomas, JB, Mitchell, JR, Rutt, BK, and Steinman, DA, "A semi-automatic technique for measurement of arterial wall from black blood MRI," MEDICAL PHYSICS, vol. 28, pp. 1098-1107, 2001.

Abstract:   Black blood magnetic resonance imaging (MRI) has become a popular technique for imaging the artery wall in vivo. Its noninvasiveness and high resolution make it ideal for studying the progression of early atherosclerosis in normal volunteers or asymptomatic patients with mild disease. However, the operator variability inherent in the manual measurement of vessel wall area from MR images hinders the reliable detection of relatively small changes in the artery wall over time. In this paper we present a semi-automatic method for segmenting the inner and outer boundary of the artery wall, and evaluate its operator variability using analysis of variance (ANOVA). In our approach, a discrete dynamic contour is approximately initialized by an operator, deformed to the inner boundary, dilated, and then deformed to the outer boundary. A group of four operators performed repeated measurements on 12 images from normal human subjects using both our semi-automatic technique and a manual approach. Results from the ANOVA indicate that the inter-operator standard error of measurement (SEM) of total wall area decreased from 3.254 mm(2) (manual) to 1.293 mm(2) (semi-automatic), and the intra-operator SEM decreased from 3.005 mm(2) to 0.958 mm(2). Operator reliability coefficients increased from less than 69% to more than 91% (inter-operator) and 95% (intra-operator). The minimum detectable change in wall area improved from more than 8.32 mm(2) (intra-operator, manual) to less than 3.59 mm(2) (inter-operator, semi-automatic), suggesting that it is better to have multiple operators measure wall area with our semi-automatic technique than to have a single operator make repeated measurements manually. Similar improvements in wall thickness and lumen radius measurements were also recorded. Since the semi-automatic technique has effectively ruled out the effect of the operator on these measurements, it may be possible to use such techniques to expand prospective studies of atherogenesis to multiple centers so as to increase access to real patient data without sacrificing reliability. (C) 2001 American Association of Physicists in Medicine.

  622.   Ruch, O, and Refregier, P, "Minimal-complexity segmentation with a polygonal snake adapted to different optical noise models," OPTICS LETTERS, vol. 26, pp. 977-979, 2001.

Abstract:   Polygonal active contours (snakes) have been used with success for target segmentation and tracking. We propose to adapt a technique based on the minimum description length principle to estimate the complexity (proportional to the number of nodes) of the polygon used for the segmentation. We demonstrate that, provided that an up-and-down multiresolution strategy is implemented, it is possible to estimate efficiently this number of nodes without a priori knowledge and with a fast algorithm, leading to a segmentation criterion without free parameters. We also show that, for polygonal-shaped objects, this new technique leads to better results than using a simple regularization strategy based on the smoothness of the contour. (C) 2001 Optical Society of America.

  623.   Saha, PK, and Udupa, JK, "Optimum image thresholding via class uncertainty and region homogeneity," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 23, pp. 689-706, 2001.

Abstract:   Thresholding is a popular image segmentation method that converts a gray-level image into a binary image. The selection of optimum thresholds has remained a challenge over decades. Besides being a segmentation tool on its own, often it is also a step in many advanced image segmentation techniques in spaces other than the image space. Most of the thresholding methods reported to date are based on histogram analysis using information-theoretic approaches. These methods have not harnessed the information captured in image morphology. Here, we introduce a novel thresholding method that accounts for both intensity-based class uncertainty-a histogram-based property-and region homogeneity-an image morphology-based property. A scale-based formulation is used for region homogeneity computation. At any threshold, intensity-based class uncertainty is computed by fitting a Gaussian to the intensity distribution of each of the two regions segmented at that threshold. The theory of the optimum thresholding method is based on the postulate that objects manifest themselves with fuzzy boundaries in any digital image acquired by an imaging device. The main idea here is to select that threshold at which pixels with high class uncertainty accumulate mostly around object boundaries. To achieve this, a new threshold energy criterion is formulated using class-uncertainty and region homogeneity such that, at any image location, a high energy is created when both class uncertainty and region homogeneity are high or both are low. Finally, the method selects that threshold which corresponds to the minimum overall energy. The method has been compared to a recently published maximum segmented image information (MSII) method. Superiority of the proposed method was observed both qualitatively on clinical medical images as well as quantitatively on 250 realistic phantom images generated by adding different degrees of blurring, noise, and background variation to real objects segmented from clinical images.

  624.   Mahnken, AH, Kohnen, M, Steinberg, S, Wein, BB, and Gunther, RW, "Automated image analysis of lateral lumbar X-rays by a form model," ROFO-FORTSCHRITTE AUF DEM GEBIET DER RONTGENSTRAHLEN UND DER BILDGEBENDEN VERFAHREN, vol. 173, pp. 554-557, 2001.

Abstract:   Purpose: Development of a software for fully automated image analysis of lateral lumbar spine X-rays. Material and method: Using the concept of active shape models, we developed a software that produces a form model of the lumbar spine from lateral lumbar spine radiographs and runs an automated image segmentation. This model is able to detect lumbar vertebrae automatically after the filtering of digitized X-ray images. The model was trained with 20 lateral lumbar spine radiographs with no pathological findings before we evaluated the software with 30 further X-ray images, which were sorted by image quality ranging from one (best) to three (worst). There were 10 images for each quality level. Results: Image recognition strongly depended on image quality. In group one 52 and in group two 51 out of 60 vertebral bodies including the sacrum were recognized, but in group three only 18 vertebral bodies were properly identified. Conclusion: Fully automated and reliable recognition of vertebral bodies from lateral spine radiographs using the concept of active shape models is possible. The precision of this technique is limited by the superposition of different structures. Further improvements are necessary. Therefore standardized image quality and enlargement of the training data set are required.

  625.   Varekamp, C, and Hoekman, DH, "Segmentation of high-resolution InSAR data of a tropical forest using Fourier parameterized deformable models," INTERNATIONAL JOURNAL OF REMOTE SENSING, vol. 22, pp. 2339-2350, 2001.

Abstract:   Currently, tree maps are produced from field measurements that are time consuming and expensive. Application of existing techniques based on aerial photography is often hindered by cloud cover. This has initiated research into the segmentation of high resolution airborne interferometric Synthetic Aperture Radar (SAR) data for deriving tree maps. A robust algorithm is constructed to optimally position closed boundaries. The boundary of a tree crown will be best approximated when at all points on the boundary, the z-coordinate image gradient is maximum, and directed inwards orthogonal to the boundary. This property can be expressed as the result of a line integral along the boundary. Boundaries with a large value for the line integral are likely to be tree crowns. This paper focuses on the search procedure and on illustrating how smoothing can be used to prevent the search from becoming trapped in a local optimum. The final crown detection stage is not described in this paper but could be based on the gradient and implemented using the above described value for the line integral. Results of this paper indicate that a Fourier parametrization with only three harmonics (nine parameters) can describe the shape variation in the 2D crown projection in sufficient detail. Current ground datasets are not suitable for obtaining detection statistics such as the percentage of tree crowns detected and the number of false alarms. Better ground datasets will be needed to evaluate algorithm performance for real tree mapping situations.

  626.   Chang, IC, and Huang, CL, "Skeleton-based walking motion analysis using hidden Markov model and active shape models," JOURNAL OF INFORMATION SCIENCE AND ENGINEERING, vol. 17, pp. 371-403, 2001.

Abstract:   This paper proposes a skeleton-based human walking motion analysis system which consists of three major phases. In the first phase, it extracts the human body skeleton from the background and then obtains the body signatures. In the second phase, it analyzes the training sequences to generate statistical models. In the third phase, it uses the trained models to recognize the input human motion sequence and calculate the motion parameters. The experimental results demonstrate how our system can recognize the motion type and describe the motion characteristics of the image sequence. Finally, the synthesized motion sequences are illustrated. The major contributions of this paper are: (1) development of a skeleton-based method and use of Hidden Markov Models (HMM) to recognize the motion type; (2) incorporation of the Active Shape Models (ASMs) and the body structure characteristics to generate the motion parameter curves of the human motion.

  627.   Kiyuna, T, Kamijo, K, Yamazaki, T, Moriyama, N, and Sekiguchi, R, "Automated reconstruction of a three-dimensional brain model from magnetic resonance images," NEUROIMAGE, vol. 13, pp. S173-S173, 2001.

  628.   Sifakis, E, and Tziritas, G, "Moving object localisation using a multi-label fast marching algorithm," SIGNAL PROCESSING-IMAGE COMMUNICATION, vol. 16, pp. 963-976, 2001.

Abstract:   In this paper, we address two problems crucial to motion analysis: the detection of moving objects and their localisation. Statistical and level set approaches are adopted in formulating these problems. For the change detection problem, the inter-frame difference is modelled by a mixture of two zero-mean Laplacian distributions. At first, statistical tests using criteria with negligible error probability are used to label as many sites as possible as changed or unchanged. All the connected components of the labelled sites are used thereafter as region seeds, which give the initial level sets for which velocity fields for label propagation are provided. We introduce a new multi-label fast marching algorithm for expanding competitive regions. The solution of the localisation problem is based on the map of changed pixels previously extracted. The boundary of the moving object is determined by a level set algorithm, which is initialised by two curves evolving in converging opposite directions. The sites of curve contact determine the position of the object boundary. Experimental results using real video sequences are presented, illustrating the efficiency of the proposed approach. (C) 2001 Elsevier Science B.V. All rights reserved.
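
A simplified stand-in for the multi-label fast marching idea (not the authors' algorithm): seeds carrying different labels grow simultaneously over a cost map, always expanding the cheapest frontier pixel first via a single priority queue. The names and the Dijkstra-style ordering are illustrative assumptions.

```python
import heapq
import numpy as np

def multi_label_grow(cost, seeds):
    """Grow labelled seed regions competitively over a cost map.

    cost  : H x W array of non-negative propagation costs.
    seeds : dict mapping a non-zero integer label to a list of (y, x) pixels.
    Returns an H x W int array of labels (0 = unreached).
    """
    h, w = cost.shape
    labels = np.zeros((h, w), dtype=int)
    arrival = np.full((h, w), np.inf)
    heap = []
    for lab, pts in seeds.items():
        for (y, x) in pts:
            arrival[y, x] = 0.0
            labels[y, x] = lab
            heapq.heappush(heap, (0.0, y, x, lab))
    while heap:
        t, y, x, lab = heapq.heappop(heap)
        if t > arrival[y, x]:
            continue  # stale entry; a cheaper label reached this pixel first
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nt = t + cost[ny, nx]
                if nt < arrival[ny, nx]:
                    arrival[ny, nx] = nt
                    labels[ny, nx] = lab
                    heapq.heappush(heap, (nt, ny, nx, lab))
    return labels
```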

  629.   Suri, JS, "Two-dimensional fast magnetic resonance brain segmentation," IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE, vol. 20, pp. 84-95, 2001.

  630.   Kuijer, JPA, Marcus, JT, Gotte, MJW, van Rossum, AC, Ader, HJ, and Heethaar, RM, "Variance components of two-dimensional strain parameters in the left-ventricular heart wall obtained by magnetic resonance tagging," INTERNATIONAL JOURNAL OF CARDIAC IMAGING, vol. 17, pp. 49-60, 2001.

Abstract:   This study quantifies variance components of two-dimensional strains in the left-ventricular heart wall assessed by magnetic resonance (MR) tagging in 18 healthy volunteers. For a 7-mm tagging grid and homogeneous strain analysis, the intersubject variability and measurement error were estimated, as well as the intra- and interobserver variability. The variance components were calculated for the mean strain of a circumferential sector. The results show that the measurement error was almost equal to the intra-observer variability. With four circumferential sectors of 90 degrees each, approximately 65% of the total variance in epsilon (r) and epsilon (c) was due to intersubject variability; the remaining 35% was due to measurement error. With 12 sectors of 30 degrees each, the intersubject variability and measurement error both contributed 50% to the total variance. With 18 sectors of 20 degrees each, only 40% of the total variance was due to intersubject variability. The total variability increased with the number of sectors and therefore the number of sectors used in a study will be a trade-off between segment size (defining spatial resolution) and variability.

  631.   Ravhon, R, Adam, D, and Zelmanovitch, L, "Validation of ultrasonic image boundary recognition in abdominal aortic aneurysm," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 751-763, 2001.

Abstract:   An aneurysm of the abdominal aorta (AAA) is characterized by modified wall properties, and a balloon-like area usually filled by a thrombus. A rupture of an aortic aneurysm can be fatal, yet there is no way to accurately predict such an occurrence. The study of the wall and thrombus cross-sectional distension, due to a pressure wave, is important as a way of assessing the degradation of the mechanical properties of the vessel wall and the risk of a rupture. Echo ultrasound transverse cross-sectional imaging is used here to study the thrombus and the aortic wall distension, requiring their segmentation within the image. Polar coordinates are defined, and a search is performed for minimizing a cost function, which includes a description of the boundary (based on a limited series of sine and cosine functions) and information from the image intensity gradients along the radii. The method is based on filtering by a modified Canny-Deriche edge detector and then on minimization of an energy function based on five parts. Since echoes from blood in the lumen and the thrombus produce similar patterns and speckle noise, a modified version for identifying the lumen-thrombus border was developed. The method has been validated in various ways, including parameter sensitivity testing and comparison to the performance of an expert. It is robust enough to track the lumen and total arterial cross-sectional area changes during the cardiac cycle. In 34 patients where sequences of images were acquired, the border between the thrombus and the arterial wall was detected with errors less than 2%, while the lumen-thrombus border was detected with a mean error of 4%. Thus, a noninvasive measurement of the AAA cross-sectional area is presented, which has been validated and found to be accurate.

  632.   Gatzoulis, L, Watson, RJ, Jordan, LB, Pye, SD, Anderson, T, Uren, N, Salter, DM, Fox, KAA, and McDicken, WN, "Three-dimensional forward-viewing intravascular ultrasound imaging of human arteries in vitro," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 27, pp. 969-982, 2001.

Abstract:   The aim of this work was to investigate the suitability of a novel forward-viewing intravascular ultrasound (IVUS) technique for three-dimensional imaging of severely stenosed or totally occluded vessels, where the conventional side-viewing IVUS systems are of limited use. A stiff 3.8 mm diameter forward-viewing catheter was manufactured to scan a 72 degrees sector ahead of its tip. Conical volume data were acquired by rotating the catheter over 180 degrees by means of a motorised mechanical system. Operating at 30 MHz, the catheter was integrated with an IVUS scanner and a radiofrequency data acquisition system. Postmortem carotid and femoral arteries were scanned in vitro. Correlation of the reconstructed images with histology demonstrated the ability of this forward-viewing IVUS system to visualise healthy lumens, bifurcations, thickened atherosclerotic walls and, most importantly, severe and complete vessel occlusions. A rotating-sector forward-viewing IVUS system is suitable for anatomical assessment of severely diseased vessels in three dimensions.

  633.   Kerschner, M, "Seamline detection in colour orthoimage mosaicking by use of twin snakes," ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, vol. 56, pp. 53-64, 2001.

Abstract:   In the last step of the mosaic production chain, neighbouring and partly overlapping orthoimages of a scene are merged to one mosaic. This should be done in a way that the transition from one orthoimage to another cannot be seen. The production line of orthoimages consists of several steps, each of which can introduce a different appearance regarding geometry, radiometry and spectral properties to the resulting orthoimage. For mosaicking adjacent orthoimages, a path of lowest difference in a combination of criteria is searched in the overlap area of these images. The seamline is chosen along this path of maximum similarity. In this paper, criteria for such an optimal seamline in colour orthoimages are elaborated. The main requirements are on one hand high colour similarity of the images (mainly in hue and intensity), and on the other hand high texture similarity (in orientation and magnitude of image gradients). The specified criteria are formulated in the energy function of snakes. A snake is an active contour which moves through an image and changes its shape until a minimum of its energy function is found. We use two snakes that attract one another (twin snakes). In a hierarchical strategy, a proper seamline is delineated fully automatically. The potential of the method is shown with an example. (C) 2001 Elsevier Science B.V. All rights reserved.
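
For illustration only, a dynamic-programming stand-in for the twin-snake seamline search: a per-pixel dissimilarity image is built from two registered overlapping orthoimages (intensity difference plus gradient-magnitude difference, in the spirit of the criteria above), and a top-to-bottom seam of minimum accumulated cost is extracted. The weights, names, and DP formulation are assumptions, not the paper's snake optimisation.

```python
import numpy as np

def seam_cost(image_a, image_b, w_int=1.0, w_grad=1.0):
    """Per-pixel dissimilarity of two registered, overlapping images:
    weighted sum of intensity difference and gradient-magnitude difference."""
    a, b = image_a.astype(float), image_b.astype(float)
    gya, gxa = np.gradient(a)
    gyb, gxb = np.gradient(b)
    grad_diff = np.abs(np.hypot(gxa, gya) - np.hypot(gxb, gyb))
    return w_int * np.abs(a - b) + w_grad * grad_diff

def best_vertical_seam(cost):
    """Minimum-cost top-to-bottom 8-connected seam (dynamic programming)."""
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        prev = acc[y - 1]
        left = np.r_[np.inf, prev[:-1]]
        right = np.r_[prev[1:], np.inf]
        acc[y] += np.minimum(prev, np.minimum(left, right))
    # Backtrack the seam from the cheapest bottom pixel.
    seam = [int(np.argmin(acc[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]  # seam[y] = column of the seamline in row y
```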

  634.   Chen, JX, Wechsler, H, Pullen, JM, Zhu, Y, and MacMahon, EB, "Knee surgery assistance: Patient model construction, motion simulation, and biomechanical visualization," IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, vol. 48, pp. 1042-1052, 2001.

Abstract:   We present a new system that integrates computer graphics, physics-based modeling, and interactive visualization to assist knee study and surgical operation. First, we discuss generating patient-specific three-dimensional (3-D) knee models from the patient's magnetic resonance images (MRIs). The 3-D model is obtained by deforming a reference model to match the MRI dataset. Second, we present simulating knee motion that visualizes patient-specific motion data on the patient-specific knee model. Third, we introduce visualizing biomechanical information on a patient-specific model. The focus is on visualizing contact area, contact forces, and menisci deformation. Traditional methods have difficulty in visualizing knee contact area without using invasive methods. The approach presented here provides an alternative for visualizing the knee contact area and forces without any risk to the patient. Finally, a virtual surgery can be performed. The constructed 3-D knee model is the basis of motion simulation, biomechanical visualization, and virtual surgery. Knee motion simulation determines the knee rotation angles as well as knee contact points. These parameters are used to solve the biomechanical model. Our results integrate 3-D construction, motion simulation, and biomechanical visualization into one system. Overall, the methodologies here are useful elements for future virtual medical systems where all the components of visualization, automated model generation, and surgery simulation come together.

  635.   Eriksson, M, and Papanikolopoulos, NP, "Driver fatigue: a vision-based approach to automatic diagnosis," TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, vol. 9, pp. 399-413, 2001.

Abstract:   In this paper, we describe a system that locates and tracks the eyes of a driver. The purpose of such a system is to perform detection of driver fatigue. By mounting a small camera inside the car, we can monitor the face of the driver and look for eye movements which indicate that the driver is no longer in condition to drive. In such a case, a warning signal should be issued. This paper describes how to find and track the eyes. We also describe a method that can determine if the eyes are open or closed. The primary criterion for this system is that it must be highly non-intrusive. The system must also operate regardless of the texture and the color of the face. It must also be able to handle changing conditions such as changes in light, shadows, reflections, etc. Initial experimental results are very promising even when the driver moves his/her head in a way such that the camera does not have a frontal view of the driver's face. (C) 2001 Elsevier Science Ltd. All rights reserved.

  636.   Kosmopoulos, D, and Varvarigou, T, "Automated inspection of gaps on the automobile production line through stereo vision and specular reflection," COMPUTERS IN INDUSTRY, vol. 46, pp. 49-63, 2001.

Abstract:   One of the most difficult tasks in the later stages of automobile assembly is the dimensional inspection of the gaps between the car body and the various panels fitted on it (doors, motor-hood, etc.). The employment of an automatic gap-measuring system would reduce the costs significantly and would offer high flexibility. However, this task is still performed by humans and only a few - still experimental - automatic systems have been reported. In this paper, we introduce a system for automated gap inspection that employs computer vision. It is capable of measuring the lateral and the range dimension of the gap (width and flush, correspondingly). The measurement installation consists of two calibrated stereo cameras and two infrared LED lamps, used for highlighting the edges of the gap through specular reflection. The gap is measured as the 3D distance between the highlighted edges. This method has significant advantages over laser-based gap-measuring systems, mainly due to its color independence. Our approach has been analytically described in 2D and extensively evaluated using synthetic as well as real gaps. The results obtained verify its robustness and its applicability in an industrial environment. (C) 2001 Published by Elsevier Science B.V.

  637.   Delingette, H, and Montagnat, J, "Shape and topology constraints on parametric active contours," COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 83, pp. 140-171, 2001.

Abstract:   In recent years, the field of active contour-based image segmentation has seen the emergence of two competing approaches. The first and oldest approach represents active contours in an explicit (or parametric) manner corresponding to the Lagrangian formulation. The second approach represents active contours in an implicit manner corresponding to the Eulerian framework. After comparing these two approaches, we describe several new topological and physical constraints applied to parametric active contours in order to combine the advantages of these two contour representations. More precisely, we introduce three algorithms related to the control of the contour topology, geometry, and deformation. The first algorithm controls both vertex spacing and contour smoothness in an independent and intrinsic manner. The second algorithm controls the contour resolution (number of vertices) while the third algorithm automatically creates or fuses connected components on closed or open contours. The efficiency of these algorithms is demonstrated on several images including medical images and a comparison with the level-set method is also provided. (C) 2001 Academic Press.
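
The sketch below shows one common, generic way to control vertex spacing and contour resolution: redistributing the vertices of a closed contour at equal arc-length intervals. It is not the intrinsic scheme of the paper, only an illustration of the kind of operation involved.

    import numpy as np

    def resample_closed_contour(points, n_vertices):
        # Redistribute the vertices of a closed 2-D contour at equal
        # arc-length spacing; points is an (N, 2) array of vertices.
        pts = np.vstack([points, points[:1]])             # close the polygon
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
        new_s = np.linspace(0.0, s[-1], n_vertices, endpoint=False)
        x = np.interp(new_s, s, pts[:, 0])
        y = np.interp(new_s, s, pts[:, 1])
        return np.column_stack([x, y])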

  638.   Lorigo, LM, Faugeras, OD, Grimson, WEL, Keriven, R, Kikinis, R, Nabavi, A, and Westin, CF, "CURVES: Curve evolution for vessel segmentation," MEDICAL IMAGE ANALYSIS, vol. 5, pp. 195-206, 2001.

Abstract:   The vasculature is of utmost importance in neurosurgery. Direct visualization of images acquired with current imaging modalities, however, cannot provide a spatial representation of small vessels. These vessels, and their branches which show considerable variations, are most important in planning and performing neurosurgical procedures. In planning they provide information on where the lesion draws its blood supply and where it drains. During surgery the vessels serve as landmarks and guidelines to the lesion. The more minute the information is, the more precise the navigation and localization of computer guided procedures. Beyond neurosurgery and neurological study, vascular information is also crucial in cardiovascular surgery, diagnosis, and research. This paper addresses the problem of automatic segmentation of complicated curvilinear structures in three-dimensional imagery, with the primary application of segmenting vasculature in magnetic resonance angiography (MRA) images. The method presented is based on recent curve and surface evolution work in the computer vision community which models the object boundary as a manifold that evolves iteratively to minimize an energy criterion. This energy criterion is based both on intensity values in the image and on local smoothness properties of the object boundary, which is the vessel wall in this application. In particular, the method handles curves evolving in 3D, in contrast with previous work that has dealt with curves in 2D and surfaces in 3D. Results are presented on cerebral and aortic MRA data as well as lung computed tomography (CT) data. (C) 2001 Elsevier Science B.V. All rights reserved.

  639.   Goldenberg, R, Kimmel, R, Rivlin, E, and Rudzsky, M, "Fast geodesic active contours," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 1467-1475, 2001.

Abstract:   We use an unconditionally stable numerical scheme to implement a fast version of the geodesic active contour model. The proposed scheme is useful for object segmentation in images, such as tracking moving objects in a sequence of images. The method is based on the Weickert-Romeny-Viergever additive operator splitting (AOS) scheme. It is applied in small regions, motivated by the Adalsteinsson-Sethian level-set narrow-band approach, and uses Sethian's fast marching method for re-initialization. Experimental results demonstrate the power of the new method for tracking in color movies.
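
To make the operator-splitting idea concrete, here is a minimal one-dimensional sketch of a semi-implicit AOS step; the full scheme averages such solves over image rows and columns and couples them with the level-set narrow band and fast marching re-initialization, none of which is shown. A dense solve is used for readability; in practice a tridiagonal (Thomas) solver gives the linear-time behaviour that makes AOS fast.

    import numpy as np

    def aos_step_1d(u, g, tau):
        # One semi-implicit step along a single axis: solve
        # (I - tau * A_g) u_new = u, where A_g is the 1-D diffusion
        # operator with edge-stopping diffusivity g (same length as u).
        n = len(u)
        A = np.zeros((n, n))
        for i in range(n):
            gl = 0.5 * (g[i] + g[i - 1]) if i > 0 else 0.0
            gr = 0.5 * (g[i] + g[i + 1]) if i < n - 1 else 0.0
            if i > 0:
                A[i, i - 1] = gl
            if i < n - 1:
                A[i, i + 1] = gr
            A[i, i] = -(gl + gr)
        return np.linalg.solve(np.eye(n) - tau * A, u)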

  640.   Fernandez-Caballero, A, Mira, J, Fernandez, MA, and Lopez, MT, "Segmentation from motion of non-rigid objects by neuronal lateral interaction," PATTERN RECOGNITION LETTERS, vol. 22, pp. 1517-1524, 2001.

Abstract:   The problem we address is the discrimination of non-rigid objects capable of holding our attention in a scene. Motion allows the shapes of all moving objects to be obtained gradually. We introduce an algorithm that fuses spots obtained by means of neuronal lateral interaction in accumulative computation. (C) 2001 Elsevier Science B.V. All rights reserved.

  641.   Pardas, M, and Sayrol, E, "Motion estimation based tracking of active contours," PATTERN RECOGNITION LETTERS, vol. 22, pp. 1447-1456, 2001.

Abstract:   This paper addresses the application of active contours or snakes for tracking of contours in image sequences. We propose to use the dynamic programming implementation of the snakes in order to restrict the possible candidates for a given snaxel to those that have a high correlation with the corresponding snaxel in the previous frame. Besides, we claim that, in tracking applications, the motion compensation error has to be introduced in the external energy of the snake to be able to track generic contours. (C) 2001 Elsevier Science B.V. All rights reserved.
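
A dynamic-programming snake of the general kind described above can be sketched as a Viterbi-style pass over per-snaxel candidate positions, where the external cost would hold the motion-compensation (correlation) error of each candidate. The simple squared-distance internal term and the open-contour assumption are simplifications of this sketch, not the authors' exact energy.

    import numpy as np

    def dp_snake(candidates, ext_cost, smooth_weight=1.0):
        # candidates[i]: list of (x, y) positions allowed for snaxel i.
        # ext_cost[i][k]: external energy of candidate k of snaxel i
        # (e.g. a motion-compensation error).  Returns the index of the
        # chosen candidate for each snaxel along the minimum-energy path.
        n = len(candidates)
        cost = [np.asarray(ext_cost[0], dtype=float)]
        back = []
        for i in range(1, n):
            prev = np.asarray(candidates[i - 1], dtype=float)
            cur = np.asarray(candidates[i], dtype=float)
            d2 = ((prev[:, None, :] - cur[None, :, :]) ** 2).sum(axis=-1)
            total = (cost[-1][:, None] + smooth_weight * d2
                     + np.asarray(ext_cost[i], dtype=float)[None, :])
            back.append(total.argmin(axis=0))
            cost.append(total.min(axis=0))
        path = [int(cost[-1].argmin())]
        for b in reversed(back):                 # backtrack the optimum
            path.append(int(b[path[-1]]))
        return path[::-1]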

  642.   Ferrant, M, Macq, B, Nabavi, A, and Warfield, SK, "Deformable modeling for characterizing biomedical shape changes," DISCRETE GEOMETRY FOR COMPUTER IMAGERY, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1953, pp. 235-248, 2001.

Abstract:   We present a new algorithm for modeling and characterizing shape changes in 3D image sequences of biomedical structures. Our algorithm tracks the shape changes of the objects depicted in the image sequence using an active surface algorithm. To characterize the deformations of the surrounding and inner volume of the object's surfaces, we use a physics-based model of the objects the image represents. In the applications we are presenting, our physics-based model is linear elasticity and we solve the corresponding equilibrium equations using the Finite Element (FE) method. To generate a FE mesh from the initial 3D image, we have developed a new multiresolution tetrahedral mesh generation algorithm specifically suited for labeled image volumes. The shape changes of the surfaces of the objects are used as boundary conditions to our physics-based FE model and allow us to infer a volumetric deformation field from the surface deformations. Physics-based measures such as stress tensor maps can then be derived from our model for characterizing the shape changes of the objects in the image sequence. Experiments on synthetic images as well as on medical data show the performances of the algorithm.

  643.   Koozekanani, D, Boyer, K, and Roberts, C, "Retinal thickness measurements from optical coherence tomography using a Markov boundary model," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 900-916, 2001.

Abstract:   We present a system for detecting retinal boundaries in optical coherence tomography (OCT) B-scans. OCT is a relatively new imaging modality giving cross-sectional images that are qualitatively similar to ultrasound. However, the axial resolution with OCT is much higher, on the order of 10 μm. Objective, quantitative measures of retinal thickness may be made from OCT images. Knowledge of retinal thickness is important in the evaluation and treatment of many ocular diseases. The boundary-detection system presented here uses a one-dimensional edge-detection kernel to yield edge primitives. These edge primitives are rated, selected, and organized to form a coherent boundary structure by use of a Markov model of retinal boundaries as detected by OCT. Qualitatively, the boundaries detected by the automated system generally agreed extremely well with the true retinal structure for the vast majority of OCT images. Only one of the 1450 evaluation images caused the algorithm to fail. A quantitative evaluation of the retinal boundaries was performed as well, using the clinical application of automatic retinal thickness determination. Retinal thickness measurements derived from the algorithm's results were compared with thickness measurements from manually corrected boundaries for 1450 test images. The algorithm's thickness measurements over a 1-mm region near the fovea differed from the corrected thickness measurements by less than 10 μm for 74% of the images and by less than 25 μm (10% of normal retinal thickness) for 98.4% of the images. These errors are near the machine's resolution limit and still well below clinical significance. Current, standard clinical practice involves a qualitative, visual assessment of retinal thickness. A robust, quantitatively accurate system such as ours can be expected to improve patient care.
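
As a small illustration of the edge-primitive stage described above, the sketch below convolves each A-scan (column) of a B-scan with a one-dimensional derivative-of-Gaussian kernel; the kernel shape and scale are assumptions, and the Markov boundary model that rates and organises the resulting primitives is not shown.

    import numpy as np

    def axial_edge_response(bscan, sigma=2.0):
        # Column-wise 1-D derivative-of-Gaussian filtering; peaks in the
        # response mark candidate retinal boundaries in each A-scan.
        half = int(3 * sigma)
        z = np.arange(-half, half + 1, dtype=float)
        kernel = -z / sigma**2 * np.exp(-z**2 / (2.0 * sigma**2))
        return np.apply_along_axis(
            lambda col: np.convolve(col, kernel, mode='same'),
            axis=0, arr=bscan.astype(float))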

  644.   Torheim, G, Amundsen, T, Rinck, PA, Haraldseth, O, and Sebastiani, G, "Analysis of contrast-enhanced dynamic MR images of the lung," JOURNAL OF MAGNETIC RESONANCE IMAGING, vol. 13, pp. 577-587, 2001.

Abstract:   Recent studies have demonstrated the potential of dynamic contrast-enhanced magnetic resonance imaging (MRI) for describing pulmonary perfusion. However, breathing motion, susceptibility artifacts, and a low signal-to-noise ratio (SNR) make automatic pixel-by-pixel analysis difficult. In the present work, we propose a novel method to compensate for breathing motion. In order to test the feasibility of this method, we enrolled 53 patients with pulmonary embolism (N = 24), chronic obstructive pulmonary disease (COPD) (N = 14), and acute pneumonia (N = 15). A crucial part of the method, an automatic diaphragm detection algorithm, was evaluated in all 53 patients by two independent observers. The accuracy of the method in detecting the diaphragm showed a success rate of 92%. Furthermore, a Bayesian noise reduction technique was implemented and tested. This technique significantly reduced the noise level without removing important clinical information. In conclusion, the combination of a motion correction method and a Bayesian noise reduction method offered a rapid, semiautomatic pixel-by-pixel analysis of the lungs with great potential for research and clinical use. (C) 2001 Wiley-Liss, Inc.

  645.   Qatarneh, SM, Crafoord, J, Kramer, EL, Maguire, GQ, Brahme, A, Noz, ME, and Hyodynmaa, S, "A whole body atlas for segmentation and delineation of organs for radiation therapy planning," NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH SECTION A-ACCELERATORS SPECTROMETERS DETECTORS AND ASSOCIATED EQUIPMENT, vol. 471, pp. 160-164, 2001.

Abstract:   A semi-automatic procedure for delineation of organs, to be used as the basis of a whole body atlas database for radiation therapy planning, was developed. The Visible Human Male Computed Tomography (CT)-data set was used as a "standard man" reference. The organ of interest was outlined manually and then transformed by a polynomial warping algorithm onto a clinical patient CT. This provided an initial contour, which was then adjusted and refined by the semi-automatic active contour model to find the final organ outline. The liver was used as a test organ for evaluating the performance of the procedure. Liver outlines obtained by the segmentation algorithm on six patients were compared to those manually drawn by a radiologist. The combination of warping and the semi-automatic active contour model generally provided satisfactory segmentation results, but the procedure has to be extended to three dimensions. (C) 2001 Elsevier Science B.V. All rights reserved.

  646.   Biscay, RJ, and Mora, CM, "Metric sample spaces of continuous geometric curves and estimation of their centroids," MATHEMATISCHE NACHRICHTEN, vol. 229, pp. 15-49, 2001.

Abstract:   The metric sample space of Frechet curves (FRECHET, 1934, 1951, 1961) is based on a generalization of regular curves that covers continuous curves in full generality. This makes it possible to deal with both smooth and non-smooth, even non-rectifiable, geometric curves in statistical analysis. In the present paper this sample space is further extended in two directions that are relevant in practice: to incorporate information on landmark points in the curves and to impose invariance with respect to an arbitrary group of isometric spatial transformations. Properties of the introduced sample spaces of curves are studied, especially those concerning the generation and representation of random curves by random functions. In order to provide measures of central tendency and dispersion of random curves, centroids and restricted centroids of random curves are defined in a general metric framework, and methods for their consistent estimation are derived.

  647.   Aissaoui, R, Kauffmann, C, Dansereau, J, and de Guise, JA, "Analysis of pressure distribution at the body-seat interface in able-bodied and paraplegic subjects using a deformable active contour algorithm," MEDICAL ENGINEERING & PHYSICS, vol. 23, pp. 359-367, 2001.

Abstract:   In this paper, a semi-automatic method for segmenting pressure distribution image-based data at the body-seat interface is presented. The purpose of this work was to estimate the surface and the load supported by the ischial tuberosity (IT) region. The proposed method involves three steps: (1) detecting the IT region using a pressure-distribution image gradient; (2) estimating the contour of the IT region by an iterative active contour algorithm and finally (3) estimating the percentage of the surface and the weight-bearing of the IT region in a group of able-bodied (AB) and spinal-cord injury (SCI) subjects. It was found in this study that the weight bearing on the IT for the spinal-cord injured group is distributed on half the surface in comparison with the AB group or the powered wheelchair users groups. The findings of this study provide insights concerning pressure distribution in sitting for the paraplegic and able-bodied. (C) 2001 IPEM. Published by Elsevier Science Ltd. All rights reserved.

  648.   Jermyn, IH, and Ishikawa, H, "Globally optimal regions and boundaries as minimum ratio weight cycles," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 23, pp. 1075-1088, 2001.

Abstract:   We describe a new form of energy functional for the modeling and identification of regions in images. The energy is defined on the space of boundaries in the image domain and can incorporate very general combinations of modeling information both from the boundary (intensity gradients, etc.) and from the interior of the region (texture, homogeneity, etc.). We describe two polynomial-time digraph algorithms for finding the global minima of this energy. One of the algorithms is completely general, minimizing the functional for any choice of modeling information. It runs in a few seconds on a 256x256 image. The other algorithm applies to a subclass of functionals, but has the advantage of being extremely parallelizable. Neither algorithm requires initialization.
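
Minimum-ratio cycle problems of this kind are often explained through Lawler's parametric reduction: a cycle with ratio below lambda exists exactly when the graph reweighted by w - lambda*l has a negative cycle, so the optimal ratio can be bracketed by bisection. The sketch below illustrates that reduction with a plain Bellman-Ford negative-cycle test, assuming non-negative numerator weights and positive denominator weights; it is not the paper's own polynomial-time algorithm.

    def has_negative_cycle(n_nodes, edges):
        # Bellman-Ford style test; edges is a list of (u, v, weight).
        # dist starts at 0 everywhere, i.e. an implicit zero-cost source.
        dist = [0.0] * n_nodes
        for _ in range(n_nodes):
            changed = False
            for u, v, w in edges:
                if dist[u] + w < dist[v] - 1e-12:
                    dist[v] = dist[u] + w
                    changed = True
            if not changed:
                return False
        # still relaxable after n rounds => a negative cycle exists
        return any(dist[u] + w < dist[v] - 1e-12 for u, v, w in edges)

    def min_ratio_cycle_value(n_nodes, edges, iters=60):
        # Bisection on lambda; edges is a list of (u, v, w, l) with
        # w >= 0 and l > 0.  Returns an estimate of the minimum cycle
        # ratio sum(w) / sum(l).
        lo, hi = 0.0, max(w / l for _, _, w, l in edges)
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            reweighted = [(u, v, w - lam * l) for u, v, w, l in edges]
            if has_negative_cycle(n_nodes, reweighted):
                hi = lam
            else:
                lo = lam
        return hi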

  649.   Yan, H, "Fuzzy curve-tracing algorithm," IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, vol. 31, pp. 768-780, 2001.

Abstract:   This paper presents a fuzzy clustering algorithm for the extraction of a smooth curve from unordered noisy data. In this method, the input data are first clustered into different regions using the fuzzy c-means algorithm and each region is represented by its cluster center. Neighboring cluster centers are linked to produce a graph according to the average class membership values. Loops in the graph are removed to form a curve according to spatial relations of the cluster centers. The input samples are then reclustered using the fuzzy c-means (FCM) algorithm, with the constraint that the curve must be smooth. The method has been tested with both open and closed curves with good results.
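
For reference, the standard fuzzy c-means updates used in the first clustering stage can be sketched as below; the graph linking of cluster centres, loop removal, and the smoothness-constrained reclustering are specific to the paper and are not shown.

    import numpy as np

    def fuzzy_c_means(X, n_clusters, m=2.0, iters=100, tol=1e-6, seed=0):
        # Plain FCM: alternate centre and membership updates.
        # X is (n_samples, n_features); returns (centres, memberships).
        X = np.asarray(X, dtype=float)
        rng = np.random.default_rng(seed)
        U = rng.random((n_clusters, len(X)))
        U /= U.sum(axis=0)                        # columns sum to one
        for _ in range(iters):
            Um = U ** m
            centres = Um @ X / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=-1) + 1e-12
            p = 2.0 / (m - 1.0)
            U_new = d ** (-p) / (d ** (-p)).sum(axis=0)
            if np.abs(U_new - U).max() < tol:
                return centres, U_new
            U = U_new
        return centres, U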

  650.   Nikou, C, Bueno, G, Heitz, F, and Armspach, JP, "A joint physics-based statistical deformable model for multimodal brain image analysis," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 1026-1037, 2001.

Abstract:   A probabilistic deformable model for the representation of multiple brain structures is described. The statistically learned deformable model represents the relative location of different anatomical surfaces in brain magnetic resonance images (MRIs) and accommodates their significant variability across different individuals. The surfaces of each anatomical structure are parameterized by the amplitudes of the vibration modes of a deformable spherical mesh. For a given MRI in the training set, a vector containing the largest vibration modes describing the different deformable surfaces is created. This random vector is statistically constrained by retaining the most significant variation modes of its Karhunen-Loeve expansion on the training population. By these means, the conjunction of surfaces is deformed according to the anatomical variability observed in the training set. Two applications of the joint probabilistic deformable model are presented: isolation of the brain from MRI using the probabilistic constraints embedded in the model; and deformable model-based registration of three-dimensional multimodal (magnetic resonance/single photon emission computed tomography) brain images without removing nonbrain structures. The multiobject deformable model may be considered as a first step toward the development of a general purpose probabilistic anatomical atlas of the brain.
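
The statistical constraint described above amounts to a Karhunen-Loeve (principal component) expansion of training shape vectors. A generic sketch of training such a model is given below; the spherical-mesh vibration-mode parameterization that produces the vectors is assumed, not reproduced.

    import numpy as np

    def train_statistical_shape_model(training_vectors, n_modes):
        # training_vectors: (n_subjects, n_params) array, one row per
        # training subject (e.g. stacked vibration-mode amplitudes).
        # A new instance is reconstructed as mean + modes @ b for a small
        # coefficient vector b constrained by the eigenvalues.
        X = np.asarray(training_vectors, dtype=float)
        mean = X.mean(axis=0)
        centred = X - mean
        cov = centred.T @ centred / (len(X) - 1)
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1][:n_modes]
        return mean, eigvecs[:, order], eigvals[order]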

  651.   Vincze, M, "Robust tracking of ellipses at frame rate," PATTERN RECOGNITION, vol. 34, pp. 487-498, 2001.

Abstract:   The critical issue in vision-based control of motion is robust tracking at real time. A method is presented that tracks ellipses at field rate using a Pentium PC. Robustness is obtained by integrating gradient information and mode (intensity) values for the detection of edgels along the contour of the ellipse and by using a probabilistic (RANSAC-like, Fischler and Bolles, Commun. ACM 24(6) (1981) 381) method to find the most likely ellipse-shaped object. Detailed experiments document the capabilities and limitations of the approach and the robustness achieved. (C) 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

  652.   Son, JD, and Ko, HS, "Robust motion tracking of multiple objects with KL-IMMPDAF," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E84D, pp. 179-187, 2001.

Abstract:   This paper describes how the image sequences taken by a stationary video camera may be effectively processed to detect and track moving objects against a stationary background in real-time. Our approach is first to isolate the moving objects in image sequences via a modified adaptive background estimation method and then perform token tracking of multiple objects based on features extracted from the processed image sequences. In feature-based multiple object tracking, the most prominent tracking issues are track initialization, data association, occlusions due to traffic congestion, and object maneuvering. While there are limited past works addressing these problems, most relevant tracking systems proposed in the past are independently focused on either "occlusion" or "data association" only. In this paper, we propose the KL-IMMPDA (Kanade Lucas-Interacting Multiple Model Probabilistic Data Association) filtering approach for multiple-object tracking to collectively address the key issues. The proposed method essentially employs optical flow measurements for both detection and track initialization while the KL-IMMPDA filter is used to accept or reject measurements, which belong to other objects. The data association performed by the proposed KL-IMMPDA results in an effective tracking scheme, which is robust to partial occlusions and image clutter during object maneuvering. The simulation results show a significant performance improvement for tracking multi-objects in occlusion and maneuvering, when compared to other conventional trackers such as the Kalman filter.

  653.   Luo, YH, and Nelson, BJ, "Fusing force and vision feedback for manipulating deformable objects," JOURNAL OF ROBOTIC SYSTEMS, vol. 18, pp. 103-117, 2001.

Abstract:   This article describes a framework that fuses vision and force feedback for the control of highly deformable objects. Deformable active contours, or snakes, are used to visually observe changes in object shape over time. Finite-element models of object deformations are used with force feedback to predict expected visually observed deformations. Our approach improves the performance of large, complex deformable contour tracking over traditional computer vision tracking techniques. This same approach of combining deformable active contours with finite-element material models is modified so that a vision sensor, i.e., a charge-coupled device (CCD) camera, can be used as a force sensor. By visually tracking changes in contours on the object, material deflections can be transformed into applied stress estimates through finite element modeling. Therefore, internal object stresses due to object manipulation can be visually observed and controlled, thus creating a framework for deformable object manipulation. A pinhole camera model is used to accomplish vision and force sensor feedback assimilation from these two sensing modalities during manipulation. (C) 2001 John Wiley & Sons, Inc.

  654.   Manh, AG, Rabatel, G, Assemat, L, and Aldon, MJ, "Weed leaf image segmentation by deformable templates," JOURNAL OF AGRICULTURAL ENGINEERING RESEARCH, vol. 80, pp. 139-146, 2001.

Abstract:   In order to improve weeding strategies in terms of pesticide reduction, spatial distribution and characterization of in-field weed populations are important. With recent improvements in image processing, many studies have focused on weed detection by vision techniques. However, weed identification still remains difficult because of outdoor scenic complexity and morphological variability of plants. A new method of weed leaf segmentation based on the use of deformable templates is proposed. This approach has the advantage of applying a priori knowledge to the object searched, improving the robustness of the segmentation stage. The principle consists of fitting a parametric model to the leaf outlines in the image, by minimizing an energy term related to internal constraints of the model and salient features of the image, such as the colour of the plant. This method showed promising results for one weed species, green foxtail (Setaria viridis). In particular, it was possible to characterize partially occluded leaves. This constitutes a first step towards a recognition system, based on leaf characteristics and their relative spatial position. (C) 2001 Silsoe Research Institute.

  655.   Fukuda, T, Morimoto, Y, Morishita, S, and Tokuyama, T, "Data mining with optimized two-dimensional association rules," ACM TRANSACTIONS ON DATABASE SYSTEMS, vol. 26, pp. 179-213, 2001.

Abstract:   We discuss data mining based on association rules for two numeric attributes and one Boolean attribute. For example, in a database of bank customers, "Age" and "Balance" are two numeric attributes, and "CardLoan" is a Boolean attribute. Taking the pair (Age, Balance) as a point in two-dimensional space, we consider an association rule of the form ((Age, Balance) is an element of P) double right arrow (CardLoan = Yes), which implies that bank customers whose ages and balances fall within a planar region P tend to take out credit card loans with a high probability. We consider two classes of regions, rectangles and admissible (i.e., connected and x-monotone) regions. For each class, we propose efficient algorithms for computing the regions that give optimal association rules for gain, support, and confidence, respectively. We have implemented the algorithms for admissible regions as well as several advanced functions based on them in our data mining system named SONAR (System for Optimized Numeric Association Rules), where the rules are visualized by using a graphic user interface to make it easy for users to gain an intuitive understanding of rules.
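
As a small worked example of the quantities being optimized above, the sketch below evaluates the support, confidence, and gain of a single rectangular rule over numeric attribute arrays; the gain definition hits - min_conf * count follows the usual optimized-gain formulation, and the attribute names are placeholders.

    import numpy as np

    def rectangle_rule_stats(age, balance, is_positive,
                             age_range, balance_range, min_conf=0.5):
        # Support, confidence and gain of the rule
        # ((Age, Balance) in rectangle) => (attribute = Yes).
        age = np.asarray(age, dtype=float)
        balance = np.asarray(balance, dtype=float)
        is_positive = np.asarray(is_positive, dtype=bool)
        inside = ((age >= age_range[0]) & (age <= age_range[1]) &
                  (balance >= balance_range[0]) & (balance <= balance_range[1]))
        count = int(inside.sum())
        hits = int((inside & is_positive).sum())
        support = count / len(age)
        confidence = hits / count if count else 0.0
        gain = hits - min_conf * count
        return support, confidence, gain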

  656.   Kaygin, S, and Bulut, MM, "A new one-pass algorithm to detect region boundaries," PATTERN RECOGNITION LETTERS, vol. 22, pp. 1169-1178, 2001.

Abstract:   In this paper, active chain is introduced as a chain coded contour whose shape is changed during iterations while it stays closed, clockwise and 4 connected. The iterations of the proposed algorithm move the chain items toward the interior region. This behaviour is similar to the active contours (snakes). If the initial contour is counter-clockwise, the same algorithm causes the active chain to expand like a balloon and detect the inner boundaries of the regions. The chain coded contours of all the separate regions can be detected in one pass in O(NM) where N and M are the image dimensions in pixels. (C) 2001 Elsevier Science B.V. All rights reserved.

  657.   Tsai, A, Yezzi, A, and Willsky, AS, "Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 1169-1186, 2001.

Abstract:   In this work, we first address the problem of simultaneous image segmentation and smoothing by approaching the Mumford-Shah paradigm from a curve evolution perspective. In particular, we let a set of deformable contours define the boundaries between regions in an image where we model the data via piecewise smooth functions and employ a gradient flow to evolve these contours. Each gradient step involves solving an optimal estimation problem for the data within each region, connecting curve evolution and the Mumford-Shah functional with the theory of boundary-value stochastic processes. The resulting active contour model offers a tractable implementation of the original Mumford-Shah model (i.e., without resorting to elliptic approximations which have traditionally been favored for greater ease in implementation) to simultaneously segment and smoothly reconstruct the data within a given image in a coupled manner. Various implementations of this algorithm are introduced to increase its speed of convergence. We also outline a hierarchical implementation of this algorithm to handle important image features such as triple points and other multiple junctions. Next, by generalizing the data fidelity term of the original Mumford-Shah functional to incorporate a spatially varying penalty, we extend our method to problems in which data quality varies across the image and to images in which sets of pixel measurements are missing. This more general model leads us to a novel PDE-based approach for simultaneous image magnification, segmentation, and smoothing, thereby extending the traditional applications of the Mumford-Shah functional which only considers simultaneous segmentation and smoothing.
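
For reference, the Mumford-Shah functional that the curve-evolution scheme above implements is commonly written (up to the choice of weights) as

    E(f, C) = \int_{\Omega} (f - g)^2 \, dx
              + \alpha \int_{\Omega \setminus C} |\nabla f|^2 \, dx
              + \beta \, \mathrm{length}(C),

where g is the observed image on the domain Omega, f the piecewise-smooth approximation, C the set of boundary curves, and alpha and beta weighting parameters. The spatially varying data-fidelity penalty mentioned in the abstract generalizes the constant weight on the first term.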

  658.   Hsu, CC, Lai, PH, Lee, C, and Huang, WC, "Automated nasopharyngeal carcinoma detection with dynamic gadolinium-enhanced MR imaging," METHODS OF INFORMATION IN MEDICINE, vol. 40, pp. 331-337, 2001.

Abstract:   Objectives: The purpose of this research is to develop an automatic medical diagnosis system for segmenting nasopharyngeal carcinoma (NPC) with dynamic gadolinium-enhanced MR imaging. Methods: This system is a multistage process, involving motion correction, head mask generation, dynamic MR data quantitative evaluation, rough segmentation, and rough segmentation refinement. Two approaches, a relative signal increase method and a slope method, are proposed for the quantitative evaluation of dynamic MR data. Results: The NPC detection results obtained using the proposed methods had a rating of 85% in match percent compared with those lesions identified by an experienced radiologist. The match percents for the two proposed methods did not differ significantly. However, the slope method was about twelve times faster than the relative signal increase method. Conclusions: The proposed methods can identify the NPC regions quickly and effectively. This system can enhance the performance of clinical diagnosis.

  659.   Pitermann, M, and Munhall, KG, "An inverse dynamics approach to face animation," JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 110, pp. 1570-1580, 2001.

Abstract:   Muscle-based models of the human face produce high quality animation but rely on recorded muscle activity signals or synthetic muscle signals that are often derived by trial and error. This paper presents a dynamic inversion of a muscle-based model (Lucero and Munhall, 1999) that permits the animation to be created from kinematic recordings of facial movements. Using a nonlinear optimizer (Powell's algorithm), the inversion produces a muscle activity set for seven muscles in the lower face that minimize the root mean square error between kinematic data recorded with OPTOTRAK and the corresponding nodes of the modeled facial mesh. This inverted muscle activity is then used to animate the facial model. In three tests of the inversion, strong correlations were observed for kinematics produced from synthetic muscle activity, for OPTOTRAK kinematics recorded from a talker for whom the facial model is morphologically adapted and finally for another talker with the model morphology adapted to a different individual. The correspondence between the animation kinematics and the three-dimensional OPTOTRAK data are very good and the animation is of high quality. Because the kinematic to electromyography (EMG) inversion is ill posed, there is no relation between the actual EMG and the inverted EMG. The overall redundancy of the motor system means that many different EMG patterns can produce the same kinematic output. (C) 2001 Acoustical Society of America.
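
A minimal sketch of the inversion loop described above, using Powell's derivative-free method as provided by SciPy; forward_model stands in for the muscle-based face model (an assumption of this sketch, not the paper's code), mapping an activation vector to predicted marker positions.

    import numpy as np
    from scipy.optimize import minimize

    def invert_muscle_activity(forward_model, recorded_markers, n_muscles=7):
        # Find muscle activations minimising the RMS error between the
        # model-predicted and recorded 3-D marker positions.
        recorded = np.asarray(recorded_markers, dtype=float)

        def rms_error(a):
            predicted = np.asarray(forward_model(a), dtype=float)
            return float(np.sqrt(np.mean((predicted - recorded) ** 2)))

        a0 = np.zeros(n_muscles)                  # start from zero activation
        result = minimize(rms_error, a0, method='Powell')
        return result.x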

  660.   Ohtake, Y, and Belyaev, AG, "Mesh optimization for polygonized isosurfaces," COMPUTER GRAPHICS FORUM, vol. 20, pp. C368-C376, 2001.

Abstract:   In this paper we propose a method for improvement of isosurface polygonizations. Given an initial polygonization of an isosurface, we introduce a mesh evolution process initialized by the polygonization. The evolving mesh converges quickly to its limit mesh, which provides a high-quality approximation of the isosurface even if the isosurface has sharp features, boundaries, or complex topology. To analyze how closely the evolving mesh approaches its target isosurface, we introduce error estimators measuring the deviations of the mesh vertices from the isosurface and of the mesh normals from the isosurface normals. A new technique for mesh editing with isosurfaces is also proposed. In particular it can be used for creating carving effects.

  661.   Latson, LA, Powell, KA, Sturm, B, Schvartzman, PR, and White, RD, "Clinical validation of an automated boundary tracking algorithm on cardiac MR images," INTERNATIONAL JOURNAL OF CARDIOVASCULAR IMAGING, vol. 17, pp. 279-286, 2001.

Abstract:   The goal of this research was to develop an automated algorithm for tracking the borders of the left ventricle (LV) in a cine-MRI gradient-echo temporal data set. The algorithm was validated on four patient populations: healthy volunteers and patients with dilated cardiomyopathy (DCM), left ventricular hypertrophy (LVH), or left ventricular aneurysm (LVA). A full tomographic set (approximately 11 slices/case) of short-axis images through systole was obtained for each patient. Initial endocardial and epicardial contours for the end-diastolic (ED) and end-systolic (ES) frames were manually traced on the computer by an experienced radiologist. The ED tracings were used as the starting point for the algorithm. The borders were tracked through each phase of the temporal data set, until the ES frame was reached (approximately 7 phases/slice). Peak gradients along equally spaced chords calculated perpendicular to a centerline determined midway between the endocardial and epicardial borders were used for border detection. This approach was tested by comparing the LV epicardial and endocardial volumes calculated at ES to those based on the manual tracings. The results of the algorithm compared favorably with both the endocardial (r^2 = 0.72-0.98) and epicardial (r^2 = 0.96-0.99) volumes from the manual tracings.

  662.   Schmidt-Trucksass, A, Cheng, DC, Sandrock, M, Schulte-Monting, J, Rauramaa, R, Huonker, M, and Burkhardt, H, "Computerized analysing system using the active contour in ultrasound measurement of carotid artery intima-media thickness," CLINICAL PHYSIOLOGY, vol. 21, pp. 561-569, 2001.

Abstract:   Background and purpose: B-mode measurement of the carotid intima-media (IM) thickness (T) based on manual tracing (MT) procedures is dependent on the subjectivity of the reader, and existing automatic tracing procedures often fail to detect the IM boundaries accurately. The purpose of this study was to compare the tracing results of the IM boundaries of the carotid wall with a new automatic identification (AI) procedure, based on an active contour model, and computer-assisted manual tracing (MT). Methods: The detection of the IM boundaries was performed with both procedures in 126 ultrasound images [63 each of the common carotid artery (CCA) and carotid bulb] along the far wall of the distal CCA and the carotid bulb. Intra- and inter-reader variability for mean and maximum IMT with AI and MT and accuracy of identification of both IM boundaries were evaluated. Results: Using MT, the intra- and inter-reader variability amounted to 0.01-0.03 and 0.03-0.07 mm, respectively. The variability was slightly higher in the carotid bulb than in the CCA. Using AI, the variability was almost eliminated. Mean and maximum IMT were measured systematically lower by AI compared with MT in all regions by 0.01 mm. The accuracy of identification was similar for both IM boundaries, but lower in the carotid bulb region than in the CCA. Conclusions: The new AI procedure identifies both IM boundaries in the region of the far wall of the CCA and carotid bulb with high precision, and eliminates most of the intra- and inter-reader variability of the IMT measurement using MT.

  663.   Park, J, and Keller, JM, "Snakes on the watershed," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 23, pp. 1201-1205, 2001.

Abstract:   In this paper, we present a new approach for object boundary extraction, called the watersnake. It is a two-step snake algorithm whose energy functional is minimized by the dynamic programming method. It is more robust to local minima because it finds the solution by searching the entire energy space. To reduce the complexity of the minimization process, the watershed transformation and a coarse-to-fine strategy are used. The new technique is compared to standard methods for accuracy on synthetic data and is applied to segmentation of white blood cells in bone marrow images.

  664.   Moghaddam, B, Nastar, C, and Pentland, A, "A Bayesian similarity measure for deformable image matching," IMAGE AND VISION COMPUTING, vol. 19, pp. 235-244, 2001.

Abstract:   We propose a probabilistic similarity measure for direct image matching based on a Bayesian analysis of image deformations. We model two classes of variation in object appearance: intra-object and extra-object. The probability density functions for each class are then estimated from training data and used to compute a similarity measure based on the a posteriori probabilities. Furthermore, we use a novel representation for characterizing image differences using a deformable technique for obtaining pixel-wise correspondences. This representation, which is based on a deformable 3D mesh in XYI-space, is then experimentally compared with two simpler representations: intensity differences and optical flow. The performance advantage of our deformable matching technique is demonstrated using a typically hard test set drawn from the US Army's FERET face database. (C) 2001 Elsevier Science B.V. All rights reserved.
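
A minimal sketch of the a-posteriori similarity idea above, assuming full-covariance Gaussian densities fitted to intra-object and extra-object difference vectors; in practice such densities are usually estimated in a reduced subspace, which is omitted here.

    import numpy as np

    def intra_object_posterior(delta, mean_i, cov_i, mean_e, cov_e, prior_i=0.5):
        # A-posteriori probability that the difference/deformation vector
        # `delta` belongs to the intra-object class.
        def gaussian_pdf(x, mean, cov):
            d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
            k = len(d)
            norm = np.sqrt(((2.0 * np.pi) ** k) * np.linalg.det(cov))
            return float(np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / norm)

        p_i = gaussian_pdf(delta, mean_i, cov_i) * prior_i
        p_e = gaussian_pdf(delta, mean_e, cov_e) * (1.0 - prior_i)
        return p_i / (p_i + p_e)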

  665.   Bhalerao, A, and Wilson, R, "Unsupervised image segmentation combining region and boundary," IMAGE AND VISION COMPUTING, vol. 19, pp. 353-368, 2001.

Abstract:   An integrated approach to image segmentation is presented that combines region and boundary information using maximum a posteriori estimation and decision theory. The algorithm employs iterative, decision-directed estimation performed on a novel multi-resolution representation. The use of a multi-resolution technique ensures both robustness in noise and efficiency of computation, while the model-based estimation and decision process is flexible and spatially local, thus avoiding assumptions about global homogeneity or size and number of regions. A comparative evaluation of the method against region-only and boundary-only methods is presented and is shown to produce accurate segmentations at quite low signal-to-noise ratios. (C) 2001 Elsevier Science B.V. All rights reserved.

  666.   Gulick, VC, Morris, RL, Ruzon, MA, and Roush, TL, "Autonomous image analyses during the 1999 Marsokhod rover field test," JOURNAL OF GEOPHYSICAL RESEARCH-PLANETS, vol. 106, pp. 7745-7763, 2001.

Abstract:   A Martian rover capable of analyzing images autonomously could traverse greater path lengths and return data with greater scientific content. A more intelligent rover could, for example, automatically select targets of interest (e.g., rocks, layers), return spectral or high-resolution image data of these targets at the same time, remove less interesting or redundant parts of images before transmitting them, and provide compact information or representations of its environment. Three prototype algorithms, a horizon detector, a rock detector, and a layer detector have been developed and tested during the 1999 Marsokhod rover field test in Silver Lake, California. The results are encouraging and demonstrate the potential savings in time as well as the potential increase in the amount of relevant science data returned in each command cycle.

  667.   Boykov, Y, Veksler, O, and Zabih, R, "Fast approximate energy minimization via graph cuts," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 23, pp. 1222-1239, 2001.

Abstract:   Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. In this paper, we consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy.

  668.   Vemuri, BC, Guo, YL, and Wang, ZZ, "Deformable pedal curves and surfaces: Hybrid geometric active models for shape recovery," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 44, pp. 137-155, 2001.

Abstract:   In this paper, we propose significant extensions to the "snake pedal" model, a powerful geometric shape modeling scheme introduced in (Vemuri and Guo, 1998). The extension allows the model to automatically cope with topological changes and for the first time, introduces the concept of a compact global shape into geometric active models. The ability to characterize global shape of an object using very few parameters facilitates shape learning and recognition. In this new modeling scheme, object shapes are represented using a parameterized function-called the generator-which accounts for the global shape of an object and the pedal curve (surface) of this global shape with respect to a geometric snake to represent any local detail. Traditionally, pedal curves (surfaces) are defined as the loci of the feet of perpendiculars to the tangents of the generator from a fixed point called the pedal point. Local shape control is achieved by introducing a set of pedal points-lying on a snake-for each point on the generator. The model dubbed as a "snake pedal" allows for interactive manipulation via forces applied to the snake. In this work, we replace the snake by a geometric snake and derive all the necessary mathematics for evolving the geometric snake when the snake pedal is assumed to evolve as a function of its curvature. Automatic topological changes of the model may be achieved by implementing the geometric snake in a level-set framework. We demonstrate the applicability of this modeling scheme via examples of shape recovery from a variety of 2D and 3D image data.

  669.   Harari, D, Furst, M, Kiryati, N, Caspi, A, and Davidson, M, "A computer-based method for the assessment of body-image distortions in anorexia-nervosa patients," IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, vol. 5, pp. 311-319, 2001.

Abstract:   A computer-based method for the assessment of body-image distortions in anorexia nervosa and other eating-disorder patients is presented in this paper. At the core of the method is a realistic pictorial simulation of lifelike weight changes, applied to a real source image of the patient. The patients, using a graphical user interface, adjust their body shapes until they meet their self-perceived appearance. Measuring the extent of virtual fattening or slimming of a body with respect to its real shape and size allows direct quantitative evaluation of the cognitive distortion in body image. In a preliminary experiment involving 33 anorexia-nervosa patients, 70% of the subjects chose an image with simulated visual weight gain between 8%-16% as their "real" body image, while only one of them recognized the original body image. In a second experiment involving 30 healthy participants, the quality of the weight modified images was evaluated by pairwise selection trials. Over a weight change range from -16% to +28%, in about 30% of the trials, artificially modified images were mistakenly taken as "original" images, thus demonstrating the quality of the artificial images. The method presented is currently in a clinical validation phase, toward application in the research, diagnosis, evaluation, and treatment of eating disorders.

  670.   Zoroofi, RA, Nishii, T, Sato, Y, Sugano, N, Yoshikawa, H, and Tamura, S, "Segmentation of avascular necrosis of the femoral head using 3-D MR images," COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 25, pp. 511-521, 2001.

Abstract:   Avascular necrosis of the femoral head (ANFH) is a common clinical disorder in the orthopedic field. Traditional approaches to study the extent of ANFH rely primarily on manual segmentation of clinical magnetic resonance images (MRI). However, manual segmentation is insufficient for quantitative evaluation and staging of ANFH. This paper presents a new computerized approach for segmentation of necrotic lesions of the femoral head. The segmentation method consists of several steps including histogram based thresholding, 3-D morphological operations, oblique data reconstruction, and 2-D ellipse fitting. The proposed technique is rapid and efficient. In addition, it is available as a Microsoft Windows free software package on the Internet. Feasibility of the method is demonstrated on the data sets of 30 patients (1500 MR images). (C) 2001 Elsevier Science Ltd. All rights reserved.
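
A minimal sketch of the first stages of a threshold-plus-morphology pipeline of the kind listed above, using SciPy's ndimage routines; the intensity bounds are placeholders, and the oblique data reconstruction and 2-D ellipse-fitting stages are not shown.

    import numpy as np
    from scipy import ndimage

    def rough_lesion_mask(volume, low, high, closing_iterations=2):
        # Keep voxels with intensity in [low, high], clean the binary mask
        # with 3-D morphological closing, and keep the largest connected
        # component as a rough lesion candidate.
        mask = (volume >= low) & (volume <= high)
        mask = ndimage.binary_closing(mask, iterations=closing_iterations)
        labels, n_components = ndimage.label(mask)
        if n_components == 0:
            return mask
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n_components + 1))
        return labels == (int(np.argmax(sizes)) + 1)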

  671.   Dumitras, A, and Venetsanopoulos, AN, "Angular map-driven snakes with application to object shape description in color images," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 1851-1859, 2001.

Abstract:   We propose a method for shape description of objects in color images. Our method employs angular maps in order to identify significant changes of color within the image, which are then used to drive snake models. To obtain an angular map, the angle values of the vectors corresponding to color image pixels are first computed with respect to a reference vector, and organized in a two-dimensional matrix. To identify significant color changes within the original image, the edges of the angular map are next extracted. The resulting edge map is then presented to a snake model. Distance and gradient vector flow snake models have been employed in this work. Experimental results show, not only that the resulting object shape descriptions are accurate and quite similar, but also that our method is computationally efficient and flexible.
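
A minimal sketch of the angular-map computation described above: the angle of each pixel's colour vector with respect to a reference vector, arranged as a two-dimensional map whose edges mark significant colour changes. The choice of reference vector and the use of plain RGB values are assumptions of this sketch; the edge extraction and the snake itself are not shown.

    import numpy as np

    def angular_map(rgb_image, reference=(1.0, 1.0, 1.0)):
        # Angle (radians) between each pixel's colour vector and a fixed
        # reference vector; rgb_image is an (H, W, 3) array.
        ref = np.asarray(reference, dtype=float)
        ref /= np.linalg.norm(ref)
        pixels = rgb_image.astype(float)
        norms = np.linalg.norm(pixels, axis=-1) + 1e-12
        cos_angle = np.clip((pixels @ ref) / norms, -1.0, 1.0)
        return np.arccos(cos_angle)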

  672.   Barbosa, J, Tavares, J, and Padilha, AJ, "Parallel image processing system on a cluster of personal computers - Best student paper award: First prize," VECTOR AND PARALLEL PROCESSING - VECPAR 2000, LECTURE NOTES IN COMPUTER SCIENCE, vol. 1981, pp. 439-452, 2001.

Abstract:   The most demanding image processing applications require real-time processing, often using special purpose hardware. The work herein presented refers to the application of cluster computing for off-line image processing, where the end user benefits from the operation of otherwise idle processors in the local LAN. The virtual parallel computer is composed of off-the-shelf personal computers connected by a low cost network, such as a 10 Mbits/s Ethernet. The aim is to minimise the processing time of a high level image processing package. The system developed to manage the parallel execution is described and some results obtained for the parallelisation of high level image processing algorithms are discussed, namely for active contour and modal analysis methods which require the computation of the eigenvectors of a symmetric matrix.

  673.   Berthilsson, R, Astrom, K, and Heyden, A, "Reconstruction of general curves, using factorization and bundle adjustment," INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 41, pp. 171-182, 2001.

Abstract:   In this paper, we extend the notion of affine shape, introduced by Sparr, from finite point sets to curves. The extension makes it possible to reconstruct 3D-curves up to projective transformations, from a number of their 2D-projections. We also extend the bundle adjustment technique from point features to curves. The first step of the curve reconstruction algorithm is based on affine shape. It is independent of choice of coordinates, is robust, does not rely on any preselected parameters and works for an arbitrary number of images. In particular this means that, except for a small set of curves (e.g. a moving line), a solution is given to the aperture problem of finding point correspondences between curves. The second step takes advantage of any knowledge of measurement errors in the images. This is possible by extending the bundle adjustment technique to curves. Finally, experiments are performed on both synthetic and real data to show the performance and applicability of the algorithm.

  674.   Sahiner, B, Petrick, N, Chan, HP, Hadjiiski, LM, Paramagul, C, Helvie, MA, and Gurcan, MN, "Computer-aided characterization of mammographic masses: Accuracy of mass segmentation and its effects on characterization," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 1275-1284, 2001.

Abstract:   Mass segmentation is used as the first step in many computer-aided diagnosis (CAD) systems for classification of breast masses as malignant or benign. The goal of this paper was to study the accuracy of an automated mass segmentation method developed in our laboratory, and to investigate the effect of the segmentation stage on the overall classification accuracy. The automated segmentation method was quantitatively compared with manual segmentation by two expert radiologists (R1 and R2) using three similarity or distance measures on a data set of 100 masses. The area overlap measures between R1 and R2, the computer and R1, and the computer and R2 were 0.76 +/- 0.13, 0.74 +/- 0.11, and 0.74 +/- 0.13, respectively. The interobserver difference in these measures between the two radiologists was compared with the corresponding differences between the computer and the radiologists. Using three similarity measures and data from two radiologists, a total of six statistical tests were performed. The difference between the computer and the radiologist segmentation was significantly larger than the interobserver variability in only one test. Two sets of texture, morphological, and spiculation features, one based on the computer segmentation, and the other based on radiologist segmentation, were extracted from a data set of 249 films from 102 patients. A classifier based on stepwise feature selection and linear discriminant analysis was trained and tested using the two feature sets. The leave-one-case-out method was used for data sampling. For case-based classification, the area A(z) under the receiver operating characteristic (ROC) curve was 0.89 and 0.88 for the feature sets based on the radiologist segmentation and computer segmentation, respectively. The difference between the two ROC curves was not statistically significant.
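
The area overlap measure referred to above is typically an intersection-over-union of two binary masks; the sketch below uses that common definition, which may differ in detail from the paper's.

    import numpy as np

    def area_overlap(mask_a, mask_b):
        # Intersection-over-union of two binary segmentation masks.
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        union = np.logical_or(a, b).sum()
        return float(np.logical_and(a, b).sum() / union) if union else 1.0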

  675.   Ben-Arie, J, and Wang, ZQ, "Hierarchical shape description and similarity-invariant recognition using gradient propagation," INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 15, pp. 1251-1261, 2001.

Abstract:   This paper presents a novel hierarchical shape description scheme based on propagating the image gradient radially. This radial propagation is equivalent to a vectorial convolution with sector elements. The propagated gradient field collides at centers of convex/concave shape components, which can be detected as points of high directional disparity. A novel vectorial disparity measure called Cancellation Energy is used to measure this collision of the gradient field, and local maxima of this measure yield feature tokens. These feature tokens form a compact description of shapes and their components and indicate their central locations and sizes. In addition, a Gradient Signature is formed by the gradient field that collides at each center, which is itself a robust and size-independent description of the corresponding shape component. Experimental results demonstrate that the shape description is robust to distortion, noise and clutter. An important advantage of this scheme is that the feature tokens are obtained pre-attentively, without prior understanding of the image. The hierarchical description is also successfully used for similarity-invariant recognition of 2D shapes with a multidimensional indexing scheme based on the Gradient Signature.

  676.   Han, C, Hatsukami, TS, Hwang, JN, and Yuan, C, "A fast minimal path active contour model," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 865-873, 2001.

Abstract:   A new minimal path active contour model for boundary extraction is presented. Implementing the new approach requires four steps: 1) users place some initial end points on or near the desired boundary through an interactive interface; 2) a potential searching window is defined between two end points; 3) a graph search method based on conic curves is used to search the boundary; 4) a "wriggling" procedure is used to calibrate the contour and reduce sensitivity of the search results to the selected initial end points. The last three steps are performed automatically. In the proposed approach, the potential window systematically provides a new node connection for the later graph search, which is different from the row-by-row and column-by-column methods used in the classical graph search. Furthermore, this graph search also suggests ways to design a "wriggling" procedure to evolve the contour in the direction nearly perpendicular to itself by creating a list of displacement vectors in the potential window. The proposed minimal path active contour model speeds up the search and reduces the "metrication error" frequently encountered in the classical graph search methods, e.g., the dynamic programming minimal path (DPMP) method.

  677.   Cohen, LD, and Deschamps, T, "Multiple contour finding and perceptual grouping as a set of energy minimizing paths," ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 2134, pp. 560-575, 2001.

Abstract:   We address the problem of finding a set of contour curves in an image. We consider the problem of perceptual grouping and contour completion, where the data is a set of points in the image. A new method to find complete curves from a set of contours or edge points is presented. Our approach is an extension of previous work on finding a set of contours as minimal paths between end points using the fast marching algorithm. Given a set of key points, we find the pairs of points that have to be linked and the paths that join them. We use the saddle points of the minimal action map. The paths are obtained by backpropagation from the saddle points to both points of each pair. We also propose an extension of this method for contour completion where the data is a set of connected components. We find the minimal paths between each of these components, until the complete set of these "regions" is connected. The paths are obtained using the same backpropagation from the saddle points to both components.

  678.   Yanai, K, and Deguchi, K, "A multi-resolution image understanding system based on multi-agent architecture for high-resolution images," IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. E84D, pp. 1642-1650, 2001.

Abstract:   Recently, high-resolution images with more than one million pixels have become easily available. However, such an image requires much processing time and memory in an image understanding system. In this paper, we propose an integrated image understanding system combining multi-resolution analysis with a multiagent-based architecture for high-resolution images. The proposed system can handle a high-resolution image effectively without much extra cost. We implemented an experimental system for images of indoor scenes.

  679.   Yoo, SK, Wang, G, Rubinstein, JT, and Vannier, MW, "Semiautomatic segmentation of the cochlea using real-time volume rendering and regional adaptive snake modeling," JOURNAL OF DIGITAL IMAGING, vol. 14, pp. 173-181, 2001.

Abstract:   The human cochlea in the inner ear is the organ of hearing. Segmentation is a prerequisite step for 3-dimensional modeling and analysis of the cochlea. It may have uses in the clinical practice of otolaryngology and neuroradiology, as well as for cochlear implant research. In this report, an interactive, semiautomatic, coarse-to-fine segmentation approach is developed on a personal computer with a real-time volume rendering board. In the coarse segmentation, parameters, including the intensity range and the volume of interest, are defined to roughly segment the cochlea through user interaction. In the fine segmentation, a regional adaptive snake model designed as a refining operator separates the cochlea from other anatomic structures. The combination of image information and expert knowledge enables the regional adaptive snake to deform effectively to the cochlear boundary, whereas the real-time volume rendering provides users with direct 3-dimensional visual feedback to modify intermediate parameters and finalize the segmentation. The performance is tested using spiral computed tomography (CT) images of the temporal bone and compared with the seed-point region growing with manual modification offered by the commercial Analyze software. Our method represents an optimal balance between the efficiency of an automatic algorithm and the accuracy of manual work. Copyright (C) 2001 by W.B. Saunders Company.

  680.   Montagnat, J, Delingette, H, and Ayache, N, "A review of deformable surfaces: topology, geometry and deformation," IMAGE AND VISION COMPUTING, vol. 19, pp. 1023-1040, 2001.

Abstract:   Deformable models have raised much interest and found various applications in the fields of computer vision and medical imaging. They provide an extensible framework to reconstruct shapes. Deformable surfaces, in particular, are used to represent 3D objects. They have been used for pattern recognition [Computer Vision and Image Understanding 69(2) (1998) 201; IEEE Transactions on Pattern Analysis and Machine Intelligence 19(10) (1997) 1115], computer animation [ACM Computer Graphics (SIGGRAPH'87) 21(4) (1987) 205], geometric modelling [Computer Aided Design (CAD) 24(4) (1992) 178], simulation [Visual Computer 16(8) (2000) 437], boundary tracking [ACM Computer Graphics (SIGGRAPH'94) (1994) 185], image segmentation [Computer Integrated Surgery, Technology and Clinical Applications (1996) 59; IEEE Transactions on Medical Imaging 14 (1995) 442; Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine (CVRMed-MRCAS'97) 1205 (1997) 13; Medical Image Computing and Computer-Assisted Intervention (MICCAI'99) 1679 (1999) 176; Medical Image Analysis 1(1) (1996) 19], etc. In this paper we propose a survey on deformable surfaces. Many surface representations have been proposed to meet different 3D reconstruction problem requirements. We classify the main representations proposed in the literature and we study the influence of the representation on the model evolution behavior, revealing some similarities between different approaches. (C) 2001 Elsevier Science B.V. All rights reserved.

  681.   Chen, CM, Lu, HHS, and Hsiao, AT, "A dual-snake model of high penetrability for ultrasound image boundary extraction," ULTRASOUND IN MEDICINE AND BIOLOGY, vol. 27, pp. 1651-1665, 2001.

Abstract:   Most deformable models require the initial contour to be placed close to the boundary of the object of interest for boundary extraction from ultrasound (US) images, which is impractical in many clinical applications. To allow a distant initial contour, a new dual-snake model promising high penetrability through noise interference is proposed in this paper. The proposed dual-snake model features a new far-reaching external force, called the discrete gradient flow, a connected-component-weighted image force, and an effective stability evaluation of the two underlying snakes. The experimental results show that, with a distant initial contour, the mean distance from the derived boundary to the desired boundary is less than 1.4 pixels, and most snake elements are within 2.7 pixels of the desired boundaries for the synthetic images with CNR greater than or equal to 1. For the clinical US images, the mean distance is less than 1.9 pixels, and most snake elements are within 3 pixels of the desired boundaries. (E-mail: chung@lotus.mc.ntu.edu.tw) (C) 2002 World Federation for Ultrasound in Medicine and Biology.

  682.   Kamijo, S, Ikeuchi, K, and Sakauchi, M, "Segmentations of spatio-temporal images by spatio-temporal Markov random field model," ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 2134, pp. 298-313, 2001.

Abstract:   There has been much successful research on image segmentation employing Markov random field (MRF) models. However, most of it has concerned two-dimensional, or spatial, MRFs, and very little has addressed three-dimensional MRF models. Generally, 'three-dimensional' has two meanings: spatially three-dimensional and spatio-temporal. In this paper, we are especially interested in segmentation of spatio-temporal images, which is equivalent to the problem of tracking moving objects such as vehicles. For that purpose, by extending the usual two-dimensional MRF, we define a dedicated three-dimensional MRF, which we call the Spatio-Temporal MRF model (S-T MRF). This S-T MRF models the tracking problem by determining labels of groups of pixels by referring to their texture and labeling correlations along the temporal axis as well as the x-y image axes. Although vehicles severely occlude each other in general traffic images, segmentation boundaries of vehicle regions are determined precisely by this S-T MRF, which optimizes such boundaries through the spatio-temporal images. Consequently, the algorithm achieved a 95% tracking success rate in middle-angle images at an intersection and 91% in low-angle, front-view images at a highway junction.

  683.   Golosio, B, Brunetti, A, and Amendolia, SR, "A novel morphological approach to volume extraction in 3D tomography," COMPUTER PHYSICS COMMUNICATIONS, vol. 141, pp. 217-224, 2001.

Abstract:   Extracting a region of interest from volumetric data represents an important task in the field of digital image analysis. Several approaches to this problem have been proposed in the literature. The present paper addresses volume extraction for regions of interest whose characteristics are not known a priori. This is the case, for instance, of cancerous tissues in medical tomography or defects in industrial tomography. The technique described here allows extraction of completely arbitrary shapes with a minimum of interaction with the user. The volume of interest is defined through the semi-automatic selection of a small set of rail contours at different planes. Such contours are then blended through a morphing technique in order to interpolate the cutting surface. The overall technique proves to be intuitive, efficient and robust. Some results are reported where the method has been applied to micro-tomographic measurements. (C) 2001 Elsevier Science B.V. All rights reserved.

  684.   Fortier, MFA, Ziou, D, Armenakis, C, and Wang, S, "Automated correction and updating of road databases from high-resolution imagery," CANADIAN JOURNAL OF REMOTE SENSING, vol. 27, pp. 78-91, 2001.

Abstract:   Our work addresses the correction and update of road map data from georeferenced aerial images. This task requires the solution of two underlying problems: the weak positional accuracy of the existing road locations, and the detection of new roads. To correct the position of the existing road network from the imagery, we use an active contour ("snakes") optimization approach with a line enhancement function. The initialization of the snakes is based on a priori knowledge derived from the existing vector road data coming from the National Topographic Database of Geomatics Canada, and from line junctions computed from the image by a new detector developed for this application. To generate hypotheses for new roads, a road following algorithm is applied in the image, starting from the line intersections already present in the existing road network. Experimental results on a georeferenced image of the Edmonton area, provided by Geomatics Canada, are presented to validate the approach and to demonstrate the value of using line junctions in this kind of application.

  685.   Baxter, WW, and McCulloch, AD, "In vivo finite element model-based image analysis of pacemaker lead mechanics," MEDICAL IMAGE ANALYSIS, vol. 5, pp. 255-270, 2001.

Abstract:   Background: Fractures of implanted pacemaker leads are currently identified by inspecting radiographic images without making full use of a priori known material and structural information. Moreover, lead designers are unable to incorporate clinical image data into analyses of lead mechanics. Methods: A novel finite element/active contour method was developed to quantify the in vivo mechanics of implanted leads by estimating the distributions of stress, strain, and traction using biplane videoradiographic images. The nonlinear equilibrium equations governing a thin elastic beam undergoing 3-D large rotation were solved using one-dimensional isoparametric finite elements. External forces based on local image greyscale values were computed from each pair of images using a perspective transformation governing the relationship between the image planes. Results: Cantilever beam forward solution results were within 0.2% of the analytic solution for a wide range of applied loads. The finite element/active contour model was able to reproduce the principal curvatures of a synthetic helix within 3% of the analytic solution and estimates of the helix's geometric torsion were within 20% of the analytic solution. Applying the method to biplane videoradiographic images of a lead acutely implanted in an anesthetized dog resulted in expected variations in curvature and bending stress between compliant and rigid segments of the lead. Conclusions: By incorporating knowledge about lead geometric and material properties, the 3-D finite element/active contour method regularizes the image reconstruction problem and allows for more quantitative and automatic assessment of implanted lead mechanics. (C) 2001 Elsevier Science B.V. All rights reserved.

  686.   Deschamps, T, and Cohen, LD, "Fast extraction of minimal paths in 3D images and applications to virtual endoscopy," MEDICAL IMAGE ANALYSIS, vol. 5, pp. 281-299, 2001.

Abstract:   The aim of this article is to build trajectories for virtual endoscopy inside 3D medical images in as automatic a way as possible. Usually the construction of this trajectory is left to the clinician, who must define some points on the path manually using three orthogonal views. But for a complex structure such as the colon, those views give little information on the shape of the object of interest. The path construction in 3D images becomes a very tedious task, and precise a priori knowledge of the structure is needed to determine a suitable trajectory. We propose a more automatic path tracking method to overcome those drawbacks: we are able to build a path, given only one or two end points and the 3D image as inputs. This work is based on previous work by Cohen and Kimmel [Int. J. Comp. Vis. 24 (1) (1997) 57] for extracting paths in 2D images using the Fast Marching algorithm. Our original contribution is twofold. On the one hand, we present a general technical contribution which extends minimal paths to 3D images and gives new improvements of the approach that are relevant in 2D as well as in 3D to extract linear structures in images. It includes techniques to make the path extraction scheme faster and easier by reducing the user interaction. We also develop a new method to extract a centered path in tubular structures. Synthetic and real medical images are used to illustrate each contribution. On the other hand, we show that our method can be efficiently applied to the problem of finding a centered path in tubular anatomical structures with minimum interactivity, and that this path can be used for virtual endoscopy. Results are shown in various anatomical regions (colon, brain vessels, arteries) with different 3D imaging protocols (CT, MR). (C) 2001 Elsevier Science B.V. All rights reserved.

  687.   Yu, SX, Lee, TS, and Kanade, T, "A hierarchical Markov random field model for figure-ground segregation," ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, LECTURE NOTES IN COMPUTER SCIENCE, vol. 2134, pp. 118-133, 2001.

Abstract:   To segregate overlapping objects into depth layers requires the integration of local occlusion cues distributed over the entire image into a global percept. We propose to model this process using hierarchical Markov random field (HMRF), and suggest a broader view that clique potentials in MRF models can be used to encode any local decision rules. A topology-dependent multiscale hierarchy is used to introduce long range interaction. The operations within each level are identical across the hierarchy. The clique parameters that encode the relative importance of these decision rules are estimated using an optimization technique called learning from rehearsals based on 2-object training samples. We find that this model generalizes successfully to 5-object test images, and that depth segregation can be completed within two traversals across the hierarchy. This computational framework therefore provides an interesting platform for us to investigate the interaction of local decision rules and global representations, as well as to reason about the rationales underlying some recent psychological and neurophysiological findings related to figure-ground segregation.

  688.   Ray, N, Chanda, B, and Das, J, "A fast and flexible multiresolution snake with a definite termination criterion," PATTERN RECOGNITION, vol. 34, pp. 1483-1490, 2001.

Abstract:   This paper describes a fast process of parametric snake evolution with a multiresolution strategy. The conventional parametric evolution method relies on intermittent matrix inversion throughout the iterations; in contrast, the proposed method avoids the matrix inversion, which is costly and time consuming when the resulting snake is flexible. The proposed method also eliminates the input of snake rigidity parameters when the snake is flexible. A robust and definite termination criterion for both the conventional and proposed methods is also demonstrated in this paper. (C) 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
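As a point of reference, the following minimal sketch shows an explicit (matrix-free) update for a closed parametric snake: internal forces are computed with periodic finite differences and applied with a small time step, avoiding the banded-matrix inversion of the classical semi-implicit scheme, and iteration stops when the maximum vertex displacement falls below a threshold. Parameter names, default values, and the external-force callable are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed names): explicit, matrix-free parametric snake evolution.
    import numpy as np

    def evolve_snake(pts, ext_force, alpha=0.1, beta=0.05, step=0.5,
                     iters=500, tol=1e-3):
        """pts: (N, 2) vertices of a closed contour.
        ext_force: callable mapping an (N, 2) array of positions to an (N, 2) force,
        e.g. an image-gradient or GVF field sampled at those positions."""
        for _ in range(iters):
            # elasticity (2nd derivative) and rigidity (4th derivative), periodic contour
            d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
            d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)
            move = step * (alpha * d2 - beta * d4 + ext_force(pts))
            pts = pts + move
            if np.abs(move).max() < tol:  # definite termination criterion
                break
        return pts

The explicit update trades the unconditional stability of the semi-implicit scheme for simplicity, so the time step must stay small relative to alpha and beta.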

  689.   Gomes, J, and Faugeras, O, "Using the vector distance functions to evolve manifolds of arbitrary codimension," SCALE-SPACE AND MORPHOLOGY IN COMPUTER VISION, PROCEEDINGS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 2106, pp. 1-13, 2001.

Abstract:   We present a novel method for representing and evolving objects of arbitrary dimension. The method, called the Vector Distance Function (VDF) method, uses the vector that connects any point in space to its closest point on the object. It can deal with smooth manifolds with and without boundaries and with shapes of different dimensions. It can be used to evolve such objects according to a variety of motions, including mean curvature. If discontinuous velocity fields are allowed, the dimension of the objects can change. The evolution method that we propose guarantees that we stay in the class of VDFs and therefore that the intrinsic properties of the underlying shapes, such as their dimension and curvatures, can be read off easily from the VDF and its spatial derivatives at each time instant. The main disadvantage of the method is its redundancy: the size of the representation is always that of the ambient space even though the object we are representing may be of a much lower dimension. This disadvantage is also one of its strengths since it buys us flexibility.

  690.   Liu, ZC, Zhang, ZY, Jacobs, C, and Cohen, M, "Rapid modeling of animated faces from video," JOURNAL OF VISUALIZATION AND COMPUTER ANIMATION, vol. 12, pp. 227-240, 2001.

Abstract:   Generating realistic 3D human face models and facial animations has been a persistent challenge in computer graphics. We have developed a system that constructs textured 3D face models from videos with minimal user interaction. Our system takes images and video sequences of a face with an ordinary video camera. After five manual clicks on two images to tell the system where the eye corners, nose top and mouth corners are, the system automatically generates a realistic looking 3D human head model and the constructed model can be animated immediately. A user with a PC and an ordinary camera can use our system to generate his/her face model in a few minutes. Copyright (C) 2001 John Wiley & Sons, Ltd.

  691.   Elmoataz, A, Schupp, S, and Bloyet, D, "Fast and simple discrete approach for active contours for biomedical applications," INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 15, pp. 1201-1212, 2001.

Abstract:   In this paper, we present a fast and simple discrete approach for active contours. It is based on discrete contour evolution, which operates on the boundary of a digital shape through iterative growth processes. We consider a curve to be the boundary of a discrete shape; we attach a cost function to each point of the boundary and deform the shape according to that cost function. The method presents some advantages: it is a discrete method that uses an implicit representation and a discrete algorithm with a simple data structure.

  692.   Tang, M, and Ma, SD, "General scheme of region competition based on scale space," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 23, pp. 1366-1378, 2001.

Abstract:   In this paper, we propose a general scheme of region competition (GSRC) for image segmentation based on scale space. First, we present a novel classification algorithm to cluster the image feature data according to the generally defined peaks under a certain scale and a scale space-based classification scheme to classify the pixels by grouping the resultant feature data clusters into several classes with a standard classification algorithm. Second, to reduce the resultant segmentation error, we develop a nonparametric probability model from which the functional for GSRC is derived. Third, we design a general and formal approach to automatically determine the initial regions. Finally, we propose the kernel procedure of GSRC which segments an image by minimizing the functional. The strategy adopted by GSRC is first to label pixels whose corresponding regions can be determined with high likelihood, and then to fine-tune the final regions with the help of the nonparametric probability model, boundary smoothing, and region competition. GSRC quantitatively controls the segmentation extent with the scale space-based classification scheme. Although the description of the scheme is nonparametric in this paper, GSRC can also work parametrically if all nonparametric procedures in this paper are substituted with the parametric counterparts.

  693.   Hueber, E, Bigue, L, Refregier, P, and Ambs, P, "Optical snake-based segmentation processor with a shadow-casting incoherent correlator," OPTICS LETTERS, vol. 26, pp. 1852-1854, 2001.

Abstract:   What is believed to be the first incoherent snake-based optoelectronic processor that is able to segment an object in a real image is described. The process, based on active contours (snakes), consists of correlating adaptive binary references with the scene image. The proposed optical implementation of algorithms that are already operational numerically opens attractive possibilities for faster processing. Furthermore, this experiment has yielded a new, versatile application for optical processors. (C) 2001 Optical Society of America.

  694.   Wang, ZQ, and Ben-Arie, J, "Detection and segmentation of generic shapes based on affine modeling of energy in eigenspace," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 1621-1629, 2001.

Abstract:   This paper presents a novel approach for detection and segmentation of man made generic shapes in cluttered images. The set of shapes to be detected are members of affine transformed versions of basic geometric shapes such as rectangles, circles, etc. The shape set is represented by its vectorial edge map transformed over a wide range of affine parameters. We use vectorial boundary instead of regular boundary to improve robustness to noise, background clutter and partial occlusion. Our approach consists of a detection stage and a verification stage. In the detection stage, we first derive the energy from the principal eigenvectors of the set. Next, an a posteriori probability map of energy distribution is computed from the projection of the edge map representation in a vectorial eigen-space. Local peaks of the posterior probability map are located and indicate candidate detections. We use energy/probability based detection since we find that the underlying distribution is not Gaussian and resembles a hypertoroid. In the verification stage, each candidate is verified using a fast search algorithm based on a novel representation in angle space and the corresponding pose information of the detected shape is obtained. The angular representation used in the verification stage yields better results than a Euclidean distance representation. Experiments are performed under various interfering distortions, and robust detection and segmentation are achieved.

  695.   Tsap, LV, Goldgof, DB, and Sarkar, S, "Fusion of physically-based registration and deformation modeling for nonrigid motion analysis," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 10, pp. 1659-1669, 2001.

Abstract:   In our previous work, we used finite element models to determine nonrigid motion parameters and recover unknown local properties of objects given correspondence data recovered with snakes or other tracking models. In this paper, we present a novel multiscale approach to recovery of nonrigid motion from sequences of registered intensity and range images. The main idea of our approach is that a finite element (FEM) model incorporating material properties of the object can naturally handle both registration and deformation modeling using a single model-driving strategy. The method includes a multiscale iterative algorithm based on analysis of the undirected Hausdorff distance to recover correspondences. The method is evaluated with respect to speed and accuracy. Noise sensitivity issues are addressed. Advantages of the proposed approach are demonstrated using man-made elastic materials and human skin motion. Experiments with regular grid features are used for performance comparison with a conventional approach (separate snakes and FEM models). It is shown, however, that the new method does not require a sampling/correspondence template and can adapt the model to available object features. Usefulness of the method is presented not only in the context of tracking and motion analysis, but also for a burn scar detection application.

  696.   Lamard, M, and Cochener, B, "Modeling the eye with a view to refractive surgery simulation," JOURNAL FRANCAIS D OPHTALMOLOGIE, vol. 24, pp. 813-822, 2001.

Abstract:   Purpose: To achieve three-dimensional modeling of the eyeball (morphology and mechanical behavior) in order to simulate the impact of various refractive surgery techniques and to study the normal and pathological states of the eye. Method: Rebuilding the ocular shell is based on different kinds of imaging (MRI, ultrasound), including information provided by video topography. Image data are processed using suitable digital filters that allow automatic segmentation of the ocular globe edges. Reconstruction is based on specific mathematical functions (B-splines). The mechanical behavior of the reconstructed model is simulated by solving the equations of linearized elasticity with the finite element method. Results: Numerous simulations mimicked different refractive surgical techniques and thereby validated the model. In addition, simulations of various pathologies allowed us to verify certain clinical hypotheses. Conclusion: This work, although still experimental, demonstrates the advantages of such simulations; it will allow novice physicians an easier approach to different surgical techniques and will help them understand their effect. Furthermore, it might be useful for simulation of new surgical concepts even before their in vivo evaluation.

  697.   Liu, RJ, and Yuan, BZ, "Automatic eye feature extraction in human face images," COMPUTING AND INFORMATICS, vol. 20, pp. 289-301, 2001.

Abstract:   This paper presents a fuzzy-based method to extract the eye features in a head-shoulder image with plain background. This method is comprised of two stages, namely the face region estimation and the eye features extraction. In the first stage, a region growing method is adopted to estimate the face region. In the second stage, the coarse eye area is firstly determined based on the location of the nasion, then the deformable template algorithm is completed in two steps to extract the features of the eyes. Experimental results show the efficiency and robustness of this method.

  698.   Mishra, A, Dutta, PK, and Ghosh, MK, "Non-rigid cardiac motion quantification from 2D image sequences based on wavelet synthesis," IMAGE AND VISION COMPUTING, vol. 19, pp. 929-939, 2001.

Abstract:   Motion quantification from 2D sequential cardiac images has been performed on axial images of the left ventricle (LV) obtained from two different imaging modalities (MRI and echocardiography). The detailed pointwise motion vectors were evaluated by establishing shape correspondence between consecutive contours after reconstructing curvature information with wavelet synthesis filters at multiple levels. We present a simple approach that optimizes the shape correspondence taking the non-uniform contour variation into account. The shape matching is done by maximizing the correlation between the approximation coefficient vectors at certain levels. The algorithm has been tested over sets of 2D images and the results are compared with those obtained from a bending energy model. Some experimental results have also been presented for validation of the algorithm. (C) 2001 Elsevier Science B.V. All rights reserved.
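The correspondence step described above amounts to aligning two periodic coefficient sequences by maximizing their correlation. A minimal sketch of that idea, using FFT-based circular cross-correlation (the function name is an assumption, and the wavelet decomposition itself is omitted):

    # Minimal sketch (assumed name): circular shift of b that best correlates with a.
    import numpy as np

    def best_circular_shift(a, b):
        """a, b: 1-D arrays of equal length (e.g. wavelet approximation coefficients
        of two consecutive contours). Returns the shift of b maximizing correlation."""
        corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
        return int(np.argmax(corr))

Once the best shift is known, pointwise motion vectors follow from subtracting corresponding contour points.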

  699.   Schoepflin, T, Chalana, V, Haynor, DR, and Kim, Y, "Video object tracking with a sequential hierarchy of template deformations," IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 11, pp. 1171-1182, 2001.

Abstract:   We have developed a new contour-based tracking algorithm that uses a sequence of template deformations to model and track generic video objects. We organize the deformations into a hierarchy: globally affine deformations, piecewise (locally) affine deformations, and arbitrary smooth deformations (snakes). This design enables the algorithm to track objects whose pose and shape change in time relative to the template. If the object is not a rigid body, we model the temporal evolution of its shape by updating the entire template after each video frame; otherwise, we only update the pose of the object. Experimental results demonstrate that our method is able to track a variety of video objects, including those undergoing rapid changes. We quantitatively compare our algorithm with its constituent pieces (e.g., the snake algorithm) and show that the complete algorithm can track objects with moving parts for a longer duration than partial versions of the hierarchy. It could benefit from a higher-level algorithm that dynamically adjusts the parameters and template deformations to further improve segmentation accuracy. The hierarchical nature of this algorithm provides a framework that offers a modular approach for the design and enhancement of future object-tracking algorithms.
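The first level of such a hierarchy, the globally affine deformation, can be estimated from point correspondences by linear least squares. A minimal sketch under that assumption (the names and the correspondence step are hypothetical, not the authors' implementation):

    # Minimal sketch (assumed names): fit x' = A x + b to template/target correspondences.
    import numpy as np

    def fit_affine(src, dst):
        """src, dst: (N, 2) arrays of corresponding template and target points.
        Returns (A, b) minimizing sum ||A @ src[i] + b - dst[i]||^2."""
        design = np.hstack([src, np.ones((src.shape[0], 1))])   # rows [x, y, 1]
        params, *_ = np.linalg.lstsq(design, dst, rcond=None)   # (3, 2) solution
        return params[:2].T, params[2]                          # 2x2 linear part, translation

    def apply_affine(pts, A, b):
        return pts @ A.T + b

The piecewise-affine and snake stages of the hierarchy then only need to account for the residual deformation left after this global alignment.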

  700.   Velasco, FA, and Marroquin, JL, "Robust parametric active contours: the Sandwich Snakes," MACHINE VISION AND APPLICATIONS, vol. 12, pp. 238-242, 2001.

Abstract:   Snakes are active contours that minimize an energy function. We present a new kind of active contour called "Sandwich Snakes". They are formed by two snakes, one inside and the other outside of the curve that one is looking for. They have the same number of particles, which are connected in one-to-one correspondence. At the minimum, the two snakes have the same position. We also present a multi-scale system, in which Sandwich Snakes are adjusted at increasing resolutions, and an interactive tool that permits one to easily specify the initial position of the Sandwich Snakes. Sandwich Snakes exhibit very good performance in detecting contours with complex shapes, where traditional methods fail. They are also very robust with respect to noise.

  701.   Angelini, ED, Laine, AF, Takuma, S, Holmes, JW, and Homma, S, "LV volume quantification via spatiotemporal analysis of real-time 3-D echocardiography," IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, pp. 457-469, 2001.

Abstract:   This paper presents a method of four-dimensional (4-D) (3-D + time) space-frequency analysis for directional denoising and enhancement of real-time three-dimensional (RT3D) ultrasound and quantitative measures in diagnostic cardiac ultrasound. Expansion of echocardiographic volumes is performed with complex exponential wavelet-like basis functions called brushlets. These functions offer good localization in time and frequency and decompose a signal into distinct patterns of oriented harmonics, which are invariant to intensity and contrast range. Deformable-model segmentation is carried out on denoised data after thresholding of transform coefficients. This process attenuates speckle noise while preserving cardiac structure location. The superiority of 4-D over 3-D analysis for decorrelating additive white noise and multiplicative speckle noise on a 4-D phantom volume expanding in time is demonstrated. Quantitative validation, computed for contours and volumes, is performed on in vitro balloon phantoms. Clinical applications of this spaciotempo