
Methods

Texture segmentation
     The goal is to map the textures obtained above from the Visible Human and the optical colonoscopy onto the patient CT data. To achieve this, we have to segment both the Visible Human dataset and the CT image. Sophisticated segmentation is beyond the scope of this work; we utilize simple approaches, yet point out the problems and references to their solutions.
     For multichannel segmentation, we apply 3D region growing with different range thresholds on the red, green and blue channels of the Visible Human physical-slice dataset to obtain the different textures (bone, fat, muscle, etc.). We obtain the texture of the inner colon wall from a sample image from optical colonoscopy.
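     For illustration, a minimal sketch of such a region-growing pass is shown below, assuming a NumPy RGB volume; the seed point and the per-channel ranges lo/hi are hypothetical placeholders, not the thresholds actually used in this work.

    from collections import deque
    import numpy as np

    def region_grow_rgb(volume, seed, lo, hi):
        """Grow a 3D region from `seed` in an RGB volume of shape (Z, Y, X, 3).

        A voxel joins the region when every channel lies inside the
        per-channel range [lo, hi]; lo and hi are illustrative 3-vectors,
        not the thresholds used in this work."""
        mask = np.zeros(volume.shape[:3], dtype=bool)
        in_range = np.all((volume >= lo) & (volume <= hi), axis=-1)
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            if mask[z, y, x] or not in_range[z, y, x]:
                continue
            mask[z, y, x] = True
            # visit the 6-connected neighbours
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                        and 0 <= nx < mask.shape[2]):
                    queue.append((nz, ny, nx))
        return mask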
     Although the CT dataset is monochannel, segmenting it to differentiate colon wall from muscle is not trivial because their gray tones are similar; we refer to Chen et al. [7] for a solution to this problem. In this implementation, we use our experience to choose thresholds that segment out tissues such as air, fat, muscle, bone and colon wall. Since the boundary area between two or more materials is usually fuzzy in the CT image, standard binary segmentation introduces artifacts (jagged edges) through its winner-take-all policy; a more sophisticated approach, probabilistic segmentation, is proposed by Chiou et al. [8] to solve this problem.
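     For the monochannel CT case, a minimal threshold-based labelling is sketched below. The Hounsfield-unit ranges are illustrative assumptions only (the actual thresholds were chosen empirically, as noted above), and the colon wall is omitted precisely because its range overlaps that of muscle.

    import numpy as np

    # Illustrative Hounsfield-unit ranges; not the empirically chosen thresholds.
    TISSUE_RANGES = {
        "air":    (-1024, -400),
        "fat":    (-200,   -50),
        "muscle": (10,     100),
        "bone":   (300,   3000),
    }

    def label_ct(ct_volume):
        """Hard (winner-take-all) labelling of a CT volume by thresholds.

        The jagged boundaries this produces are exactly the artifact that
        motivates the probabilistic segmentation of Chiou et al. [8]."""
        labels = np.zeros(ct_volume.shape, dtype=np.uint8)
        for idx, (lo, hi) in enumerate(TISSUE_RANGES.values(), start=1):
            labels[(ct_volume >= lo) & (ct_volume <= hi)] = idx
        return labels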

Texture Analysis
     After segmentation, we need to extract color texture features from the segmented Visible Human dataset. We apply a Gaussian filter and subsampling to decompose the dataset into a multiresolution pyramid. A Laplacian filter and steerable filters [9] are also applied to obtain non-oriented and oriented features for each RGB channel. This multi-filter scheme captures the texture features effectively, but it is costly in both storage and computation. An alternative is the wavelet transform.
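     A minimal sketch of this pyramid decomposition, assuming SciPy and applied independently to each RGB channel (the steerable filtering is omitted for brevity):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_laplacian_pyramid(channel, levels=4, sigma=1.0):
        """Gaussian pyramid by blurring and 2x subsampling; the Laplacian
        (non-oriented) feature at each level is the difference between the
        level and its blurred version."""
        gaussians = [np.asarray(channel, dtype=np.float32)]
        laplacians = []
        for _ in range(levels - 1):
            blurred = gaussian_filter(gaussians[-1], sigma)
            laplacians.append(gaussians[-1] - blurred)   # band-pass feature
            gaussians.append(blurred[::2, ::2])          # subsample by 2
        return gaussians, laplacians

    # e.g. per-channel pyramids for one physical slice:
    # pyramids = [gaussian_laplacian_pyramid(rgb[..., c]) for c in range(3)]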
     Although the standard wavelet transform is computationally efficient and yields an in-place representation (no extra memory), it only captures features with orientations parallel to the axes. We are interested in extracting features of the textures selected by segmentation; the standard wavelet transform is not helpful here because it cannot be applied directly to a region of interest. One solution is to apply non-separable filters. Kovacevic et al. have been working on non-separable filter design for the wavelet transform. Recently, a lifting scheme to build filter banks for wavelet transforms in any dimension has been proposed [10]. The approach involves two lifting steps: prediction and updating. The prediction/updating steps define a filter bank, which is created by the de Boor-Ron algorithm for multidimensional polynomial interpolation [11]. With small modifications, this filter bank can detect texture features on the regions of a segmented Visible Human dataset.
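     To make the two lifting steps concrete, the sketch below shows a single predict/update pass of a simple one-dimensional lifting transform (linear prediction with periodic boundaries). It is only an analogue: the filter banks discussed above replace this linear prediction with the multidimensional de Boor-Ron interpolation of [11].

    import numpy as np

    def lifting_forward_1d(signal):
        """One level of a 1-D lifting wavelet transform.

        Predict: estimate each odd sample from its even neighbours and keep
        the residual as a detail coefficient.
        Update: lift the even samples with the details so that coarse
        averages are preserved."""
        s = np.asarray(signal, dtype=np.float64)
        even, odd = s[0::2].copy(), s[1::2].copy()
        # Predict step: odd samples predicted by the mean of their even neighbours.
        predicted = 0.5 * (even + np.roll(even, -1))[: len(odd)]
        detail = odd - predicted
        # Update step: adjust the even (coarse) samples with the details.
        coarse = even
        coarse[: len(detail)] += 0.25 * (detail + np.roll(detail, 1))
        return coarse, detail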

Texture Modeling
     After obtaining the texture features, we create models to describe them. We utilize a non-parametric multi-scale statistical model as opposed to the traditional approach based on Markov random fields [3]. The rationale for this model is that it effectively estimates and manipulates the entropy of the non-Gaussian distributions found in natural textures. We adopt Parzen window density estimation for the following advantages: (1) the Parzen estimate is computed directly from the sample, so there is no search for parameters; (2) the derivative of the entropy of the Parzen estimate is simple to compute. In addition, this model can estimate cross-scale distributions, which allows us to detect parent-child relationships through the co-occurrence of wavelet coefficients at many scales. For these reasons, well-defined structure in natural textures can be modeled and reproduced in texture synthesis. We also take advantage of the flexible histogram introduced by De Bonet et al. [12] to improve computational performance.
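     A minimal sketch of the Parzen estimate and the corresponding entropy estimate, assuming a Gaussian kernel and scalar feature samples (the multi-scale model applies the same idea to vectors of pyramid/wavelet features):

    import numpy as np

    def parzen_density(x, samples, sigma=1.0):
        """Parzen window estimate of p(x) with a Gaussian kernel.
        Computed directly from the sample set: no parameter search."""
        samples = np.asarray(samples, dtype=np.float64)
        diffs = (np.atleast_1d(x) - samples[:, None]) / sigma
        kernels = np.exp(-0.5 * diffs ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        return kernels.mean(axis=0)

    def entropy_estimate(samples, sigma=1.0):
        """Entropy estimate H ~ -mean(log p) over the samples, with each
        sample left out of its own density estimate."""
        samples = np.asarray(samples, dtype=np.float64)
        n = len(samples)
        diffs = (samples[None, :] - samples[:, None]) / sigma
        kernels = np.exp(-0.5 * diffs ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        np.fill_diagonal(kernels, 0.0)              # leave-one-out
        p = kernels.sum(axis=0) / (n - 1)
        return float(-np.mean(np.log(p + 1e-12)))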

Texture Matching
     Texture matching in pure-material regions is straightforward once the CT and Visible Human datasets are segmented into classes: the corresponding classes are matched directly. As discussed earlier, texture is not well defined in material-mixture regions, since regions close to a boundary in the CT dataset are fuzzy, and matching fuzzy textures is not trivial. However, the probabilistic segmentation provides a percentage of each material or tissue per voxel. We utilize this percentage to weight the importance of the match, relaxing the matching in accordance with that weight. In this way, texture matching at fuzzy boundaries is handled gracefully.
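     As a hypothetical sketch, the per-voxel matching cost in a mixture region can be a percentage-weighted combination of the per-material costs:

    import numpy as np

    def weighted_match_cost(class_costs, material_fractions):
        """Combine per-material texture matching costs for one voxel.

        class_costs[k] is the matching cost against the texture model of
        material k, and material_fractions[k] is that material's percentage
        from the probabilistic segmentation.  Pure voxels reduce to the cost
        of their single material; mixture voxels relax the match across the
        materials they contain."""
        fractions = np.asarray(material_fractions, dtype=np.float64)
        fractions = fractions / fractions.sum()
        return float(np.dot(fractions, np.asarray(class_costs, dtype=np.float64)))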
     Texture matching for the non-parametric multi-scale statistical model is more complicated. We can apply the cross entropy, or Kullback-Leibler divergence [13], from information theory to measure the difference between two distributions.
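     A minimal sketch of this divergence for two normalized feature histograms (the histogram representation is an assumption of the sketch, not a prescription of the model):

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """Kullback-Leibler divergence D(p || q) = sum_i p_i log(p_i / q_i)
        between two discrete distributions, e.g. feature histograms of the
        textures being matched."""
        p = np.asarray(p, dtype=np.float64) + eps
        q = np.asarray(q, dtype=np.float64) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))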

Texture Synthesis
     After matching, we fuse the textures from the Visible Human and optical colonoscopy datasets into the patient CT dataset. We sample texture directly from the Visible Human dataset into the segmented CT dataset if the texture is an isotropic pattern, such as bone. However, a multiresolution sampling procedure is required for anisotropic textures such as colon mucosa. De Bonet [6] showed the power of this procedure by keeping track of the relationships between parents and children in the multiresolution model; selective resampling of homogeneous and heterogeneous regions is the key to the method. Nevertheless, De Bonet's method only reproduces a larger (rectangular) texture from a sample. We are developing a new method in which texture is resampled from different source datasets. This approach not only reproduces the original texture, but can also perform resampling in the target region. We also consider fuzzy texture in the boundary regions, where textures are resampled by a composition step that fuses different pure textures into a mixture texture.
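     The sketch below gives a greatly simplified, hypothetical picture of such parent-child-constrained resampling for a single target location: at each resolution level, only source candidates whose parent features resemble those already synthesized at the coarser level are allowed into the random draw.

    import numpy as np

    def multiresolution_resample(src_parent_features, tgt_parent_features, seed=0):
        """Coarse-to-fine selective resampling (simplified illustration).

        src_parent_features[level] is an (N, D) array holding, for each of N
        source candidates, the feature vector of its parent at that level;
        tgt_parent_features[level] is the (D,) feature vector already chosen
        for the target's parent.  Restricting the draw to candidates with
        similar parents is what preserves the parent-child structure."""
        rng = np.random.default_rng(seed)
        chosen = []
        for candidates, target_parent in zip(src_parent_features, tgt_parent_features):
            dists = np.linalg.norm(candidates - target_parent, axis=-1)
            keep = dists <= np.percentile(dists, 10)     # selective resampling
            chosen.append(int(rng.choice(np.flatnonzero(keep))))
        return chosen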

