
Ray & data space segmentation

     In this system, visualization and segmentation are not distinct activities occurring in isolation; rather, they occur together under the direct control of the user. Segmentation brings structures into view, and all structures visualized are effectively segmented. Segmentation can be performed in two distinct spaces: the ray space and the data space. The ray space consists of the collection of rays joining the viewer's eye position to the image pixels displayed on the table surface. These ray segments pass through the data volume as determined by the orientation of the data relative to the table surface. The data space is the coordinate system of the data volume itself.
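
     As a concrete illustration, the sketch below constructs one ray of the ray space in data-space coordinates. It is a minimal sketch rather than the system's actual code; the names eye, pixel_on_table, and table_to_data (a 4x4 matrix posing the volume relative to the table) are all assumptions introduced here.

    import numpy as np

    def pixel_ray(eye, pixel_on_table, table_to_data):
        """Return (origin, unit direction) in data space for the ray-space
        segment joining the viewer's eye and one table-surface pixel."""
        h = lambda p: np.append(p, 1.0)              # to homogeneous coordinates
        o = (table_to_data @ h(eye))[:3]             # eye position in data space
        q = (table_to_data @ h(pixel_on_table))[:3]  # pixel position in data space
        d = q - o
        return o, d / np.linalg.norm(d)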

     Within the ray space, segmentation and visualization occur as the user moves the probe through the data volume. As the user initiates a visualization action, the system determines the appropriate opacity mapping for data voxels by sampling the data set in the vicinity of the probe. The system tracks three categories of data: completely opaque, semi-transparent, and transparent. An incremental cluster analysis is used to assign voxel values to these categories. A spherical region around the probe's position defines the active region of visualization. The ray segments formed by intersecting these regions with the ray space are cast through the data set and imaged according to the category information. In the course of the work, several samples, representing different tissue types, may be built up and selectively used in different roles. In this space a structure is determined by the ray segments swept out by the probe and by the mapping of voxel values to opacity values. Figures 1 and 2 show examples of structures segmented in ray space. Animation 1 shows the segmentation of the mandible.
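
     The sketch below shows, under stated assumptions, what one such visualization step might look like: a tissue category is seeded from voxels around the probe, and each voxel's value is then mapped to one of the three opacity categories. The paper does not detail its incremental cluster analysis; the mean/standard-deviation category and the thresholds k and 2k used here are illustrative stand-ins, as are the names volume, probe, and radius.

    import numpy as np

    OPACITY = {"opaque": 1.0, "semi": 0.4, "transparent": 0.0}

    def sample_category(volume, probe, radius=3):
        """Seed a tissue category from the voxels around the probe."""
        z, y, x = np.round(probe).astype(int)
        block = volume[z - radius:z + radius + 1,
                       y - radius:y + radius + 1,
                       x - radius:x + radius + 1]
        return block.mean(), block.std()

    def voxel_opacity(value, category, k=2.0):
        """Map a voxel value to an opacity by its distance to the category."""
        mean, std = category
        d = abs(value - mean) / (std + 1e-6)
        if d < k:
            return OPACITY["opaque"]        # close to the sampled tissue
        elif d < 2 * k:
            return OPACITY["semi"]
        return OPACITY["transparent"]       # dissimilar voxels vanish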

     Alternatively, instead of using the probe position to include data in the scene, the probe can be used to exclude previously imaged data from the projected scene. The resulting interface gives the user the illusion of selectively "sculpting" the data out of the volume, concentrating their attention on the regions of greatest interest and guiding the work by their interpretation of what they see. While the ray space method is relatively coarse in its ability to segment structures, it is nonetheless an effective tool for exploring data sets.
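
     Continuing the same illustrative assumptions, inclusion and exclusion can be read as two settings of a single operation: the spherical region around the probe is either added to, or carved out of, the set of voxels the ray caster is allowed to image. The mask-based formulation below is a sketch, not the paper's implementation.

    import numpy as np

    def sculpt(visible_mask, probe, radius, include=True):
        """Add (include=True) or carve away (include=False) the spherical
        region around the probe from the set of voxels that may be imaged."""
        zz, yy, xx = np.indices(visible_mask.shape)
        z, y, x = probe
        sphere = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
        visible_mask[sphere] = include
        return visible_mask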

     Segmentation is also possible in the coordinate space of the data set itself, using a 3D implementation of the "digital staining" algorithm [2,3]. In this space a structure is directly represented as a collection of voxels. The user "injects" into the data set a "stain" which flows out from the probe position. This stain is used to designate regions and to modify the mapping of data voxel values to color and opacity. As the staining action is initiated, the system builds a category description of the material to stain by sampling voxels in the neighborhood of the probe. Alternatively, an existing category description can be used to initialize the process. During the action the stain preferentially flows into voxels similar to the category description, as well as preferentially flowing towards the user's current position. The stain also incorporates a variable viscosity. Figures 3 and 4 show examples of structures segmented using the staining technique. Animation 2 shows the segmentation of the brain.
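
     A hedged sketch of such a stain flood appears below. It loosely follows the digital-staining idea in [2,3] but simplifies it: similarity to the category is a plain threshold, the bias towards the user's position is modeled by a priority queue that expands nearer voxels first, and viscosity is reduced to a budget on how many voxels the stain may reach. The names volume, probe, user_pos, and viscosity are illustrative.

    import heapq
    import numpy as np

    def stain(volume, probe, user_pos, category, viscosity=5000, k=2.0):
        """Flood a boolean stain out from the probe into similar voxels,
        preferring voxels closer to the user's current position."""
        mean, std = category
        stained = np.zeros(volume.shape, dtype=bool)
        pq = [(0.0, tuple(np.round(probe).astype(int)))]
        while pq and viscosity > 0:
            _, v = heapq.heappop(pq)
            if stained[v]:
                continue
            if abs(volume[v] - mean) > k * (std + 1e-6):
                continue                  # too dissimilar: the stain does not flow
            stained[v] = True
            viscosity -= 1                # viscosity limits how far the stain spreads
            for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
                n = (v[0] + dz, v[1] + dy, v[2] + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not stained[n]:
                    # nearer-to-user voxels are expanded first
                    heapq.heappush(pq, (np.linalg.norm(np.subtract(n, user_pos)), n))
        return stained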

     In the Visible Human data sets a number of structures exhibit a high degree of variability in their color. Because of this, a single, simple category description cannot adequately cover the entire structure. To handle this situation, the category description can be evolved over time by retiring sample values as they become unused.
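
     One way such retirement might be organized is sketched below: each sample records when it was last used to match a voxel, and the stalest samples are retired once the pool is full. The least-recently-used policy, the pool size, and the matching tolerance are illustrative guesses, not the paper's exact rules.

    from collections import OrderedDict

    class EvolvingCategory:
        def __init__(self, max_samples=64):
            self.samples = OrderedDict()   # sample value -> recency of use
            self.max_samples = max_samples

        def add(self, value):
            self.samples[value] = True
            self.samples.move_to_end(value)
            if len(self.samples) > self.max_samples:
                self.samples.popitem(last=False)   # retire the stalest sample

        def matches(self, value, tol=5.0):
            hit = min(self.samples, key=lambda s: abs(s - value), default=None)
            if hit is not None and abs(hit - value) <= tol:
                self.samples.move_to_end(hit)      # mark the sample as used
                return True
            return False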

     Visualized data is thus segmented data. Each method can produce a fully segmented three-dimensional structure which can be saved for future use, including the construction of polygonal meshes (Figure 5).
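
     The paper does not name the meshing method behind Figure 5; marching cubes, sketched here via scikit-image's measure.marching_cubes, is one standard way to turn a saved voxel segmentation into a polygonal mesh.

    import numpy as np
    from skimage import measure

    def mask_to_mesh(stained_mask):
        """Extract a triangle mesh from a boolean segmentation volume."""
        verts, faces, normals, values = measure.marching_cubes(
            stained_mask.astype(np.float32), level=0.5)
        return verts, faces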

