
Learning and projecting trace

      Our goal is to develop a flexible system architecture and learning method so that a tracer can begin an initial piece of a tracing task, and the system can then learn from that beginning and take over the repetitive parts of the task.  A set of human-generated contours, or traces, is our ground truth; we use part of one trace as a training set and test the system against the remainder.  (The nouns "trace" and "contour" are used synonymously in this report.)
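      For concreteness, the split can be as simple as taking the leading segment of a single human-drawn contour as the training portion and holding out the rest for evaluation.  The Python sketch below is only an illustration (the report does not specify how the split is made) and assumes a trace stored as an ordered list of (row, column) pixel coordinates.

def split_trace(trace_points, train_fraction=0.2):
    """Split a human-drawn contour into a training segment and a
    held-out remainder used for evaluation.  trace_points is an ordered
    list of (row, column) pixel coordinates along the contour."""
    n_train = max(2, int(len(trace_points) * train_fraction))
    return trace_points[:n_train], trace_points[n_train:]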

     The following diagram (figure 8) illustrates the choices evaluated as an existing trace is extended one pixel at a time:

[Figure 8: choices to be considered]
     In the three scenarios above, the marked pixels indicate a trace in progress, running from left to right.  The yellow pixels indicate three possible choices for continuing forward in the direction already established.  The circles ahead of the choice pixels indicate places where samples of the image will be taken; these samples feed an evaluation function that determines which of the three options becomes the next point on the trace.  Once a new point is added to the trace, the cycle starts again as the next possible steps forward are identified and evaluated.
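     For concreteness, this cycle can be written as a short loop.  The Python sketch below is only an illustration: the trace is assumed to be an ordered list of (row, column) points, evaluate stands in for the learned scoring function described next, and the candidate generation is just one reading of figure 8.

import numpy as np

def candidate_steps(trace):
    """Return the three candidate pixels for extending the trace, based
    on the direction established by the last two trace points (cf. figure 8)."""
    (r0, c0), (r1, c1) = trace[-2], trace[-1]
    dr, dc = int(np.sign(r1 - r0)), int(np.sign(c1 - c0))
    if dr == 0:   # moving horizontally: the three forward neighbors
        return [(r1 - 1, c1 + dc), (r1, c1 + dc), (r1 + 1, c1 + dc)]
    if dc == 0:   # moving vertically
        return [(r1 + dr, c1 - 1), (r1 + dr, c1), (r1 + dr, c1 + 1)]
    # diagonal motion: the forward diagonal plus its two neighbors
    return [(r1 + dr, c1), (r1 + dr, c1 + dc), (r1, c1 + dc)]

def extend_trace(trace, image, evaluate):
    """Add one point to the trace: score each candidate with the learned
    evaluation function and keep the best."""
    candidates = candidate_steps(trace)
    scores = [evaluate(image, trace, cand) for cand in candidates]
    trace.append(candidates[int(np.argmax(scores))])
    return trace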

     The "samples" made at the circle points can be simple pixel values at those points, or more complicated functions of some local neighborhood about those points.
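     Either flavor of sample can be expressed as a small function of the image at a circle point.  The sketch below is our own illustration, assuming the slice is stored as a grayscale NumPy array; half_width is a hypothetical parameter selecting between a raw pixel value and a simple neighborhood statistic.

import numpy as np

def sample_point(image, point, half_width=0):
    """Sample the image at a circle point.  half_width == 0 returns the
    raw pixel value; a larger half_width returns a statistic (here, the
    mean) of the local neighborhood about that point."""
    r, c = point
    if half_width == 0:
        return float(image[r, c])
    patch = image[r - half_width:r + half_width + 1,
                  c - half_width:c + half_width + 1]
    return float(patch.mean())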

     We use a multi-layer neural network, trained with standard backpropagation, to learn the evaluation function.  The network's output can be loosely interpreted as the probability that a pixel is an edge pixel.  In previous work [3], we discussed many of the issues surrounding the architecture of the neural network and experiments with a variety of input representations centered at the sample points.
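     The network itself need not be elaborate.  As a minimal sketch (our illustration only, not the architecture reported in [3]), a two-layer sigmoid network trained by backpropagation on a squared-error loss might look like the following, with its output read as the probability that the evaluated pixel is an edge pixel.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class EdgeNet:
    """A small two-layer network whose sigmoid output can be read as the
    probability that the evaluated pixel lies on the contour."""

    def __init__(self, n_inputs, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)
        self.h = sigmoid(self.x @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def train_step(self, x, target):
        """One backpropagation update for a single input vector x."""
        y = self.forward(x)
        d_out = (y - target) * y * (1.0 - y)               # output-layer delta
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.b2 -= self.lr * d_out
        self.W1 -= self.lr * np.outer(self.x, d_hid)
        self.b1 -= self.lr * d_hid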

     Figure 9 shows the initial interaction between human and system in tracing the outer surface of the skull.  A human tracer has specified a representative piece of a contour.  Using this as an exemplar, a training set is constructed, and the neural network is run through its training regimen.
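     The report does not detail how the training set is assembled from the exemplar.  One plausible construction, sketched below, treats points on the exemplar contour as positive examples and points displaced a few pixels off the contour as negatives; sample_fn stands in for whatever sample-gathering function (such as sample_point applied over the look-ahead circles) produces the feature vector the network sees.

import numpy as np

def build_training_set(image, exemplar, sample_fn, offset=3):
    """Turn a human-drawn exemplar contour into (input, target) pairs.
    Points on the contour are positive examples; points displaced
    vertically by a few pixels serve as negative examples."""
    inputs, targets = [], []
    for (r, c) in exemplar:
        inputs.append(sample_fn(image, (r, c)))        # on the contour
        targets.append(1.0)
        for off in (-offset, offset):                  # off the contour
            inputs.append(sample_fn(image, (r + off, c)))
            targets.append(0.0)
    return np.asarray(inputs), np.asarray(targets)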

     Figure 10 shows the pull-down menu used to set and monitor the system's parameters; the alternating red and green boxes are graphical depictions of the weights between the neural net's layers.

     After training, in the system's interactive mode, the tracer toggles into "auto-trace" mode and the system extends the contour at a speed appropriate for a user to monitor its progress.  When the network veers away from an acceptable path (which may happen when an image area was not represented in the training set), the user intervenes, backs up over the problematic area, and returns to "auto-trace" mode once the contour is again in a more typical region.
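     The interaction described above amounts to a loop that keeps extending the contour until the user pauses it or the network loses confidence.  The sketch below shows the shape of that loop, reusing candidate_steps and the network from the earlier sketches; user_paused is a hypothetical callback standing in for the interactive "auto-trace" toggle, and min_confidence is an assumed threshold rather than a parameter of the actual system.

import numpy as np

def auto_trace(trace, image, net, sample_fn, user_paused, min_confidence=0.5):
    """Extend the contour one pixel per iteration until the user pauses
    the loop or the best candidate score falls below a confidence floor,
    at which point control returns to the human tracer."""
    while not user_paused():
        candidates = candidate_steps(trace)
        scores = [float(net.forward(sample_fn(image, c))[0])
                  for c in candidates]
        best = int(np.argmax(scores))
        if scores[best] < min_confidence:
            break                                      # hand control back
        trace.append(candidates[best])
    return trace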

     Figure 11 shows an example of a semi-automated trace.  More than 90% of the pixels shown in blue were specified by the network; the remainder were specified by the human tracer.  The time spent generating the semi-automated trace is an order of magnitude less than the time required for a fully manual trace.  Quantitative comparisons follow in the Results section.

