This work demonstrates the effectiveness of detailed analysis of ear images for personal identification, using the NIST database of police mugshots. Two innovative methods are applied to boundary analysis. First, edge detection is performed only along rays emanating from a point near the center of the ear, yielding speed and quality advantages over traditional methods. Second, the concept of "interpretation breeding" is introduced: a synthesis is formed from two contrasting methods for finding the ear boundary. Well over 70% of the images can be segmented, an extremely good figure for data of this quality. Ear images are then cropped and standardized in several ways to compensate for image variations. For identification, a neural network computes a composite distance criterion. Individual distances include one based on the components of an "eigenear" basis, analogous to Pentland's eigenfaces, and one based on comparison of the most robust portion of the boundary curve. The best match to a random query is found 58% of the time, and the correct match is among the top five 77% of the time. These results compare favorably with those for frontal images from the NIST mugshot database.
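
The eigenear distance mentioned above can be sketched as follows, assuming the basis is built with PCA in the spirit of Pentland's eigenfaces; the function names, the number of components `k`, and the toy data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def eigenear_basis(train_ears, k):
    """Build a hypothetical eigenear basis via PCA.

    train_ears: (n, d) array of flattened, standardized ear images.
    Returns the mean ear and the top-k principal components.
    """
    mean = train_ears.mean(axis=0)
    centered = train_ears - mean
    # SVD of the centered data yields the principal components directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]  # eigenears, shape (k, d)

def eigenear_distance(a, b, mean, basis):
    """Euclidean distance between two ears in eigenear coordinates."""
    ca = basis @ (a - mean)
    cb = basis @ (b - mean)
    return np.linalg.norm(ca - cb)

# Toy usage on random data standing in for cropped ear images.
rng = np.random.default_rng(0)
ears = rng.normal(size=(20, 64))  # 20 "ears", 64 pixels each
mean, basis = eigenear_basis(ears, k=5)
d = eigenear_distance(ears[0], ears[1], mean, basis)
```

In a full system such a distance would be one input among several to the composite criterion computed by the neural network.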