Real-Time Shape-from-Silhouette

Dan Small

M.S., Computer Science, University of New Mexico, 2001

ABSTRACT

The computer vision field has undergone a revolution of sorts in the past five years. Moore's law has driven real-time image processing from the domain of dedicated, expensive hardware to that of commercial off-the-shelf computers. This thesis describes our work on the design, analysis, and implementation of a Real-Time Shape-from-Silhouette Sensor (RTS^3). The system produces time-varying volumetric data at real-time rates (10-30 Hz), in the form of binary volumetric images. Until recently, using this technique in a real-time system was impractical due to the computational burden. In this thesis we review previous work in the field and derive the mathematics behind camera calibration, silhouette extraction, and shape-from-silhouette. Our sensor implementation uses four color camera/frame-grabber pairs and a single high-end Pentium III computer, with the cameras configured to observe a common volume. This hardware runs our RTS^3 software to track volumetric motion. Two shape-from-silhouette algorithms were implemented and their relative performance compared. Lastly, an application of the sensor to the problem of generating synthetic views was explored.
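For readers unfamiliar with the technique, the following is a minimal sketch of the core shape-from-silhouette step: each candidate voxel is projected into every calibrated camera and retained only if all projections fall inside that camera's binary silhouette. This is an illustrative example only, not the RTS^3 implementation described in the thesis; the names proj_mats, silhouettes, and voxel_centers are hypothetical placeholders.

    # Illustrative voxel carving, assuming each camera is described by a 3x4
    # projection matrix and a binary silhouette mask (hypothetical inputs,
    # not the thesis's RTS^3 code).
    import numpy as np

    def carve(voxel_centers, proj_mats, silhouettes):
        """voxel_centers: (N, 3) world points.
        proj_mats: list of 3x4 camera projection matrices.
        silhouettes: list of (H, W) boolean foreground masks.
        Returns an (N,) boolean occupancy array."""
        n = len(voxel_centers)
        pts_h = np.hstack([voxel_centers, np.ones((n, 1))])   # homogeneous coords
        occupied = np.ones(n, dtype=bool)
        for P, mask in zip(proj_mats, silhouettes):
            uvw = pts_h @ P.T                                  # project into image
            u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
            v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
            h, w = mask.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(n, dtype=bool)
            hit[inside] = mask[v[inside], u[inside]]
            occupied &= hit                                    # intersect silhouette cones
        return occupied

The thesis itself compares two shape-from-silhouette algorithms whose details differ from this brute-force sketch; the sketch only conveys the intersection-of-silhouette-cones idea.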

dan.ps.Z (6595K compressed PostScript).