3D model reconstruction using photo consistency

Abstract

Model reconstruction using photoconsistency refers to a method that creates a photohull,

an approximate computer model, using multiple calibrated camera views of an object. The

term photoconsistency refers to the concept that is used to calculate the photohull from the

camera views. A computer model surface is considered photoconsistent if the appearance of

that surface agrees with the appearance of the surface of the real world object from all camera

viewpoints.

This thesis presents the work done in implementing some concepts and approaches described

in the literature. A photoconsistent voxel-based method was used to generate the photohull.

An algorithm based on this method calculates the geometry of the photohull by removing

inconsistent voxels from an initial spherical volume until the resultant appearance of the volume

is consistent with all the camera views. A photoconsistency cost function is used to determine

the consistency of a voxel. This cost function is based on the colours of the pixels of the camera

views that correspond to the portion of the surface that a particular voxel is representing.
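The carving procedure described above can be sketched roughly as follows. The voxel grid resolution, the spherical initial volume, the 6-neighbour surface test, and the shape of the cost callback are illustrative assumptions, not the implementation used in the thesis:

```python
import numpy as np

def surface_voxels(occupied):
    # A voxel lies on the surface if at least one of its six
    # face-neighbours is empty (assumed surface definition).
    padded = np.pad(occupied, 1, constant_values=False)
    g = occupied.shape
    all_neighbours_full = np.ones_like(occupied, dtype=bool)
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        all_neighbours_full &= padded[1 + dx:1 + dx + g[0],
                                      1 + dy:1 + dy + g[1],
                                      1 + dz:1 + dz + g[2]]
    return occupied & ~all_neighbours_full

def carve_photohull(cost, threshold, grid_size=16):
    # Initial volume: a solid sphere of occupied voxels.
    c = (grid_size - 1) / 2.0
    i, j, k = np.indices((grid_size,) * 3)
    occupied = (i - c) ** 2 + (j - c) ** 2 + (k - c) ** 2 <= c ** 2
    # Remove inconsistent surface voxels until every remaining
    # surface voxel is photoconsistent from all viewpoints.
    while True:
        remove = np.zeros_like(occupied)
        for x, y, z in np.argwhere(surface_voxels(occupied)):
            if cost(x, y, z) > threshold:
                remove[x, y, z] = True
        if not remove.any():
            return occupied
        occupied &= ~remove
```

Here `cost(x, y, z)` stands in for the photoconsistency cost function; carving stops once no surface voxel exceeds the threshold, so only surface voxels are ever tested, and newly exposed voxels are tested on later iterations.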

The primary cost function used in this thesis is the maximum RMS error between the colour

of the voxel, determined by the mean of all the pixel colours from all camera views that

can see the voxel, and the pixel colours obtained from each camera view. A threshold is

used to determine whether the photoconsistency error of a voxel marks it as consistent or

inconsistent. An estimation algorithm is used to determine an approximation to the threshold

that would correspond to the best model reconstruction results.
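Under the assumption that each camera view contributes a small set of RGB samples for the voxel, the maximum-RMS cost described above can be sketched as:

```python
import numpy as np

def photoconsistency_cost(pixel_colours):
    # pixel_colours: one (n_i, 3) array of RGB samples per camera
    # view that can see the voxel (assumed input format).
    samples = np.vstack(pixel_colours)
    voxel_colour = samples.mean(axis=0)  # mean colour over all views
    # RMS error between each view's samples and the voxel colour;
    # the cost is the worst (maximum) error over the views.
    return max(np.sqrt(np.mean((view - voxel_colour) ** 2))
               for view in pixel_colours)
```

A voxel would then be labelled inconsistent when this cost exceeds the chosen threshold.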

The accuracy of the constructed photohull is determined by comparing a rendering of the

photohull with a camera image that was not used in the reconstruction process. A

silhouette is used to remove the background from the images.
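This comparison can be sketched as an RMS colour error over the foreground pixels only; the array shapes and the boolean silhouette encoding are assumptions for illustration:

```python
import numpy as np

def reconstruction_error(rendered, reference, silhouette):
    # rendered, reference: (H, W, 3) float images; silhouette: (H, W)
    # mask that is True on the object's foreground pixels.
    mask = silhouette.astype(bool)
    diff = rendered[mask] - reference[mask]  # foreground pixels only
    return float(np.sqrt(np.mean(diff ** 2)))
```

Restricting the error to the silhouette prevents background pixels, which the photohull does not model, from dominating the accuracy measure.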