The goal of this research project is to investigate and develop the adaptations to classic 3D reconstruction methods from computer vision that are necessary to apply them to underwater images. Applications can be found in the areas of Geology and Archaeology. The project is therefore a collaboration with the GEOMAR Helmholtz Centre for Ocean Research Kiel, the scientific divers group of Kiel University, and the group for maritime and limnic archaeology of Kiel University. In addition to the DFG funding, parts of the project have been financed by the Future Ocean Excellence Cluster, whose objective is to gain knowledge about a whole range of topics concerning the so far largely unexplored deep ocean.
Some of the image data to be examined, coming from the area of Geology, has been captured at great water depths, for example using the ROV Kiel 6000 (Remotely Operated Vehicle), which can reach depths of 6000 m.
Equipped with several cameras, one of them an HDTV camera, it is used to examine black smokers, a type of hydrothermal vent found, for example, at the bottom of the Atlantic Ocean.
Because the diving time during which scientists need to complete a variety of examinations is limited, the task of computer vision is to compute 3D reconstructions of the black smokers. In order to examine and measure the vents after the dive, a 3D model including the absolute scale needs to be determined. The 3D reconstructions are computed with a state-of-the-art Structure-from-Motion approach that has been adapted to the special conditions of the underwater environment.
Special characteristics of the underwater imaging environment in general, and of the black smokers specifically, that need to be considered include:
- refraction along the optical path causes errors in geometry estimation,
- scattering and absorption of light cause a green or blue hue and low contrast in the images and therefore impede feature matching, and
- floating particles, moving animals, and smoke violate the rigid-scene constraint.
While traveling through the water, light is attenuated and scattered depending on the distance traveled, causing the typical green or blue hue and the low contrast and visibility of underwater images. The Jaffe-McGlamery model can be used to describe these effects, and simplified versions of its equations are applied in many color correction algorithms in the literature. Usually, the distance the light traveled through the water needs to be known. After running the SfM algorithm and computing the final 3D model, those distances are known, which makes it possible to apply a physics-based, simplified model equation for color correction to the texture image.
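As a minimal sketch of such a physics-based correction, the following inverts a commonly used simplification of the Jaffe-McGlamery model, where each channel is a mix of the attenuated scene color and the background (veiling) light. The attenuation coefficients and veiling-light color below are illustrative placeholder values, not parameters from this project:

```python
import numpy as np

# Illustrative per-meter attenuation (R, G, B): red fades fastest in water.
ATTENUATION = np.array([0.40, 0.10, 0.07])
# Illustrative veiling-light (background water) color B_c.
VEILING_LIGHT = np.array([0.05, 0.30, 0.35])

def correct_color(image, distances):
    """Invert a simplified underwater image formation model per pixel.

    image:     (H, W, 3) floats in [0, 1], observed underwater colors I_c
    distances: (H, W) floats, water path length d in meters; in the project
               these are available after SfM, from the final 3D model
    """
    # transmission t_c = exp(-eta_c * d), per pixel and channel
    t = np.exp(-distances[..., None] * ATTENUATION)
    # observed I = J * t + B * (1 - t)  =>  scene color J = (I - B*(1 - t)) / t
    restored = (image - VEILING_LIGHT * (1.0 - t)) / t
    return np.clip(restored, 0.0, 1.0)

# round trip check: synthesize an "underwater" image from a known scene color
scene = np.full((2, 2, 3), [0.8, 0.5, 0.3])
d = np.full((2, 2), 3.0)
t = np.exp(-d[..., None] * ATTENUATION)
observed = scene * t + VEILING_LIGHT * (1.0 - t)
print(np.allclose(correct_color(observed, d), scene))  # True
```

The farther a pixel's surface point is from the camera, the smaller the transmission, so the division amplifies the remaining red signal most, which is why such corrections restore the warm colors lost underwater.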
Refraction at the underwater housing causes light rays to change their direction when entering the air inside the housing. To be exact, the rays are refracted twice: once when entering the glass and again when entering the air. In the literature, the perspective pinhole camera model including distortion is commonly used for computing the reconstruction. A calibration below water causes the focal length, principal point, and radial distortion parameters to absorb part of the error, so the perspective calibration can approximate the effects. However, a systematic model error caused by refraction remains, because the single-viewpoint assumption is invalid.
In the image, this can be observed by tracing the rays in water while ignoring refraction (dashed lines): they do not intersect in the center of projection. It can easily be shown that this model error leads to an accumulating error in pose estimation when the perspective model is used for pose computation.
Therefore, refraction has been modeled explicitly in the whole reconstruction pipeline:
- calibration of the underwater housing's glass port, assuming the camera's intrinsics are known,
- a Structure-from-Motion algorithm that explicitly models refraction, and
- dense depth computation using a refractive plane sweep method.
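A core building block shared by such refractive pipelines is the forward projection of a 3D point through the flat port, which no longer has a closed perspective form. The sketch below, under the same thin-port assumptions and with illustrative numbers, finds the port crossing point by bisection on the Snell residual; the resulting in-air ray angle then maps to a pixel via the intrinsics. This is a generic illustration, not the project's actual implementation:

```python
import numpy as np

# Flat thin port at z = D, camera at the origin looking along +z;
# the 3D point lies in water, in the meridional plane through the axis.
N_AIR, N_WATER, D = 1.0, 1.33, 0.1

def project_refractive(px, pz, iters=60):
    """Project the in-water point (px, pz) through the flat port.

    Solves Snell's law n_air*sin(t1) = n_water*sin(t2) for the port
    crossing coordinate x by bisection (the residual is monotone on
    [0, px]). Returns the in-air ray angle, which fixes the pixel.
    """
    lo, hi = 0.0, px
    for _ in range(iters):
        x = 0.5 * (lo + hi)
        sin1 = x / np.hypot(x, D)                   # in-air angle at the port
        sin2 = (px - x) / np.hypot(px - x, pz - D)  # in-water angle
        if N_AIR * sin1 < N_WATER * sin2:
            lo = x  # residual negative: crossing lies farther out
        else:
            hi = x
    x = 0.5 * (lo + hi)
    return np.arctan2(x, D)

# for an off-axis point 2 m away, the refractive projection differs
# noticeably from the naive pinhole projection of the same point
theta_refr = project_refractive(0.8, 2.0)
theta_pinhole = np.arctan2(0.8, 2.0)
print(theta_refr, theta_pinhole)
```

The same per-point solve is what makes refractive Structure-from-Motion and refractive plane sweep more expensive than their perspective counterparts: every reprojection requires a small root-finding step instead of a matrix-vector product.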
The corresponding publications for all three components can be found here.
The complete pipeline allowed, for the first time, 3D models to be reconstructed from multiple images captured by monocular or stereo cameras with explicitly modeled refraction at the underwater housing. The major conclusion was that the systematic model error caused by using the perspective camera model can be eliminated completely by the proposed refractive reconstruction.
The following figure shows results on real data captured in a tank in the lab. From left to right: an exemplary input image, the segmented input image, and results for two different camera-glass configurations. Note that the red camera trajectory and point cloud are the result of perspective reconstruction, while the blue camera trajectory and point cloud were computed using the proposed refractive method. The result in the right image shows that the perspective reconstruction failed, while the refractive method did not.
Input image and resulting camera path and 3D point cloud from an underwater volcano near the Cape Verdes:
In an underwater cave system in Yucatan, Mexico, archaeologists found a skull, which resulted in the following reconstruction: