(February 2004 - June 2006)
Artesas (Augmented Reality for TEchnical Service ApplicationS) aimed to provide Augmented Reality (AR) systems for use in complex industrial service situations. Using a see-through display, the mechanic is provided with additional information about the objects he sees, overlaid in a perspectively correct manner. As a follow-up to the "ARVIKA" project, the main goal of Artesas was to replace the marker-based tracking system with a marker-less one. The official home page can be found at www.artesas.de.

To achieve the goals of robust, fast, and reliable marker-less tracking in uncooperative environments, the following approaches were used in combination:

  • a model-based tracking system which uses prepared CAD models of the objects to be tracked,
  • a real-time Structure-from-Motion system using a fisheye camera, which ensures long-term stability and robustness against motion blur and occlusions,
  • an inertial sensor used to measure head rotation,
  • a motion model which predicts position and orientation,
  • a sophisticated sensor fusion which combines all data and calculates the final position and orientation of the user's head.
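The fusion scheme actually used in the project is not detailed here. As a minimal sketch of the underlying idea, combining a fast but drifting inertial signal with a slower but drift-free vision estimate, a complementary filter on a single heading angle could look like this (the function name and the alpha value are illustrative assumptions, not the project's implementation):

```python
def complementary_fuse(yaw_prev, gyro_rate, dt, vision_yaw, alpha=0.98):
    """Fuse an integrated gyro heading with an absolute vision heading.

    The inertial prediction dominates on short time scales (alpha close
    to 1), while the vision estimate slowly corrects the gyro drift.
    """
    predicted = yaw_prev + gyro_rate * dt  # inertial prediction step
    return alpha * predicted + (1.0 - alpha) * vision_yaw
```

A full head-pose fusion would apply the same principle to 3D orientation (e.g. on quaternions) and position, typically inside a Kalman-filter framework.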


Our main focus in this project was the development of the real-time Structure-from-Motion approach to estimate the camera movement. Besides precision, the real-time aspect was crucial in the project. On a standard laptop we achieved 20 fps with our model-based tracking approach.

There are three steps in the tracking approach which need to be solved:

  1. Registration in the scene
  2. Tracking over sequential images
  3. Reinitialization in case of failure
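These three steps can be viewed as a small state machine. The sketch below is an illustrative assumption about the control flow (in particular, falling back from failed reinitialization to full registration is not stated in the text):

```python
from enum import Enum, auto

class State(Enum):
    REGISTER = auto()  # step 1: line-based registration in the scene
    TRACK = auto()     # step 2: tracking over sequential images
    REINIT = auto()    # step 3: reinitialization in case of failure

def step(state, registered=False, tracking_ok=False, reinit_ok=False):
    """One transition of the hypothetical tracking state machine."""
    if state is State.REGISTER:
        return State.TRACK if registered else State.REGISTER
    if state is State.TRACK:
        return State.TRACK if tracking_ok else State.REINIT
    # In REINIT: resume tracking on a successful match, otherwise
    # (assumption) fall back to full registration.
    return State.TRACK if reinit_ok else State.REGISTER
```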

1. Registration

The model-based pose tracking approach starts with a line-based registration of the current image to the model of the object.


For this, the gradient image is generated and a line model of the object is fitted to it in an iterative optimization process. This way the initial camera pose relative to the object's model is estimated.
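As a rough sketch of the scoring behind such a line-based fit: the real system optimizes the full camera pose, but the core idea is that a correctly placed model line lies on strong image gradients. The helper below (names and sampling scheme are illustrative assumptions) evaluates a single candidate line against the gradient image:

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def line_score(grad, p0, p1, n_samples=32):
    """Average gradient magnitude sampled along a projected model line.

    Registration would seek the pose whose projected line model
    maximizes scores like this one; here only one line is evaluated.
    Points are (x, y) pixel coordinates.
    """
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = (1 - ts)[:, None] * np.asarray(p0, float) + ts[:, None] * np.asarray(p1, float)
    rows = np.clip(np.round(pts[:, 1]).astype(int), 0, grad.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 0]).astype(int), 0, grad.shape[1] - 1)
    return float(grad[rows, cols].mean())
```

An iterative optimizer would perturb the pose, re-project the line model, and keep the pose with the higher accumulated score.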

2. Tracking

The tracking is based on KLT features and standard pose estimation. First, KLT features are detected, and for every feature a 3D point is generated by projecting the 2D feature onto the model. The depth of the point is taken from a rendered view of the model.
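Back-projecting a 2D feature with known depth into a 3D point can be sketched with a pinhole camera model. The intrinsics fx, fy, cx, cy are assumed parameters here; in the project the depth comes from a rendered view of the CAD model, while below it is simply passed in:

```python
import numpy as np

def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into a 3D point in
    camera coordinates, assuming an ideal pinhole camera."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])
```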


The gradient-based intensity features are tracked over time, and the pose is simultaneously estimated using the 2D/3D correspondences.
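Pose estimation from 2D/3D correspondences can be illustrated with the classic Direct Linear Transform (DLT), which recovers a 3x4 projection matrix from at least six correspondences. This is a linear stand-in for whatever estimator the project actually used, not its implementation:

```python
import numpy as np

def dlt_projection(pts3d, pts2d):
    """Estimate a 3x4 projection matrix from >= 6 correspondences.

    Each correspondence contributes two linear equations; the solution
    is the right singular vector of the smallest singular value.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with projection matrix P (pinhole model)."""
    h = P @ np.append(np.asarray(X, float), 1.0)
    return h[:2] / h[2]
```

In practice the tracker would refine such a linear estimate nonlinearly (e.g. by minimizing reprojection error) and handle outliers robustly.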

3. Reinitialization

Tracking may fail due to occlusion, corrupted images, or various other reasons, so for this case a reinitialization procedure has to be available. In the project, SIFT features were used for this purpose. While tracking, certain images are recorded as key images; for these key images, SIFT features are calculated and stored in a database. In the reinitialization step, the SIFT features of the current image are compared to the database features, and the closest match gives the current camera's pose.
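The database lookup can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test. SIFT extraction itself and the project's exact matching strategy are not specified, so the function below is purely illustrative (names and the ratio threshold are assumptions):

```python
import numpy as np

def best_key_image(query_desc, database, ratio=0.8):
    """Pick the key image whose stored descriptors best match the query.

    `database` maps key-image ids to (N, D) descriptor arrays; the score
    counts nearest-neighbour matches passing the ratio test (nearest
    distance clearly smaller than second-nearest distance).
    """
    def match_count(q, d):
        count = 0
        for desc in q:
            dists = np.linalg.norm(d - desc, axis=1)
            order = np.argsort(dists)
            if len(dists) > 1 and dists[order[0]] < ratio * dists[order[1]]:
                count += 1
        return count
    return max(database, key=lambda k: match_count(query_desc, database[k]))
```

The camera pose stored with the winning key image then serves as the restart point for the tracker.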


Created by tos. Last Modification: Wednesday 24 of March, 2010 12:21:33 CET by ischiller.