(February 2004 - June 2006)
Artesas (Augmented Reality for TEchnical Service ApplicationS) aimed to provide Augmented Reality (AR) systems for use in complex industrial service situations. Using a see-through display, the mechanic is presented with additional information about the objects in view, rendered in a perspectively correct manner. As a follow-up to the "ARVIKA" project, the main goal of the Artesas project was to replace the marker-based tracking system with a marker-less one. The official home page can be found at www.artesas.de.
To achieve the goals of robust, fast, and reliable marker-less tracking in uncooperative environments, several approaches were used in combination.
Our main focus in this project was the development of a real-time structure-from-motion approach to estimate the camera movement. Besides precision, the real-time aspect was crucial in the project. With a standard laptop, we achieved 20 fps with our model-based tracking approach.
The tracking approach consists of three steps that need to be solved: initialization, frame-to-frame tracking, and reinitialization.
The model-based pose tracking approach starts with a line-based registration of the current image to the model of the object. For this, a gradient image is generated, and a line model of the object is fitted to the gradient image in an iterative optimization process. This way, the initial camera pose relative to the object's model is estimated.
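The gradient image that the line model is fitted to can be computed with standard edge filters. The following is a minimal NumPy sketch of Sobel-based gradient magnitude (the project's actual filtering pipeline is not specified in the text; the function name and 3x3 Sobel kernels are illustrative assumptions):

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude of a grayscale image via 3x3 Sobel filters.

    Only the valid region is returned (no border padding), so the
    output is two pixels smaller than the input in each dimension.
    """
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])  # horizontal derivative kernel
    ky = kx.T                          # vertical derivative kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)
```

In practice such a filter would be run with an optimized convolution; the double loop here only keeps the sketch self-contained.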
The tracking is based on KLT features and standard pose estimation. First, KLT features are detected, and for every feature a 3D point is generated by projecting the 2D feature onto the model. The distance of the point is taken from a rendered view of the model.
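Lifting a 2D feature to a 3D model point amounts to inverting the pinhole projection using the depth read from the rendered model view. A minimal sketch under the standard pinhole camera model (the function name and intrinsics layout are assumptions, not the project's actual API):

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift a 2D image feature (u, v) to a 3D point in camera coordinates.

    depth is the distance along the optical axis, as read from the
    rendered view of the model; K is the 3x3 intrinsic camera matrix
    [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])
```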
The gradient-based intensity features are tracked over time, and the pose is simultaneously estimated from the 2D/3D correspondences.
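Estimating the camera pose from such 2D/3D correspondences means minimizing the reprojection error. The following is a minimal Gauss-Newton sketch with a numerical Jacobian, parameterizing the pose as an axis-angle rotation plus translation; this is a generic illustration of the technique, not the project's actual solver:

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)

def project(pose, pts3d, K):
    """Project Nx3 model points with pose = (axis-angle, translation)."""
    w, t = pose[:3], pose[3:]
    P = (rodrigues(w) @ pts3d.T).T + t
    uv = (K @ P.T).T
    return uv[:, :2] / uv[:, 2:3]

def estimate_pose(pts3d, pts2d, K, pose0=None, iters=20):
    """Gauss-Newton minimization of the 2D reprojection error."""
    pose = np.zeros(6) if pose0 is None else pose0.astype(float).copy()
    eps = 1e-6
    for _ in range(iters):
        r = (project(pose, pts3d, K) - pts2d).ravel()
        J = np.zeros((r.size, 6))
        for j in range(6):  # numerical Jacobian, one pose parameter at a time
            d = np.zeros(6)
            d[j] = eps
            J[:, j] = ((project(pose + d, pts3d, K) - pts2d).ravel() - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        pose += step
        if np.linalg.norm(step) < 1e-10:
            break
    return pose
```

A production tracker would use an analytic Jacobian and a robust cost to downweight outlier correspondences; the numerical Jacobian keeps the sketch short.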
Tracking may fail due to occlusion, corrupted images, or various other reasons, so a reinitialization procedure has to be available. In the project, SIFT features were used for this purpose. During tracking, certain images are recorded as key images; for these key images, SIFT features are calculated and stored in a database. In the reinitialization step, the SIFT features of the current image are compared to the database features, and the closest match gives the current camera's position.
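The database lookup can be sketched as nearest-neighbor descriptor matching with a ratio test, followed by voting for the key image that collects the most matches. This is a minimal NumPy illustration (the function name, the voting scheme, and the 0.8 ratio threshold are assumptions; the project's actual matching strategy is not detailed in the text):

```python
import numpy as np

def match_to_database(query_desc, db_desc, db_keyframe_ids, ratio=0.8):
    """Return the id of the key image best matching the query descriptors.

    query_desc: MxD descriptors of the current image.
    db_desc:    NxD descriptors pooled from all key images.
    db_keyframe_ids: length-N list mapping each database descriptor
                     to the key image it came from.
    A match is accepted only if the nearest neighbor is clearly closer
    than the second nearest (ratio test); each accepted match votes for
    its key image, and the key image with the most votes wins.
    """
    votes = {}
    for d in query_desc:
        dist = np.linalg.norm(db_desc - d, axis=1)
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:
            kf = db_keyframe_ids[best]
            votes[kf] = votes.get(kf, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

The stored pose of the winning key image then serves as the camera position for restarting the tracker.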