An increasing number of video and film productions combine virtual components such as avatars with real scenes and real actors. Origami aims to simplify these productions by providing new tools and technologies. Our main contribution to the project is the development of tools that generate 3D background models of real scenes from camera images and video streams.
This work can be divided into four parts:
- Plenoptic sampling with uncalibrated cameras (recording)
- Camera calibration
- Multiview stereo scene reconstruction and 3D model generation
- Plenoptic modelling and rendering
To record large and complex scenes efficiently, we built a hand-held multi-camera rig that can be operated by a single person. The prototype system consists of four cameras (Sony DFW X-700) connected to two laptop computers via IEEE 1394 (also known as FireWire or i-Link). The system can be extended with more cameras, more laptops or, for indoor use, high-performance PCs to increase the frame rate. The computers are synchronised over standard Ethernet with a specialised protocol, while the cameras are synchronised by external trigger equipment connected to the parallel port of one PC. The left image gives an overview of the system, while the right image shows it in use.
With this system, a real scene can be scanned in a simple walk-by at four different heights. All cameras are uncalibrated, which reduces setup time and avoids problems caused by mis-calibration. The rigid coupling of the cameras makes it possible to calibrate them afterwards, even for difficult scenes.
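The details of the Ethernet synchronisation protocol are not given here, but the basic task is to group the frames arriving from several computers into synchronised sets. A minimal sketch of one way to do this, matching frames by nearest timestamp (the function name and tolerance are illustrative, not the actual protocol):

```python
# Illustrative sketch: group frames from several cameras into synchronised
# sets by nearest timestamp. With hardware-triggered cameras the tolerance
# can be very tight; values and names here are only for illustration.

def group_frames(streams, tolerance=0.010):
    """streams: one list per camera of (timestamp_seconds, frame_id) pairs,
    each sorted by timestamp. Returns groups of frame_ids whose timestamps
    all lie within `tolerance` seconds of the first camera's frame."""
    groups = []
    for t_ref, ref_id in streams[0]:
        group = [ref_id]
        for other in streams[1:]:
            # pick the frame closest in time to the reference frame
            best = min(other, key=lambda f: abs(f[0] - t_ref))
            if abs(best[0] - t_ref) > tolerance:
                break  # no matching frame in this stream; drop the set
            group.append(best[1])
        else:
            groups.append(group)
    return groups
```

A frame set is only kept when every camera contributes a frame close enough in time, so dropped frames on one laptop do not produce mismatched sets.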
Dinosaur in Natural History Museum (Overview)
In October 2002 we recorded several scenes in the entrance hall of the Natural History Museum in London. Figure 3 gives an overview of the dinosaur skeleton that we scanned, and Figure 4 shows one of the 214 images of the Dino sequence.
From these depth maps, a 3D surface model can be obtained by adjusting a triangle mesh according to the depth values. Together with the original image as texture, this mesh can be stored as a VRML model and used or modified with standard tools.
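The mesh-from-depth-map step can be sketched as follows: one vertex per pixel, two triangles per pixel quad, and no triangles across large depth discontinuities (occlusion boundaries). This is a minimal illustration, not the actual adjustment procedure used in the project; the discontinuity threshold is an assumed parameter.

```python
def depth_to_mesh(depth, max_jump=0.5):
    """Turn a dense depth map (list of rows of depth values) into a simple
    triangle mesh: vertices (x, y, z) and triangle index triples.
    Quads spanning a depth jump larger than `max_jump` (a likely
    occlusion boundary) are left untriangulated."""
    h, w = len(depth), len(depth[0])
    vertices = [(x, y, depth[y][x]) for y in range(h) for x in range(w)]
    triangles = []
    for y in range(h - 1):
        for x in range(w - 1):
            corners = (depth[y][x], depth[y][x + 1],
                       depth[y + 1][x], depth[y + 1][x + 1])
            if max(corners) - min(corners) > max_jump:
                continue  # do not connect foreground to background
            i = y * w + x
            triangles.append((i, i + 1, i + w))
            triangles.append((i + 1, i + w + 1, i + w))
    return vertices, triangles
```

Such a vertex/triangle list maps directly onto a VRML IndexedFaceSet, with the image texture applied via per-vertex texture coordinates.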
Please note the complex geometry with many occlusions.
Download movie with depth maps from one camera (4MB)
For large and complex scenes it is often impossible to generate a single consistent mesh from several hundred depth maps. Using the images, depth maps and calibration, however, novel views can be rendered by generating local geometry and applying textures. This is called image-based rendering (IBR). We developed several IBR methods to use these so-called plenoptic models as backgrounds for virtual productions.
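At the core of such rendering is a per-pixel warp: a source pixel with known depth is back-projected to 3D and re-projected into the novel camera. A minimal sketch under a simple pinhole model (the matrices and the function name are illustrative; the actual IBR methods render textured local geometry rather than single pixels):

```python
import numpy as np

def reproject(u, v, z, K_src, K_dst, R, t):
    """Warp a source pixel (u, v) with depth z into a novel view.
    K_src, K_dst: 3x3 intrinsic matrices; R, t: rotation and translation
    from the source to the destination camera. Pinhole model only."""
    # back-project the pixel to a 3D point in the source camera frame
    p = z * np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    # transform into the destination camera frame and project
    q = K_dst @ (R @ p + t)
    return q[0] / q[2], q[1] / q[2]
```

Warping the corners of small image patches ("quads") this way, and texturing each quad with the original image, gives the kind of local geometry used for novel-view synthesis.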
Download movie rendered with adaptive quads (47MB)