Improving Indoors Positioning with Augmented Reality

By James Anderson •  Updated: 09/22/12 •  4 min read

Satellite-based navigation systems can help you get from point to point whether you are walking, biking, or driving, as long as you have a GPS signal. This ubiquitous positioning technology generally works well outdoors, whether in urban, suburban, or rural areas, but it isn’t much use indoors.

If you have to find your way around a big, complicated building like a hospital or an airport, you usually have to depend on vague or even unhelpful signage. But don’t worry: researchers from the Technische Universität München (TUM) have devised a new technology that does not depend on GPS.

The researchers call it the NAVVIS system, an acronym for Navigation anhand visueller Informationen zur erweiterten Wahrnehmung der Umgebung, or, in English, “navigation based on visual information for extended perception of the environment.”

The system uses visual information and realistic 3D images to point users in the right direction. The researchers had to develop a special location recognition system, which starts by photographing a building and simultaneously mapping prominent features such as stairs and signs.

You Are Here

A smartphone app then uses these mapped images to determine a user’s current location. All the user has to do is take a photo of their surroundings.

NAVVIS uses realistic 3D images to point users in the right direction. Credit: G. Schroth/TUM

In a fraction of a second, the program compares the photo with the images in its database and works out the user’s position, down to the nearest meter, along with the direction they are facing. The app then displays arrows that point the way in a 3D view.
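The article doesn’t describe the matching pipeline in detail, but the general idea, recognizing a query photo against a database of mapped images, can be sketched with off-the-shelf local features. The snippet below is a minimal illustration using OpenCV’s ORB descriptors and brute-force matching; the feature type, thresholds, and the idea of attaching a known pose to each database image are assumptions, not the researchers’ actual method.

```python
# Minimal sketch of image-retrieval localization: match a query photo against
# a database of mapped images and return the best-matching view. ORB features
# and brute-force Hamming matching are stand-ins; the real NAVVIS pipeline is
# not described in this article.
import cv2

def build_database(image_paths):
    """Extract ORB descriptors for every mapped image (offline step)."""
    orb = cv2.ORB_create(nfeatures=1000)
    db = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        kps, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            db.append((path, desc))
    return db

def localize(query_path, db, min_matches=25):
    """Return the database image whose descriptors best match the query photo."""
    orb = cv2.ORB_create(nfeatures=1000)
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(query, None)
    if q_desc is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    best_path, best_score = None, 0
    for path, desc in db:
        matches = matcher.match(q_desc, desc)
        # Count only reasonably close matches as evidence for this view.
        good = [m for m in matches if m.distance < 40]
        if len(good) > best_score:
            best_path, best_score = path, len(good)

    # In a full system, each mapped image would carry a known pose, so returning
    # the best-matching view approximates the user's position and heading.
    return best_path if best_score >= min_matches else None
```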

So it’s basically like any other augmented reality app, except that where Wikitude or Layar use GPS for geolocation, NAVVIS proposes to use a gigantic photo database. This would seem to limit each app to a single building.

NAVVIS is now being tested at TUM: “With multiple floors and winding corridors, the main campus is something of a maze after several decades of expansion. This makes it an ideal testing ground for NAVVIS,” says project head Georg Schroth.

The database for the first floor at TUM alone comprises 9,438 images, taking up roughly 1 GB, and the associated point cloud and metadata files are needed on top of that. I hope the database is stored on a server, not in the app itself.
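If the database does live on a server, the app itself only has to upload a photo and receive a pose estimate in return. The following is a hypothetical client-side sketch of that round trip; the endpoint URL, field names, and JSON response format are invented for illustration and are not part of NAVVIS.

```python
# Hypothetical client-side query, assuming the image database and matching run
# on a server: the phone uploads a photo and receives an estimated pose.
# The endpoint and response fields below are illustrative placeholders.
import requests

def query_location(photo_path, server="https://example.com/navvis/localize"):
    with open(photo_path, "rb") as f:
        resp = requests.post(server, files={"image": f}, timeout=5.0)
    resp.raise_for_status()
    # e.g. {"x": 12.3, "y": 4.5, "floor": 1, "heading_deg": 270}
    return resp.json()
```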

Other Potential NAVVIS Uses

NAVVIS has other potential uses besides navigation, explains Schroth’s colleague Robert Huitl:

“The software can also be used for augmented reality applications if you add on special programs. So for instance, visitors to the Louvre would not only be able to locate the Mona Lisa, but also view information about the painting or find directions to other works by da Vinci.”

Another possible application could be virtual tours on a PC or smartphone.

NAVVIS is a suitable technology for any location beyond the reach of satellite navigation. Wireless network signals can also be used to help establish an approximate position (see the sketch below).
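One common way to get an approximate indoor position from wireless networks is Wi-Fi fingerprinting: signal strengths observed at known spots are surveyed once, and a live scan is matched against that survey. The sketch below illustrates this generic technique; the access-point names, RSSI values, and locations are made up, and the article does not say which wireless method the researchers have in mind.

```python
# Illustrative Wi-Fi fingerprinting sketch: an approximate indoor position is
# estimated by comparing observed access-point signal strengths (RSSI, in dBm)
# against previously surveyed fingerprints. Generic technique, not NAVVIS;
# all names and values below are invented.
import math

# Surveyed fingerprints: location label -> {access point: mean RSSI in dBm}
FINGERPRINTS = {
    "corridor_1F_east": {"ap-17": -48, "ap-22": -63, "ap-31": -80},
    "corridor_1F_west": {"ap-17": -71, "ap-22": -52, "ap-31": -66},
    "stairwell_B":      {"ap-17": -82, "ap-22": -74, "ap-31": -45},
}

def estimate_location(scan, missing_rssi=-95):
    """Return the fingerprint closest (Euclidean distance in dBm) to a live scan."""
    best, best_dist = None, float("inf")
    for label, ref in FINGERPRINTS.items():
        aps = set(ref) | set(scan)
        dist = math.sqrt(sum(
            (scan.get(ap, missing_rssi) - ref.get(ap, missing_rssi)) ** 2
            for ap in aps
        ))
        if dist < best_dist:
            best, best_dist = label, dist
    return best

print(estimate_location({"ap-17": -50, "ap-22": -65, "ap-31": -78}))
# -> corridor_1F_east
```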

There is a catch, however: some building interiors are constantly changing. Signs are sometimes removed, and large buildings tend to have construction work going on from time to time. Georg Schroth explains how NAVVIS stays up to date:

“The system doesn’t just position the user, it also utilizes the user’s photos to record changes in the interior and overwrite obsolete data.”

To map buildings, the researchers use a mapping trolley equipped with two laser scanners, single-lens reflex cameras, and a high-resolution 360-degree camera.

As the trolley moves along a corridor, the two lasers scan its dimensions horizontally and vertically and build a virtual map as three-dimensional point clouds. Software then overlays the photos onto these points, producing a realistic three-dimensional view.
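The article doesn’t give the projection details, but overlaying photos onto a laser-scanned point cloud typically amounts to projecting each 3D point into a calibrated camera image and sampling the pixel color there. Below is a minimal pinhole-camera sketch of that step; the intrinsics K and pose (R, t) are placeholders for whatever calibration the trolley’s cameras actually use.

```python
# Minimal sketch of coloring a laser-scanned point cloud from a photo: each 3D
# point is projected into the camera image with a pinhole model and picks up
# the color of the pixel it lands on. K, R, and t are placeholder calibration
# values, not the trolley's actual parameters.
import numpy as np

def colorize(points_world, image, K, R, t):
    """points_world: (N, 3) points; K: 3x3 intrinsics; R, t: world-to-camera pose."""
    h, w, _ = image.shape
    cam = points_world @ R.T + t              # transform points into the camera frame
    z = cam[:, 2]
    in_front = z > 0                          # keep only points in front of the camera
    proj = cam @ K.T                          # pinhole projection (homogeneous pixels)
    pix = proj[:, :2] / np.where(in_front, z, 1.0)[:, None]  # perspective divide
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = np.zeros((len(points_world), 3), dtype=np.uint8)
    colors[visible] = image[v[visible], u[visible]]  # sample pixel colors per point
    return colors, visible
```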

Reference: Robert Huitl, Georg Schroth, Sebastian Hilsenbeck, Florian Schweiger, and Eckehard Steinbach, “TUMindoor: An extensive image and point cloud dataset for visual indoor localization and mapping,” in IEEE International Conference on Image Processing (ICIP), Orlando, September 2012.