It's 2008. You're flying into Los Angeles for the first time. As soon as you deplane at LAX, you walk over to the NaviSpecs kiosk.
After filling out a short online survey to indicate your demographic and psychographic profiles, you hold your smart eyeglasses in front of the NaviSpecs beaming pad. In a few seconds, a green light blinks and a speaker emits a cheerful tone, assuring you that the latest information and reviews have been uploaded to your NaviSpecs. You slip them on and head out of the airport with the confidence of a native Angeleno.
Later that day, after you've checked into your hotel room, you decide to grab a bite to eat. As you walk down the Sunset Strip wearing your voice-controlled NaviSpecs, you ask them to display ratings for the different restaurants you see on the street.
Based on the preferences survey you filled out earlier, your NaviSpecs will draw red X's over restaurants that have gotten bad reviews from other people who match your profile, and green arrows directing you toward the kinds of places people who share your tastes have given a thumbs-up.
It'll be a number of years before we begin to see something along the lines of NaviSpecs, but several companies are already creating simpler forms of a technology that's known as "augmented reality."
You've heard of virtual reality, the technology of generating artificial worlds within a computer and letting people interact with objects and other users through input/output devices that allow them to see, hear, and feel things in the virtual environment. (Massively multiplayer games on the Web are a crude form of virtual reality.)
But augmented reality isn't about artificial environments. It's about taking real things and real places and adding a layer of computer generated information on top of them (like the red X's in the NaviSpecs example above).
By combining the real environment with a computer-generated one that contains contextual and geographical information, augmented reality can provide a better understanding of the real world, and will be useful in a tremendous variety of wireless applications - such as multiplayer games, navigation, context-aware advertising, and remote controlled robotics - both in the near future and in decades to come.
Some of the first augmented reality applications for mobile devices will be for "social navigation." Currently, when you are preparing to travel to another country for the first time, you probably call your friends who've already been there and ask them for recommendations on where to stay, what to eat, where to shop, what to see, and who to visit.
Social navigation takes this kind of useful information and makes it available in real time. In the same way that search engines place the most frequently visited sites at the top of the results, social navigation applications harness the power of many people's likes and dislikes in the real world and push the most suitable data to the top of a traveler's request for information.
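To make the idea concrete, here is a minimal sketch of that kind of ranking, assuming a hypothetical review format of my own invention: each place is scored only by reviewers who share at least one profile tag with the traveler, and the best-liked places rise to the top.

```python
# Hypothetical sketch of profile-matched review ranking; the data
# model is an illustration, not any real service's design.
from dataclasses import dataclass

@dataclass
class Review:
    place: str
    rating: int                   # 1 (bad) to 5 (great)
    reviewer_profile: frozenset   # tags from a preferences survey

def rank_places(reviews, my_profile, min_overlap=1):
    """Average the ratings from reviewers who share at least
    min_overlap profile tags with the traveler, highest first."""
    totals = {}
    for r in reviews:
        if len(r.reviewer_profile & my_profile) >= min_overlap:
            s, n = totals.get(r.place, (0, 0))
            totals[r.place] = (s + r.rating, n + 1)
    return sorted(((s / n, place) for place, (s, n) in totals.items()),
                  reverse=True)

reviews = [
    Review("Taco Stand", 5, frozenset({"budget", "spicy"})),
    Review("Bistro Nord", 2, frozenset({"budget"})),
    Review("Bistro Nord", 4, frozenset({"fine-dining"})),
]
print(rank_places(reviews, my_profile=frozenset({"budget", "vegetarian"})))
```

Here the fine-dining review of Bistro Nord is ignored because that reviewer shares no tags with our budget-minded traveler, so the taco stand wins.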
The simplest form of social navigation can be thought of as a kind of virtual graffiti. Say you're standing on a street corner in Florence admiring the magnificent cupola on the Santa Maria del Fiore Cathedral. With a social navigation application on your location-sensing wireless device, you can read other people's comments about the cathedral, or enter a short review of your own for the next person who comes along.
The Swedish Institute of Computer Science, SICS, an independent non-profit research organization based in Kista, has developed just such a program called GeoNotes, which allows users to "mass-annotate" geographical locations with virtual sticky notes. The application allows you to choose whose comments you want to see by creating a "friends" list.
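A toy model of such "virtual graffiti" might look like the sketch below. The class and method names are my own illustration, not the actual SICS software: notes are pinned to coordinates, and a nearby-notes query can be filtered by a friends list.

```python
# Toy GeoNotes-style annotation board; names and structure are
# invented for illustration, not taken from the real GeoNotes.
import math

class NoteBoard:
    def __init__(self):
        self.notes = []  # (lat, lon, author, text)

    def post(self, lat, lon, author, text):
        self.notes.append((lat, lon, author, text))

    def nearby(self, lat, lon, radius_m=100, friends=None):
        """Return notes within radius_m metres, optionally only from friends."""
        out = []
        for nlat, nlon, author, text in self.notes:
            # Equirectangular approximation; fine at city scale.
            dx = math.radians(nlon - lon) * math.cos(math.radians(lat))
            dy = math.radians(nlat - lat)
            dist = 6371000 * math.hypot(dx, dy)
            if dist <= radius_m and (friends is None or author in friends):
                out.append((author, text))
        return out

board = NoteBoard()
board.post(43.7731, 11.2560, "ana", "The cupola climb is worth the queue.")
board.post(43.7696, 11.2558, "bob", "Gelato two blocks south is superb.")
print(board.nearby(43.7731, 11.2560, friends={"ana"}))
```

Standing at the cathedral, you'd see only ana's note: bob's is a few hundred metres away, outside the default 100-metre radius.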
Another, more lighthearted, example of social navigation is HaikuHaiku, a service that allows mobile device users to read and write three-line descriptions about specific geographical locations around the world. People calling up HaikuHaiku from a bench in London's Hyde Park might read something like: "Roasted chestnut cart / Northwest of Speaker's Corner / Tastiest ones yet."
As of now, users must enter their location manually, but the wireless application development company that created HaikuHaiku, San Francisco-based Neoku, has plans to make the service location-based.
On a grander scale, researchers at the University of Siena in Italy are working on a project funded by the European Commission called HIPS (Hyper-Interaction within Physical Space) to develop a handheld electronic tour guide that will detect visitors' locations and provide them with information about attractions as they walk around a city.
Before social navigation services can really take off, we'll need easy-to-use wireless wearable computing equipment. Today, wearable computing systems are too bulky and cumbersome, making users look like equipment-laden cyborgs.
The heavy, clunky headmounted displays used in virtual reality research in the early '90s were a literal pain in the neck. Since then, advances in display technology have made the helmet-like headmount an antique.
Kaiser Electro-Optics, based in Carlsbad, California, offers a number of full-color, high-resolution headmounted displays that aren't much bigger than an ordinary pair of carpenter's goggles. And tiny microdisplays, made by companies like Boulder, Colorado-based Zight, promise heads-up displays just a little larger than an ordinary pair of eyeglasses.
Besides high-resolution heads-up glasses, a useful augmented reality system will have to establish where users are located so it can provide the appropriate information. Typically, GPS location sensing has an accuracy of about 50 feet. There are ways to improve resolution to about 10 feet, but that requires the use of a second, stationary GPS device to correct the positioning errors.
In metropolitan areas, other technologies could be used, such as infrared and radio triangulation, to pinpoint a user's position.
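The differential-GPS trick with a stationary receiver is simple arithmetic: the base station sits at a known spot, so whatever error GPS reports there at a given moment can be subtracted from a nearby mobile receiver's fix. A minimal sketch, with all positions on a hypothetical local grid in metres:

```python
# Sketch of differential GPS correction: a stationary receiver at a
# known location measures the moment's GPS error, and a mobile
# receiver nearby subtracts that error from its own fix.
def dgps_correct(mobile_fix, base_fix, base_truth):
    """All arguments are (x, y) positions in metres on a local grid."""
    err_x = base_fix[0] - base_truth[0]   # error observed at the base
    err_y = base_fix[1] - base_truth[1]
    return (mobile_fix[0] - err_x, mobile_fix[1] - err_y)

# The base station knows it sits at (0, 0) but GPS reads (4, -3):
corrected = dgps_correct(mobile_fix=(104, 47), base_fix=(4, -3),
                         base_truth=(0, 0))
print(corrected)  # (100, 50)
```

This works because atmospheric and satellite errors are nearly the same for two receivers a short distance apart, so the base station's measured error is a good estimate of the mobile user's.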
The ultimate wireless device
Right now, we're limited in what we can do with augmented reality. But as hardware improves, a whole new kind of mobility will open up.
We can get a glimpse of what's in store by looking at the Avatar-Pro, which looks like a cross between a miniature snowplow and Star Wars' R2D2. This bright red telepresence robot represents the ultimate in wireless mobility. Think of it as a videophone on wheels, with plenty of extras.
Users can control the Avatar-Pro from any Web-enabled computer anywhere in the world, provided they have the correct password. Mounted on a movable boom, the robot's video camera can be panned, tilted, or zoomed. Its speaker and microphone allow you to use the robot to have conversations with people. Its six-wheeled platform is designed to make tight turns while navigating offices and hallways. The gripping hand and other features are controlled over an IEEE 802.11 wireless LAN (Wi-Fi) link.
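Remote control of a robot like this boils down to sending small, validated commands over the network. The sketch below is entirely hypothetical, not iRobot's actual protocol: it checks pan, tilt, and zoom settings against plausible limits and encodes them for transmission.

```python
# Hypothetical camera-command encoder for a telepresence robot; the
# message format and limits are invented for illustration.
import json

LIMITS = {"pan": (-170, 170), "tilt": (-30, 90), "zoom": (1.0, 10.0)}

def make_command(**settings):
    """Validate pan/tilt/zoom settings and encode them as JSON,
    ready to send over the robot's wireless link."""
    for key, value in settings.items():
        lo, hi = LIMITS[key]
        if not lo <= value <= hi:
            raise ValueError(f"{key}={value} outside {lo}..{hi}")
    return json.dumps({"camera": settings})

print(make_command(pan=45, tilt=10, zoom=2.0))
```

Validating on the operator's side keeps an out-of-range command from ever reaching the hardware, which matters when the operator is half a world away.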
The Avatar-Pro is being made by the iRobot Corporation, based in Massachusetts and founded by three roboticists from MIT. While the Avatar-Pro was built to work as a kind of remote-controlled artificial body for telecommuters, specialized telepresence robots will eventually be used in situations that are normally hazardous to humans - places like radioactive sites, bomb locations, and construction sites under the sea and in outer space.
Such robots could also be used in a variety of recreational and personal situations. You could virtually stroll through the galleries of the Louvre, attend a baseball game, or even go shopping at a market.
And all the while, you would be controlling the robot from your home computer - or better yet, your mobile device.
Mark Frauenfelder is a writer and illustrator from Los Angeles.