Welcome to the Metaverse
By Mark Frauenfelder, June 12, 2002
Suddenly, my image is replaced by that of a slightly manga-esque looking young Japanese woman with a sensible haircut and an expensive business suit...
I just sat down in front of
the computer in Eyematic's laboratory in Los Angeles, California
and I'm not quite sure what to expect. There is a little webcam
mounted above the PC, pointed at my head. My live video image is being
displayed in the PC's window. In the lab with me are Orang
Dialameh, Eyematic's CEO, Hartmut Neven the chief technology
officer, and chief scientist Jaron Lanier, the famous virtual reality
pioneer.
Dialameh taps a couple of buttons on the keyboard and
suddenly a pattern of dots, connected by lines, is floating over the
image of my head. It looks as if the kabalistic tree of life has been
tattooed on my face. Actually, these dots are positioned over certain
facial features, such as my nose, eyes, eyebrows, and the middle and
corners of my lips. If I turn my head or move it from side to side, the
dots follow me like a swarm of flies.
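The tracking in this demo can be sketched in miniature: a fixed set of labeled facial landmarks is located once, then updated every frame so the overlay follows the head. The landmark names and the simple rigid-translation model below are illustrative assumptions, not Eyematic's actual tracker.

```python
# Illustrative sketch of feature-point tracking: a fixed set of labeled
# facial landmarks is updated each frame so the dot overlay "follows"
# the head. Names and the translation-only motion model are assumptions
# for illustration, not Eyematic's real algorithm.

FEATURES = ["nose", "left_eye", "right_eye", "left_brow", "right_brow",
            "mouth_center", "mouth_left", "mouth_right"]

def init_points():
    """Hypothetical initial landmark positions (x, y) in pixels."""
    return {
        "nose": (160, 140), "left_eye": (130, 110), "right_eye": (190, 110),
        "left_brow": (128, 95), "right_brow": (192, 95),
        "mouth_center": (160, 180), "mouth_left": (135, 178),
        "mouth_right": (185, 178),
    }

def track(points, dx, dy):
    """Shift every landmark by the estimated head motion (dx, dy)."""
    return {name: (x + dx, y + dy) for name, (x, y) in points.items()}

if __name__ == "__main__":
    pts = init_points()
    pts = track(pts, 12, -4)   # head moves right and slightly up
    print(pts["nose"])         # (172, 136)
```

A real tracker also handles rotation, scale, and per-feature deformation (an eyebrow raising moves independently of the nose), but the principle is the same: a small set of points, updated per frame.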
Dialameh clicks a
couple of keys again. Suddenly, my image is replaced by that of a
slightly manga-esque looking young Japanese woman with a sensible
haircut and an expensive business suit. I start to laugh. And like some
kind of magic mirror, the Japanese woman laughs too. I raise an eyebrow.
So does she. I open my mouth. I wink. I nod. I roll my eyes. She copies
my every move. She is a kind of virtual marionette, but instead of being
controlled by strings, she is responding to the changes in my face.
She's an avatar, not one in the sense of an incarnated Hindu deity,
but in the way science fiction writer Neal Stephenson described them, in
his now classic cyberpunk novel Snow Crash. In
Snow Crash, Stephenson envisioned a high-fidelity
virtual world dubbed the Metaverse, which was like a fully immersive 3D
version of the World Wide Web. People could buy off-the-shelf animated
characters, or avatars, that they could use to interact with other
people visiting the Metaverse. As Stephenson explains: "Your
avatar can look any way you want it to, up to the limitations of your
equipment. If you're ugly, you can make your avatar beautiful. If
you've just gotten out of bed, your avatar can still be wearing
beautiful clothes and professionally applied makeup."
In the novel, the most popular avatar for women was called the
"Brandy," and the men favored the "Clint" model.
Here's how Stephenson described the avatars: "Brandy has a
limited repertoire of facial expressions: cute and pouty; cute and
sultry; perky and interested; smiling and receptive; cute and spacy.
Her eyelashes are half an inch long, and the software is so cheap that
they are rendered as solid ebony chips. When a Brandy flutters her
eyelashes, you can almost feel the breeze." And Clint is "the
male counterpart of Brandy. He is craggy and handsome and has an
extremely limited range of facial expressions."
Stephenson pretty much nailed the future. Here at Eyematic,
the choice in avatars is even better than the ones available in the
Metaverse. To demonstrate, Dialameh grabs the mouse and moves it to a
pull-down menu, selecting a new character. This time, I'm a caped
superhero with a jaw the size of a toaster. Then, I become a fat
cartoon dragon. This is fun! It becomes even more fun when an Eyematic
employee in another part of the building has a videoconference with me,
using avatars instead of our real video images. He is some kind of
purple monster, and I am Humphrey Bogart, or at least someone who looks
just like him. About all I can tell him is, "This is really neat."
An even neater thing about Eyematic's avatar technology is the way it
can be used in low bandwidth situations, such as over mobile phone
connections. That's because Eyematic's software isn't
actually sending fully rendered 3D animation over the wires - that
would bring most wireless networks to their knees. Instead, the
real-time facial tracking software analyzes the movements of the
user's face, extracts the relevant information, then sends the
instructions to move the corresponding features on the avatar, which has
been previously downloaded onto the device. It's like pulling the
marionette strings over a distance instead of sending the marionette
itself. The company claims that its "lightweight" data format
requires less than 1/100th the bandwidth of standard video.
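The "marionette strings" idea can be sketched concretely: each frame, the sender transmits only a tiny packet of animation parameters, and the avatar already stored on the device does the rendering. The packet layout and the numbers below are illustrative assumptions, used only to show why the data stays small compared with video.

```python
import struct

# Hypothetical per-frame animation packet: instead of rendered video,
# send only the parameters that "pull the strings" of an avatar already
# downloaded to the device. The layout is an illustrative assumption:
# a frame id (uint16) plus 16 facial parameters (e.g. jaw open, brow
# raise), each quantized to one signed byte.

NUM_PARAMS = 16

def pack_frame(frame_id, params):
    """Serialize one frame of facial-animation parameters."""
    assert len(params) == NUM_PARAMS
    return struct.pack("<H16b", frame_id, *params)

def unpack_frame(data):
    """Recover the frame id and parameter list on the receiving end."""
    frame_id, *params = struct.unpack("<H16b", data)
    return frame_id, params

if __name__ == "__main__":
    packet = pack_frame(42, [0] * 15 + [100])   # e.g. mouth fully open
    print(len(packet))   # 18 bytes per frame
    # A single uncompressed 176x144 QCIF video frame in YUV 4:2:0 is
    # roughly 38,000 bytes, so a parameter stream like this one is
    # easily under 1/100th of the raw video bandwidth.
```

Even allowing for headers and a higher parameter count, the asymmetry holds: the marionette strings cost almost nothing next to shipping the marionette itself.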
Eyematic hopes to get wireless operators interested in
using this technology, and is offering a package of applications called
"Eyematic Synthetic Video" that provides an end-to-end
suite of animation services for low bandwidth applications. The
technology is a combination of authoring applications, services on the
backend that take care of content delivery, and the player software that
actually runs on the phones themselves.
So far, Eyematic has
managed to grab the attention of several high-profile telecom-related
investment firms, which ponied up $16 million in funding in January.
Investors include Texas Instruments, Deutsche Telekom's T-Telematik
Venture Holding GmbH, and Japanese investment firm INTEC, which is
developing rich media delivery for Japan's mobile Internet content.
In March, Eyematic and Texas Instruments announced that
they were working together to add Eyematic's software capabilities
to 2.5G and 3G cellular phones and mobile handhelds that use TI's
high-performance, ultra-low power OMAP application processors and
wireless modems. Steve Hartford, Eyematic's VP of marketing, says
the OMAP processors can process multimedia content at close to desktop
quality without sucking phone batteries dry in a matter of minutes.
Eyematic has ported its multimedia engine over to 22
devices. In some cases, the software is embedded in the phones'
firmware, and in other cases, it's downloaded on demand into the
phone's memory. Eyematic's technology is already running in
commercial applications in Japan and should be up and running with a
major wireless carrier by the end of June.
The current applications
include a horoscope avatar that reads your fortune every day, a 3D
multimedia messaging application that lets you select a character to
announce your messages, and a 3D adventure game. They've also been
running tests on a perky avatar assistant that works in conjunction with
your scheduling program to appear on your phone's display and
cheerfully announce upcoming appointments.
In the coming years,
Eyematic will work closely with device manufacturers, because
they're interested in the potential for avatar videoconferencing
using video cameras mounted in telephones. For now, however, Eyematic's
avatars can be triggered by speech alone, or even by plain old text that
runs through a text-to-speech generator.
I'm a Veeper!
About 400 miles north of
Eyematic's Los Angeles headquarters, I find myself in the offices
of another company that's creating avatars for a wide range of
purposes. Pulse Entertainment, a privately held company with recent
investments from Softbank, AOL/TimeWarner, Autodesk and El Dorado
Ventures, is set in a funky brick building on the upscale side of Market
Street in San Francisco. I'm here on a sunny Friday morning, posing
against a whiteboard while Pulse's laid back and genial CEO, Fred
Angelopoulos, snaps my picture with a digital camera.
"You're going to make your own Veeper," he announces, steering
me to a laptop. (Veeper, as I find out, is what the software engineers
were using to designate the VPs, or "virtual persons" they
were developing.) After transferring my photograph and loading up the
program, I use the mouse to designate some key "data points,"
such as my head outline, eyes, and mouth. I follow a few more
step-by-step instructions, and in the space of about one minute,
I'm staring at a reproduction of myself. The image bobs its head
slightly, and blinks every once in a while. It feels alive. I can press
different buttons and make it appear happy or sad. Next, I say a few
words into a microphone and hit the return key. The avatar repeats them,
moving its lips in sync with the words. Freaky!
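Lip sync of the kind the Veeper demo shows is commonly done by mapping each phoneme in the speech to one of a handful of mouth shapes ("visemes") and playing those keyframes alongside the audio. The mapping table and phoneme labels below are a simplified, hypothetical sketch of that general technique, not Pulse's actual implementation.

```python
# Simplified phoneme-to-viseme lip sync: each phoneme in the spoken
# audio maps to one of a few mouth shapes, yielding a keyframe list the
# renderer plays in sync with playback. The table and phoneme labels
# are hypothetical, not Pulse's implementation.

VISEMES = {
    "AA": "open",      "IY": "smile",    "UW": "round",
    "M":  "closed",    "B":  "closed",   "P":  "closed",
    "F":  "teeth-lip", "V":  "teeth-lip",
}

def lip_sync(phonemes, default="rest"):
    """Return (phoneme, mouth_shape) keyframes for a phoneme sequence."""
    return [(p, VISEMES.get(p, default)) for p in phonemes]

if __name__ == "__main__":
    # "boom" is roughly the phoneme sequence B, UW, M
    print(lip_sync(["B", "UW", "M"]))
    # [('B', 'closed'), ('UW', 'round'), ('M', 'closed')]
```

Driving this from typed text rather than a recording just means running a text-to-speech engine first, which is why the same avatar can speak in any voice the menu offers.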
Then, just for
fun, Angelopoulos has me type some words into a text box and asks me to
select one of the different text-to-speech generators available from a
pull-down menu, such as a male Scotsman, a valley girl, and an
enthusiastic sports announcer. Suddenly, a brogue is coming out of my...
er... my avatar's lips.
Like Eyematic's, Pulse's
technology is optimized for low bandwidth conditions. Even though it
uses very little bandwidth, Angelopoulos says "people think
it's streaming video." In fact, that's one of the things
Pulse has trouble with when they demo the technology. People think
they've seen this before, but they are mistaken. Very little data
is going over the wires when a Veeper is in action.
Like Eyematic, Pulse is working with Texas Instruments to develop Veepers
for multimedia messaging, enterprise, entertainment and instant
messaging wireless applications on 2.5G and 3G mobile devices that use
TI's OMAP application processors.
Pulse showed me a couple
of demos on mobile devices. One was a color 3D avatar running on a
Pocket PC, which looked as good as the desktop demos I'd seen
earlier. But even more impressive was the simple 2-bit graphics playing
on the monochrome screen of an ordinary mobile phone. The character had
a sense of life to it that transcended the clunky pixels on the screen.
And it will only get better when color smartphones hit the market.
After seeing the demos at Eyematic and Pulse, I left
impressed, and convinced that 3D characters on wireless devices,
enhancing e-mail, voicemail, games, and other services, will definitely
be part of the future of the mobile Internet.
Want to play with my
Veeper? You can visit my veeper page at http://demo.pulse3d.com/wired_markf/.
(Just don't make me say anything that could get me in trouble.)
Frauenfelder is a writer and illustrator from Los Angeles.