Privacy is a Balancing Act
By Mark Frauenfelder, Wed Aug 04 07:00:00 GMT 2004

How much of your privacy are you willing to spend in return for location-based services? Not as much as you may think. One Berkeley researcher has a real bargain for you.


Every technology has tradeoffs. Cars made it possible for individuals to travel long distances on their own schedule, but they also gave us smog and traffic jams. Television delivers entertainment into our homes, but it also creates millions of unhealthy couch potatoes. Mobile phones are incredibly useful communication tools, but they also encourage an always-on culture in which we are expected to be reachable anywhere and everywhere. Most of us put up with these tradeoffs, however, because we feel there's a real benefit to be gained for the price we pay.

But what about location-aware services, ubiquitous computing and mobile social software? What's the price we will have to pay -- in terms of loss of privacy -- to reap the benefits of these services (such as recommendations for restaurants or locating people with shared interests)? Are there ways to lower the costs without losing the benefits?

To get answers to these questions, I spoke to John Canny, a professor in the Computer Science Division at the University of California, Berkeley. For the last few years, Canny and his colleagues have been studying the issues of privacy as they relate to ubiquitous computing networks, and have been developing algorithms and other methods to tip the cost-benefit balance in users' favor.

Giving as Little as You Can to Get as Much as You Can

Canny looks at privacy as an equation with an individual on one side and another actor on the other. Both are looking for information and have some incentive to use it. "Most of our work has been on that balance -- providing the other actor with what they want and at the same time protecting -- as much as possible -- everything else about the user. In other words it's sort of a 'minimum disclosure' approach."

The primary goal of Canny's privacy algorithms is to disclose as little personal information about you, to as few people as possible, in order to get whatever it is you want. For example, it's very wasteful to continuously broadcast your location to everyone on a network. That kind of system makes it easy for would-be stalkers to locate you. A better system, says Canny, discloses your location only to those people nearby. This miserly approach to parceling out your personal data doesn't eliminate the possibility that stalkers will find you, but it certainly makes their job a lot tougher.
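To make that idea concrete, here is a rough Python sketch of proximity-limited disclosure -- my own illustration, not Canny's code, with made-up function names and an arbitrary 500-meter radius. The device answers a location request only if the requester is already nearby; everyone else learns nothing at all.

    import math

    def distance_m(lat1, lon1, lat2, lon2):
        # Haversine distance between two latitude/longitude points, in meters.
        r = 6371000  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def share_location(my_pos, requester_pos, radius_m=500):
        # Disclose a position only to requesters already within radius_m;
        # anyone farther away gets nothing, not even a coarse location.
        if distance_m(*my_pos, *requester_pos) <= radius_m:
            return my_pos
        return None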

The algorithms that Canny and his team have developed are complex, and the papers describing them are loaded with impressive-sounding terms like "orthonormal matrix," "cryptographic homomorphism" and "k-dimensional linear space," but their ultimate purpose is to "basically hide almost everything that's unique or distinguishing about user data and at the same time compute the things that you typically want in a community that's sharing information about goods, locations, etc.," he says. "If you're looking for people with similar interests or are in a similar location, you can discover that without disclosing very much about your personal information."
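Canny's actual protocols rely on homomorphic encryption and some heavy linear algebra, but the underlying intuition can be shown with a much simpler stand-in: additive secret sharing. In this toy Python sketch (an illustration of the general technique, not Canny's algorithm), each user splits a rating into random-looking shares, different aggregators each collect one share per user, and only the combined totals reveal anything -- namely the community sum, never any individual's number.

    import random

    MODULUS = 2**31

    def split_into_shares(value, n_shares):
        # Split one integer into n random shares that sum to it modulo MODULUS.
        shares = [random.randrange(MODULUS) for _ in range(n_shares - 1)]
        shares.append((value - sum(shares)) % MODULUS)
        return shares

    def community_sum(user_values, n_shares=3):
        # Each aggregator sees only one random-looking share per user,
        # yet adding the aggregators' totals together yields the group sum.
        totals = [0] * n_shares
        for value in user_values:
            for i, share in enumerate(split_into_shares(value, n_shares)):
                totals[i] = (totals[i] + share) % MODULUS
        return sum(totals) % MODULUS

    print(community_sum([4, 5, 2]))  # prints 11: the total, with no individual rating exposed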

Keeping the Data Off the Central Server

One important aspect of Canny's privacy system is where the data is stored. Most companies keep information about their customers on their own servers. Besides being a tempting target for abuse by the company, this arrangement puts your privacy at risk should the company go bankrupt. If that were to happen, another company could acquire the user database, and its privacy policy might be terrible. Canny's system sidesteps this issue by storing the data on users' devices. One method stores the data in little pieces across many different users' devices, making it next to impossible for a hacker (or an unscrupulous service provider) to collect it and then crack the encryption.
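Here's one way to picture the "little pieces" idea -- a toy Python sketch of XOR-based secret splitting, my illustration rather than Canny's scheme. The data is split into n pieces that could live on n different devices; any subset short of all n is indistinguishable from random noise, so a thief who grabs a few devices learns nothing.

    import os
    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def split_secret(data, n_pieces):
        # n-1 pieces are pure random noise; the last piece is the data XORed with all of them.
        pieces = [os.urandom(len(data)) for _ in range(n_pieces - 1)]
        pieces.append(reduce(xor_bytes, pieces, data))
        return pieces

    def reconstruct(pieces):
        # Only the complete set of pieces recovers the original data.
        return reduce(xor_bytes, pieces)

    pieces = split_secret(b"home: 37.8716 N, 122.2727 W", 4)  # spread across four devices
    print(reconstruct(pieces))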

Another method takes advantage of "the natural incentives that occur in peer communities, as manifest in things like Napster and Gnutella," says Canny. "It does seem within a community you have a few altruistic people who will, for whatever reason, help the community by providing the service, and from a privacy perspective you can do a lot if you can identify some users who are willing to leave a machine online that provides some privacy protection. The rest of the people in the community can use that machine. They don't have to trust the owner of the machine because the algorithm is set up so that the owner of that machine can't get access to that machine anyway, but if they provide this service, they can protect their peers' information from the service provider."
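The trust model behind that altruistic machine is easy to demonstrate. In this sketch (which assumes the third-party Python "cryptography" package, and is again my illustration rather than Canny's protocol), the community node stores and serves only opaque ciphertext; the decryption keys stay on the peers' own devices, so the machine's owner can provide the service without being able to read anything that passes through it.

    from cryptography.fernet import Fernet  # pip install cryptography

    class CommunityNode:
        # An altruistic peer's always-on machine: it stores and serves
        # encrypted blobs it has no keys to decrypt.
        def __init__(self):
            self._store = {}

        def put(self, user_id, ciphertext):
            self._store[user_id] = ciphertext

        def get(self, user_id):
            return self._store[user_id]

    key = Fernet.generate_key()   # stays on the user's own device
    f = Fernet(key)

    node = CommunityNode()
    node.put("alice", f.encrypt(b"favorite cafes: ..."))
    print(f.decrypt(node.get("alice")))  # only Alice's key recovers the plaintext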

How do you convince service providers to accept such a system? Canny says you first have to convince communities -- or, more precisely, the active leaders of communities -- to build these kinds of systems into their mobile social networks early on. Then these groups can approach service providers and use their collective clout to demand that services be delivered in a way that works with their privacy system.

Canny is currently working with some student groups at UC Berkeley, and will conduct a test of the system later this year. I'll let you know how it goes.