Measuring the privacy cost of "free" services.

There was an interesting pair of pieces on this Sunday's "On The Media." The first was "The Cost of Privacy," a discussion of Facebook's new privacy settings, which presumably make it easier for users to clamp down on what's shared. A few points that resonated with us:

  1. Privacy is a commodity we all trade for things we want (e.g. celebrity, discounts, free online services).
  2. Going down the path of having us all set privacy controls everywhere we go on the internet is impractical and unsustainable.
  3. If no one were willing to share their data, most of the services we love to get for free would disappear. (Randall Rothenberg)
  4. The services collecting and using data don't really care about you as an individual; they only care about trends and aggregates. (Dr. Paul H. Rubin)

We wish one of the interviewees had gone even further and made the point that since we all make decisions every day to trade a little bit of privacy in exchange for services, privacy policies really need to be built around notions of buying and paying, where what you "buy" are services and what you pay with are "units" of privacy risk (as in risk of exposure).[pullquote]

  1. Here's what you get in exchange for letting us collect data about you.
  2. Here's the privacy cost of what you're getting (in meaningful and quantifiable terms).

[/pullquote]

(And no, we don't believe that deleting data after six months and/or listing out all the ways your data will be used is an acceptable proxy for calculating "privacy cost." Besides, such policies inevitably limit the utility of the data and stifle innovation to boot.)

Gaining clarity around privacy cost is exactly where we're headed with the datatrust. What's going to make our privacy policy stand out is not that our privacy "guarantee" will be 100% ironclad.[pullquote]We can't guarantee total anonymity. No one can. Instead, what we're offering is an actual way to "quantify" privacy risk, so that we can track and measure the cost of each use of your data and "guarantee" that we will never use more than the amount you agreed to.[/pullquote]This, in turn, is what will allow us to make measurable guarantees about the "maximum amount of privacy risk" you will be exposed to by having your data in the datatrust.
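To make that concrete, here is a minimal sketch of what a "privacy budget" ledger could look like, assuming a differential-privacy-style model where each use of the data spends a quantified amount of risk (often called epsilon). The class and method names are ours, for illustration only; this is not the datatrust's actual design.

```python
# A hypothetical per-user privacy-budget ledger: every use of the data
# is charged against a maximum the user agreed to up front, and any use
# that would exceed that maximum is refused outright.

class PrivacyBudget:
    def __init__(self, max_risk: float):
        self.max_risk = max_risk   # the "maximum amount of privacy risk" agreed to
        self.spent = 0.0           # running total of risk consumed so far

    def charge(self, cost: float) -> bool:
        """Record one use of the data; refuse it if it would exceed the budget."""
        if self.spent + cost > self.max_risk:
            return False           # the guarantee: never use more than agreed
        self.spent += cost
        return True

    def remaining(self) -> float:
        return self.max_risk - self.spent


# Usage: a user agrees to a total risk of 1.0; each query has a known cost.
budget = PrivacyBudget(max_risk=1.0)
print(budget.charge(0.3))   # True  -- query runs; 0.7 of the budget remains
print(budget.charge(0.8))   # False -- refused; it would exceed the agreed maximum
```

The point of the sketch is that "privacy cost" becomes an auditable number rather than a promise buried in a policy document: the ledger, not the fine print, enforces the limit.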


The second segment covered privacy rights and issues of due process vis-à-vis the government and data-mining.

Kevin Bankston from the EFF gave a good run-down of how the ECPA is laughably ill-equipped to protect individuals using modern-day online services from unprincipled government intrusions.

One point that wasn't made: unlike the search and seizure of physical property, the privacy impact of data-mining is easily several orders of magnitude greater. Like most things in the digital realm, it's incredibly easy to sift through hundreds of thousands of user accounts, whereas it would be impossibly onerous to search 100,000 homes or read 100,000 paper files.

This is why we disagree with the idea that we should apply old standards created for a physical world to the new realities of the digital one.[pullquote]Instead, we need to look at actual harm and define new standards around limiting the privacy impact of investigative data-mining.[/pullquote]Again, this would require a quantitative approach to measuring privacy risk.

(Just to be clear, I'm not suggesting that we limit the size of the datasets being mined; that would defeat the purpose of data-mining. Rather, I'm talking about process guidelines for how to go about doing low-(privacy)-impact data-mining. More to come on this topic.)
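As one illustration of what low-impact mining might look like, here is a sketch that borrows the Laplace mechanism from differential privacy: an investigator gets an aggregate answer over the full dataset, with calibrated noise added so that no single account materially changes the result, and the noise parameter epsilon doubles as the measurable privacy cost of the query. The function names and data are made up for illustration; we're not claiming this is the final design.

```python
# Hypothetical "low-(privacy)-impact" aggregate query: count matching
# records across all accounts, then add Laplace noise scaled to 1/epsilon
# so the answer reveals the trend without exposing any one individual.

import random

def noisy_count(records, predicate, epsilon: float) -> float:
    """Count matching records, plus Laplace noise with scale 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two iid exponentials is Laplace-distributed;
    # scale 1/epsilon is calibrated to a count query, whose sensitivity is 1.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Usage: mine 100,000 accounts in aggregate without reading any single one.
accounts = [{"flagged": random.random() < 0.01} for _ in range(100_000)]
estimate = noisy_count(accounts, lambda a: a["flagged"], epsilon=0.1)
print(f"Estimated flagged accounts: {estimate:.0f} (privacy cost: epsilon = 0.1)")
```

Note that nothing here shrinks the dataset; the process guideline is about how the answer is computed and how much risk each query spends, not about how much data gets mined.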
