The downside to mass data storage in the cloud

The ability to access Dropcam video footage in the cloud is indicative of a broader trend in cloud computing that is eating away at privacy.


The cloud can be an enormously cost-effective way to add storage and computing muscle, and also, sadly, a way to pile misery on those seeking privacy—or who just want to be left alone. It's rare to see organizations stand up and shout, "We'll never give your data to anyone!" or "All stored data, except opt-in assets you ask us to keep, expires in 90 days!" or "Yes, we can determine with absolute certainty that your data has been erased to protect you and your identity."

The cloud, in some warrens, has become a storage ground for the various factories of "big data," whose goal is generally to sell things to consumers and businesses. Correlating facts is huge. Ask Target, whose knack for detecting pregnancies from purchase histories helped it capture a nicely profitable slice of the pregnancy and new-mother market. Smart, you say. But there is a downside.

Striking while the iron is hot is a great idea. In practice, it means harvesting information about your searches so it can be correlated into ads at the next site you visit. Facebook and Amazon are famous for this, and it's a huge part of Google's business model. Google's purchase of Nest last year, whose thermostats gleefully rat out your utility-use patterns, also meant the acquisition of Dropcam.

As ace reporter Sharon Fisher reported at TechTarget, Dropcam users let their cameras send footage into Dropcam's cloud, where it is archived seemingly indefinitely. That delights users, police bearing warrants, and security monitors, who can view the surveillance results at will from any reasonable IP address. Some users reportedly monitor Airbnb rentals (shouldn't they disclose this?), and others apparently forget there's a camera on and do, well, silly things they may not want captured on digital film.

Google is storing this sort of info, Amazon will be listening with Echo, and who knows what Siri knows but isn't saying. That amounts to a sizable heap of very personal information. It feels as though these were robots whose knowledge base lived inside the physical unit we see on premises, but it's not—it's in the cloud, where it is not only hack-able but perhaps being used to analyze us, sell us something, or, maybe worse, refuse to sell us something or be used against us in a court of law.

Is this data tagged so someone knows to kill it? Is there a metadata tag saying this file or this data block expires on April 19, 2017? Often it's simply tied to an account. Does this data get reused somehow? Are video and audio conversations scrubbed for keywords? Much depends on the user agreement, and what happens if you're, say, a medical provider amassing large quantities of personal medical data? Can that be used? At this point an attorney would say, "Stop right here, and let's disambiguate these questions." Clear as mud.
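To make the expiry-tag question concrete, here is a minimal, purely hypothetical sketch of what such tagging could look like. Nothing here reflects any vendor's actual API; the store, the `expires_on` field, and the object keys are all invented for illustration.

```python
from datetime import date

# Hypothetical object store: each stored blob carries an explicit
# expiry tag, so a purge job can answer "is this data tagged so
# someone knows to kill it?" without guessing.
store = {
    "cam-0042/2017-04-12.mp4": {"account": "user@example.com",
                                "expires_on": date(2017, 4, 19)},
    "cam-0042/2017-04-13.mp4": {"account": "user@example.com",
                                "expires_on": date(2099, 1, 1)},  # opt-in keeper
}

def purge_expired(store, today):
    """Delete every object whose expiry tag has passed; return what was removed."""
    expired = [key for key, meta in store.items()
               if meta["expires_on"] <= today]
    for key in expired:
        del store[key]  # a real system would need verified, auditable erasure
    return expired

removed = purge_expired(store, date(2017, 4, 19))
print(removed)  # the April 12 clip is purged; the opt-in asset remains
```

Of course, a dictionary `del` is nothing like proving the bits are gone from every replica and backup, which is exactly the auditing gap the questions above are poking at.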

The average civilian has no "bill of rights" common to these online personal-information services, whose data accumulates in cloudy locations. Murky might be a better word. You want to trust data storage providers – one wants to believe that data stores are somehow bulletproof – but with the huge, emblematic recent breaches of retailers, insurance providers, and university alumni databases, that's not so easy. In reality, some have already been hacked and we just haven't discovered it yet, because no one's offering the information on dark markets…at least not right now.

Is there a way for the app industry to reach a common agreement about what can be shared, what a reasonable life expectancy for personal data is, how and to what extent personal data can actually be anonymized, and how data destruction can be audited to even a private detective's satisfaction? I wish there were answers.


Copyright © 2015 IDG Communications, Inc.