In the Fifth Annual State of the Network Global Study, published by Network Instruments in March 2012, 74% of survey respondents indicated that their top concern about cloud computing is "security for corporate data." This result is consistent with many other surveys, which repeatedly identify data security concerns as a leading inhibitor to the adoption of cloud computing.
Actually, I think "security" is a catch-all phrase that also encompasses concerns about data residency (i.e., where is the data?) and data privacy (i.e., who can see our personally identifiable information, or PII?). Many organizations are fearful of or prohibited from placing data in the cloud due to restrictions on access to data or compliance with government or industry regulations.
For example, European data protection laws prohibit personal data that can be linked to a specific person from moving outside of European Union (EU) or even specific country borders. Such laws can prohibit organizations from storing or processing data in the cloud because infrastructure providers may store, process or back up data in multiple global locations. In the U.S., regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require maintaining security and privacy around personal health information (PHI). The complexity of doing so may dissuade healthcare providers from using cost-effective public cloud-based solutions that could slow the rising cost of healthcare.
One way to get around the issues of data security, residency and privacy is to obfuscate the data that goes into the cloud. Two common methods of obfuscation are encryption and tokenization. Using either of these approaches ensures that data remains undecipherable to prying eyes while the organization enjoys the benefits of cloud-based applications.
You are familiar with encryption, the process of using an algorithmic scheme to transform plain text information into unreadable ciphertext. A key is required to decrypt the ciphertext and return it to its original plain text format.
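To make the idea concrete, here is a deliberately simplified sketch of symmetric encryption: a toy one-time pad that XORs each byte of the plaintext with a random key. This is an illustration of the encrypt/decrypt round trip only, not a production scheme; real systems use vetted algorithms such as AES. The sample card number is hypothetical.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte (toy cipher)."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: applying the same key recovers the plaintext.
    return encrypt(ciphertext, key)

message = b"4111-1111-1111-1111"          # hypothetical card number
key = secrets.token_bytes(len(message))   # random key as long as the message

ciphertext = encrypt(message, key)
recovered = decrypt(ciphertext, key)
```

Without the key, the ciphertext is unreadable; with it, the original data is fully recoverable, which is exactly the property (and the key-management burden) that distinguishes encryption from tokenization.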
Tokenization is an increasingly popular approach to protecting sensitive data. It substitutes a token (or alias) for the real value. Unlike encryption, which uses a mathematical process to transform data, tokenization uses random characters to stand in for the actual data. There is no key that can decipher the token and turn it back into real data.
In the process of tokenization, the sensitive data is sent to a centralized and highly secure server called a "vault" where it is stored securely. At the same time, a random unique set of characters (the token) is generated and returned to your systems for use in place of the real data. The vault manager maintains a reference database that allows the token value to be exchanged for the real data when it is needed again. Meanwhile the token value, which has no meaning whatsoever to prying eyes or cyberthieves, can be used in various business applications as a reliable substitute for the real data.
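The tokenization flow described above can be sketched in a few lines. This is a minimal in-memory stand-in for the vault and its reference database; the `TokenVault` class, its methods, and the sample card number are all hypothetical names for illustration, and a real vault would be a hardened, centralized service.

```python
import secrets

class TokenVault:
    """Toy in-memory 'vault': keeps the reference database of token -> real value."""

    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        # Generate a random token with no mathematical relationship to the value.
        token = secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault's reference database can exchange a token for real data.
        return self._store[token]

vault = TokenVault()
card = "4111-1111-1111-1111"   # hypothetical sensitive value
token = vault.tokenize(card)

# Downstream applications carry only the meaningless token;
# the vault exchanges it for the real value when needed.
original = vault.detokenize(token)
```

Because the token is random rather than derived from the data, a stolen token reveals nothing; the sensitive value never leaves the vault.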