Will Healthcare Ever Take IT Security Seriously?

A recent threat intelligence study reports widespread security vulnerabilities in healthcare organizations, many of which went unnoticed for months. In December, a developer pulled unencrypted data from a 'certified' mobile health app in less than a minute. Why is it so hard for healthcare to get security right?

In the years since the HITECH Act, the number of reported healthcare data breaches has been on the rise - partly because organizations have been required to disclose breaches that, in the past, would have gone unreported and partly because healthcare IT security remains a challenge.

Recent research from Experian suggests that 2014 may be the worst year yet for healthcare data breaches, due in part to the vulnerability of the poorly assembled Healthcare.gov.


Hacks and other acts of thievery get the attention, but the root cause of most healthcare data breaches is carelessness: Lost or stolen hardware that no one bothered to encrypt, protected health information emailed or otherwise exposed on the Internet, paper records left on the subway and so on.

What will it take for healthcare to take data security seriously?

Healthcare IT So Insecure It's 'Alarming'

Part of the problem is that healthcare information security gets no respect; at most healthcare organizations, security programs are immature at best, thanks to scarce funding and expertise. As a result, the majority of reported data breaches are, in fact, avoidable events.

[Related: Healthcare IT Security Is Difficult, But Not Impossible]

The recent SANS Health Care Cyber Threat Report underscores this point all too well. Threat intelligence vendor Norse, through its global network of honeypots and sensors, discovered almost 50,000 unique malicious events between September 2012 and October 2013, according to the SANS Institute, which analyzed Norse's data and released its report on Feb. 19. The vast majority of affected institutions were healthcare providers (72 percent), followed by healthcare business associates (10 percent) and payers (6 percent).

SANS uses the words "alarming" and "troubling" often in its analysis. "The sheer volume of IPs detected in this targeted sample can be extrapolated to assume that there are, in fact, millions of compromised health care organizations, applications, devices and systems sending malicious packets from around the globe," writes Senior SANS Analyst and Healthcare Specialist Barbara Filkins.

[ Tips: How to Prevent (and Respond to) a Healthcare Data Breach ]

More than half of that malevolent traffic came from network-edge devices such as VPNs (a whopping 33 percent), firewalls (16 percent) and routers (7 percent), suggesting "that the security devices and applications themselves were either compromised ... or that these 'protection' systems are not detecting malicious traffic coming from the network endpoints inside the protected perimeter," Filkins writes, noting that many vulnerabilities went unnoticed for months. Connected endpoints such as radiology imaging software and digital video systems also accounted for 17 percent of malicious traffic.

Norse executives say this stems from a disconnect between regulatory compliance and actual security practice. Simply put, says Norse CEO Sam Glines, "There is no best practice applied." Many firewall devices with a public-facing interface, for example, still use the factory username and password. The same is true of many surveillance cameras and network-attached devices such as printers - the default passwords for which can be obtained not through hacking but through a simple Internet search. "It's just not good enough in today's market."
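The default-credential problem Glines describes is easy to audit for internally. The sketch below is a minimal illustration - the device models, hostnames and factory-credential table are all hypothetical, and a real audit would test authentication against live devices rather than a static inventory:

```python
# Hypothetical audit: flag devices still configured with factory credentials.
# The inventory and the default-credential table are illustrative only.

FACTORY_DEFAULTS = {
    "acme-firewall": ("admin", "admin"),
    "acme-router": ("admin", "password"),
    "camcorp-ipcam": ("root", "12345"),
}

def find_default_credentials(inventory):
    """Return hostnames of devices whose credentials match the vendor default."""
    flagged = []
    for device in inventory:
        default = FACTORY_DEFAULTS.get(device["model"])
        if default and (device["username"], device["password"]) == default:
            flagged.append(device["hostname"])
    return flagged

inventory = [
    {"hostname": "fw-edge-1", "model": "acme-firewall",
     "username": "admin", "password": "admin"},        # never changed
    {"hostname": "fw-edge-2", "model": "acme-firewall",
     "username": "admin", "password": "Xk9!vw2#"},     # rotated
    {"hostname": "cam-lobby", "model": "camcorp-ipcam",
     "username": "root", "password": "12345"},         # never changed
]

print(find_default_credentials(inventory))  # ['fw-edge-1', 'cam-lobby']
```

Even a check this crude would catch the "simple, powerful change" cases Glines points to.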

The United States would do well to heed the European Union's data breach laws, Glines says, as they take a "categorically different" approach and include specific language about what is and isn't compliant. This could include, for example, specific policies for managing anything connected to an IP address or basic password and access control management measures, he says.

Mobile Health Security Especially Suspect

In the absence of such regulation, though, patient privacy is a myth. Data is shared freely in a hospital setting, for starters, and clinical systems favor functionality over privacy, so much so that privacy and security are often an afterthought in the development lifecycle. This is especially true in the growing mobile health marketplace, which largely places innovation before security.

[Related: Healthcare.gov Still has Major Security Problems, Experts Say]

Harold Smith discovered this all too quickly in December. After Happtique, a mobile health application certification group, released its first list of approved applications, Smith, the CEO of software firm Monkton Health, decided to check out a couple of apps.

It wasn't pretty. He installed one on a jailbroken iPhone and, in less than a minute, pulled electronic protected health information (ePHI) from a plain text, unencrypted HTML5 file. He also found that this data - specifically, blood glucose levels - was being sent over HTTP, not HTTPS. "That was the first hint that something was wrong," he says. "That's a pretty big 'Security 101' thing to miss."

A second app, which Smith tested a few days later, also stored ePHI in unencrypted, plain text files. Though this app uses HTTPS, Smith notes in his blog that it sends usernames and passwords in plain text. "That was another big problem," he says.
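The transport mistakes Smith found are cheap to guard against in code. Below is a minimal sketch, using only Python's standard library, of the kind of check an app could run before transmitting anything sensitive; the endpoint URLs are hypothetical examples, not real services:

```python
from urllib.parse import urlparse

# Minimal guard: refuse to transmit sensitive data over anything but HTTPS.
# Endpoint URLs below are hypothetical.

def require_https(url):
    """Raise ValueError if the URL does not use HTTPS; return it otherwise."""
    scheme = urlparse(url).scheme.lower()
    if scheme != "https":
        raise ValueError(f"refusing to send ePHI over {scheme!r}: {url}")
    return url

require_https("https://api.example-health.test/glucose")     # accepted
try:
    require_https("http://api.example-health.test/glucose")  # plain HTTP
except ValueError as err:
    print(err)
```

A one-line guard like this is exactly the "Security 101" layer whose absence Smith noticed when glucose readings went out over plain HTTP.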

[Slideshow: 12 Tips to Prevent a Healthcare Data Breach]

Happtique suspended its application certification program in light of Smith's findings, but the application developers themselves (as well as healthcare IT news sites and blogs) glossed over the issue. This irked Smith: "As someone who develops mobile health software, if someone tells me they've found a vulnerability, I get to them right away."

At the very least, Smith says, mobile health applications need a pin screen and data encryption. The bigger issue, though, is the tendency of developers to treat mobile apps like desktop apps. That doesn't work in the "whole new Wild West" of mobile development, Smith says, where databases often go unencrypted and passwords must be properly hashed. Five years after the release of the Apple SDK, he adds, people are still trying to figure it all out.
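The password-hashing baseline Smith alludes to can be met with a standard library alone. A sketch using Python's hashlib.pbkdf2_hmac follows; the iteration count and salt size here are illustrative choices, not a recommendation, and real deployments should follow current guidance:

```python
import hashlib
import hmac
import os

# Sketch: salted PBKDF2 password hashing with Python's standard library.
# Iteration count and salt length are illustrative only.

ITERATIONS = 200_000

def hash_password(password, salt=None):
    """Return (salt, derived_key) for storage; never store the password itself."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password, salt, stored_key):
    """Compare in constant time against the stored derived key."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, stored_key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False
```

Storing the salt and derived key instead of the password is precisely what separates the apps Smith tested - which kept credentials in plain text files - from a defensible design.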

Is Red Tape to Blame for Poor Healthcare IT Security?

The same is true of healthcare's privacy and security regulations - which, to put it mildly, are conflicting. Sharon Klein, head of the privacy, security and data protection practice at the law firm Pepper Hamilton, notes that, in the United States, there are 47 different sets of (inconsistent) data breach regulations and multiple regulatory frameworks.

[ Study: Healthcare Industry CIOs, CSOs Must Improve Security ]

If there are overarching standards, they come from the National Institute of Standards and Technology, Klein says, noting that the Office for Civil Rights and the Department of Health and Human Services have "consistently" used NIST standards. At the same time, other agencies are getting involved:

  • The Federal Trade Commission emphasizes privacy by design in the collection, transmission, use, retention and destruction of data;
  • The Food and Drug Administration's guidance on cybersecurity in medical devices and hospital networks pinpoints data confidentiality, integrity and availability, and
  • The Federal Communications Commission, in the wake of weak 802.11 wireless security, has issued disclaimers regarding text messaging and geolocation with implications for clinical communications.

[ Related: Solving Healthcare's Big Data Analytics Security Conundrum ]

Given the regulatory inconsistencies, Klein says it's best to document everything you're doing and conduct vigorous training and awareness programs for all staff. "Minimum necessary" policies, which limit who gets to see which data and, critically, which change as an individual employee's role evolves, can eliminate unnecessary security holes, as can the appropriate de-identification of data.
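The "minimum necessary" idea Klein describes maps naturally onto role-based checks. In the sketch below - where the roles, record fields and sample data are all hypothetical - what an employee can see is derived from the current role, so access shrinks automatically when the role changes:

```python
# Sketch of a "minimum necessary" filter: visibility is derived from the
# current role, so changing an employee's role changes what they can see.
# Roles, field names and sample data are hypothetical.

ROLE_FIELDS = {
    "physician": {"name", "diagnosis", "medications", "lab_results"},
    "billing": {"name", "insurance_id", "billing_codes"},
    "front_desk": {"name", "appointment_time"},
}

def visible_fields(record, role):
    """Return only the record fields the given role is permitted to see."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "name": "Jane Doe",
    "diagnosis": "type 2 diabetes",
    "insurance_id": "INS-1042",
    "appointment_time": "2014-03-01 09:30",
}

print(visible_fields(record, "front_desk"))
# {'name': 'Jane Doe', 'appointment_time': '2014-03-01 09:30'}
```

An employee moving from the front desk to billing would, under this scheme, lose appointment visibility and gain billing fields the moment the role assignment changes - no per-user cleanup required.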

Software developers have additional priorities. If anything is regulated, isolate it, Klein says, and make sure you disclose to consumers what data you are obtaining, what you intend to do with it, what third parties will have access to it and whom to contact if there is an issue. Startups want to be first to market, she admits, but in the process - as Smith found - they can put security on the back burner, only to scramble to fill the gaps once vulnerabilities are discovered.

Balancing Healthcare IT Security and Accessibility

Experts largely agree that a cogent approach to health data security must balance security and accessibility, whether it's patients, physicians or third parties who want the data. This is especially important as the healthcare industry emphasizes more widespread health information exchange as part of a larger goal to provide more coordinated care.

"Security has for a long time been an afterthought. Now it has to be part of the build," says Glines - adding that, if it isn't, an app simply shouldn't be released.

Smith suggests that developers and security professionals hack iOS apps, as he did, and see for themselves how easy it is. Then, he says, they should ask, "If it's not that difficult, and [I'm] storing all that data on the phone, what can I do beyond what the OS offers?"

As it turns out, there are a "whole litany of things" that application developers can do, even in an ever-changing field. Specifically, Smith points to viaForensics' secure mobile development best practices, which apply to iOS and Android.

Given the findings of the Norse and SANS Institute study, Glines says it's worth having two conversations. One is with network administrators about the "basic blocking and tackling" work, such as actually changing default device passwords, which can bring about "simple, powerful change." The other is with executive staff about the implications of lax security - a conversation unfortunately made easier in the wake of the Target breach, which it turns out stemmed from systemic failures and not a single point of attack.

Regulators won't cut you any slack if a breach occurs, Klein says, especially if you knew vulnerabilities existed and didn't fix them. Under the new HIPAA Omnibus Rule, which went into effect in September 2013, firms face fines of up to $1.5 million in the event of the "willful neglect" of security issues.

Glines says boardrooms are beginning to shift their security mentality, but this will take time to trickle down. "In the next eight to 12 months, we will continue to see more front-page news" about data breaches, he says.

Brian Eastwood is a senior editor for CIO.com. He primarily covers healthcare IT. You can reach him on Twitter @Brian_Eastwood or via email.


This story, "Will Healthcare Ever Take IT Security Seriously?" was originally published by CIO.
