CIOs have to learn the new math of analytics

In December, the cold, hard math collided with high emotion: Uber's algorithm automatically jacked up rates in Sydney, Australia, as people tried to get away from a downtown café where an armed man held 17 people hostage. Three people, including the gunman, died. Uber later apologized for the fare increases, which reportedly reached quadruple normal rates, and issued refunds. "It's unfortunate that the perception is that Uber did something against the interests of the public," a local Uber manager said in a blog post. "We certainly did not intend to."

Problems are most likely to arise when algorithms make things happen automatically, without human intervention or oversight. Control is critical, says Alistair Croll, a consultant and author of Lean Analytics: Use Data to Build a Better Startup Faster. "If algorithms are how you run your business and you haven't figured out how to regulate your algorithms," he says, "then by definition you're losing control of your business."

Uber is working on a global policy to cap prices in times of disaster or emergency, a spokeswoman says.
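What such a cap could look like in practice is simple to sketch. The snippet below is purely illustrative -- the function, thresholds and emergency flag are invented for this article, not drawn from Uber's systems -- but it shows the basic guardrail pattern: the demand model still proposes a multiplier, and a policy layer gets the last word.

```python
# Illustrative guardrail: the pricing model proposes a surge multiplier,
# but a policy layer clamps it. All names and thresholds are hypothetical.

NORMAL_CAP = 4.0      # highest multiplier the demand model may request
EMERGENCY_CAP = 1.0   # no surge at all while an emergency is declared

def capped_multiplier(model_multiplier: float, emergency_declared: bool) -> float:
    """Clamp the model's requested surge multiplier to policy limits."""
    cap = EMERGENCY_CAP if emergency_declared else NORMAL_CAP
    return min(max(model_multiplier, 1.0), cap)

print(capped_multiplier(4.3, emergency_declared=False))  # 4.0
print(capped_multiplier(4.3, emergency_declared=True))   # 1.0
```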

Other unintended consequences involve the liability of knowing too much.

For example, say a hospital uses patient data to identify people who may be headed toward an illness, then calls them to schedule preventive care. If the math is imperfect, the hospital might overlook someone who later contracts an illness or dies. Or a whole group of people could get overlooked. "There's concern about who are the winners and losers and can the company stand by it later, when exposed," Pasquale says.
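One way to surface that risk before it becomes a lawsuit is to measure the model's miss rate separately for each patient group, not just overall. The sketch below uses invented data and group labels; the point is that a respectable aggregate number can hide a subgroup the algorithm systematically overlooks.

```python
# Illustrative check of a screening model's miss rate, overall and by group.
# Records are (flagged-for-outreach?, later-became-ill?) pairs; all invented.

def false_negative_rate(predictions, outcomes):
    """Share of people who actually got sick but were never flagged."""
    missed = sum(1 for p, o in zip(predictions, outcomes) if o and not p)
    actually_sick = sum(outcomes)
    return missed / actually_sick if actually_sick else 0.0

group_a = [(True, True), (False, False), (True, True), (False, True)]
group_b = [(False, True), (False, True), (True, True), (False, False)]

for name, records in [("group A", group_a), ("group B", group_b)]:
    preds, outs = zip(*records)
    print(name, round(false_negative_rate(preds, outs), 2))
# group A 0.33 vs. group B 0.67: the model misses group B twice as often.
```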

In another scenario, a company could open itself up to discrimination claims if it keeps too much data and insights about its employees, he says. Someone might be able to prove the company knew about, say, a health condition before letting him go.

Or if a car insurance company discovers there's a higher chance a customer will get into a crash after driving a certain number of miles, it may find itself in a "duty to warn" situation, Pasquale says. That's when a party is legally obligated to warn others of a potential hazard that they otherwise couldn't know about. It usually applies to manufacturers in product liability cases, or to mental health professionals in situations involving dangerous patients. And as the use of revelation-producing algorithms spreads, Pasquale says, people in other sectors could be subject to a similar standard--at least ethically, if not legally.

"At what point will things be a liability for you by knowing too much about your customers?" he asks.

Sometimes companies don't set out to uncover uncomfortable truths. They just happen upon them.

Insurance company executives, for example, should think carefully about results that could emerge from algorithms that help with policy decisions, says Croll, the consultant and author. That's true even when a formula looks at metadata -- descriptions of customer data, not the data itself. For example, an algorithm could find that families of customers who had changed their first names were more likely to file claims for suicide, he speculates. Further analysis could conclude that those customers were likely transgender people who had struggled with their transitions.

An algorithm that identified that pattern would have uncovered a financially valuable piece of information. But if the insurer then acted on it, turning down applicants who had changed their first names or charging them higher premiums, the company could appear guilty of discrimination, Croll says.
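One defensible practice is to screen candidate features before they ever enter a model. The sketch below, with toy data and an invented review threshold, measures how strongly a feature predicts membership in a protected class; a high lift means the feature is effectively a proxy for that class, and the legal exposure is much the same as using the protected attribute directly.

```python
# Hypothetical pre-deployment screen for a proxy feature. All data,
# names and the 1.25 review threshold are invented for illustration.

def association(feature, protected):
    """Lift of P(protected | feature) over the base rate P(protected)."""
    base = sum(protected) / len(protected)
    with_feature = [p for f, p in zip(feature, protected) if f]
    rate = sum(with_feature) / len(with_feature) if with_feature else 0.0
    return rate / base if base else 0.0

# Per applicant: candidate feature flag and protected-class membership.
feature   = [1, 1, 0, 0, 1, 0, 0, 0]
protected = [1, 1, 0, 0, 0, 0, 1, 0]

lift = association(feature, protected)
print(f"lift = {lift:.1f}")  # 1.8 in this toy data
if lift > 1.25:
    print("feature flagged for legal and ethics review before use")
```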

The CIO's Best Role

The best way a CIO can support data science is to choose technologies and processes that keep data clean, current and available, says Chris Pouliot, vice president of data science at Lyft, a competitor of Uber. Before joining Lyft in 2013, Pouliot was director of algorithms and analytics at Netflix for five years and a statistician at Google.

CIOs should also create systems to monitor changes in how data is handled or defined that could throw off the algorithm, he says. Another key: CIOs should understand how best to use algorithms, even if they can't build algorithms of their own.
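A monitor like that doesn't have to be elaborate. The sketch below, with invented numbers, flags a batch of input data whose average has drifted far from the historical baseline -- the kind of silent definitional change, such as a unit switch or a renamed field, that can quietly corrupt an algorithm's output.

```python
# Minimal drift monitor: alert when a fresh batch of an input metric
# departs from its historical distribution. Thresholds are illustrative.

from statistics import mean, stdev

def drift_alert(baseline, current, z_limit=3.0):
    """Alert if the current batch mean drifts beyond z_limit standard errors."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(current) != mu
    z = abs(mean(current) - mu) / (sigma / len(current) ** 0.5)
    return z > z_limit

# e.g. average trip distance fed to a pricing model, last month vs. today
baseline = [3.1, 2.9, 3.0, 3.2, 3.0, 2.8, 3.1]
today    = [5.9, 6.1, 6.0, 5.8]   # an upstream team switched miles to km
print(drift_alert(baseline, today))  # True: investigate before trusting output
```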

Consider a payment service that needs to figure out whether pending transactions could be fraudulent. It might hard-code an algorithm into its payment software so that every transaction is scored in real time. Or the algorithm could run offline, with the results applied after each transaction settles, potentially blocking future transactions. The CIO has to understand enough about what the service is and how the algorithm works to make such calls, Pouliot says.
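The two integration styles look roughly like this in code. Everything here is invented for illustration -- the scoring rule especially is a stand-in for a real fraud model -- but the architectural trade-off is the real decision: block a payment synchronously, or review settled transactions in batch and stop the next one.

```python
# Sketch of inline vs. offline fraud scoring. Function names, fields
# and the toy scoring rule are hypothetical.

def score(txn: dict) -> float:
    """Stand-in fraud model: returns a risk score in [0, 1]."""
    return 0.9 if txn["amount"] > 5000 else 0.1

def authorize_inline(txn: dict) -> bool:
    """Synchronous path: the model runs before the money moves."""
    return score(txn) < 0.8

def review_offline(settled_txns: list) -> set:
    """Batch path: flag risky accounts after the fact to stop future activity."""
    return {t["account"] for t in settled_txns if score(t) >= 0.8}

txns = [{"account": "a1", "amount": 120}, {"account": "a2", "amount": 9000}]
print(authorize_inline(txns[1]))   # False: blocked at payment time
print(review_offline(txns))        # {'a2'}: blocked for future transactions
```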

CIOs should, of course, provide the technology infrastructure to run corporate algorithms, and the data they require, says Mark Katz, CIO of the American Society of Composers, Authors and Publishers, which licenses, tracks and distributes royalties to songwriters, composers and music publishers.

Katz meets regularly with ASCAP's legal department to make sure the results of the algorithms comply with the organization's charter and pertinent regulations.

"We're all information brokers at the end of the day," he says.

CIOs can expect increasing scrutiny of analytics programs. The Federal Trade Commission, in particular, is watching the use of algorithms by banks, retailers and other companies that may inadvertently discriminate against poor people. An algorithm to advise a bank about home loans, for example, might unfairly predict that an applicant will default because certain characteristics about that person place him in a group of consumers where defaults are high.

Or online shoppers might be shown different prices based on criteria such as the devices they use to access an e-commerce site, as has happened with Home Depot, Orbitz and Travelocity. While companies may think of it as personalization, customers may see it as an unfair practice, Luca says.
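Whether the model decides loans or prices, the audit a regulator is likely to run reduces to the same arithmetic: compare outcomes across groups. The sketch below applies the four-fifths rule familiar from employment law to invented approval data; it is one yardstick among several, not a legal standard for lending or retail.

```python
# Illustrative disparate-impact check: compare approval rates across
# groups against the four-fifths rule. All decision data is invented.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = approved, 0 = denied, per applicant
group_a = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"impact ratio = {ratio:.2f}")        # 0.50
if ratio < 0.8:
    print("below the four-fifths threshold: audit the model's features")
```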

The Consumer Federation of America recently expressed concern that, in the auto insurance industry, pricing optimization algorithms could violate state insurance regulations that require premiums to be based solely on risk factors, not profit considerations.

Consumers, regulators and judges might start asking exactly what's in your algorithm, and that's why algorithms need to be defensible. In a paper published last year in the Boston College Law Review, researchers Kate Crawford and Jason Schultz proposed a system of due process that would give consumers affected by data analytics the legal right to review and contest what algorithms decide.

The Obama administration recently called on civil rights and consumer protection agencies to expand their technical expertise so that they'll be able to identify "digital redlining" and go after it. In January, President Obama asked Congress to pass the Consumer Privacy Bill of Rights, which would give people more control over what companies can do with their personal data. The president proposed the same idea in 2012, but it hasn't moved forward.

Meanwhile, unrest among some consumers grows. "Customers don't like to think they are locked in some type of strategic game with stores," Pasquale says. CIOs should be wary when an algorithm suddenly produces outliers or patterns that deviate from the norm, he warns. Results that seem to disadvantage one group of people, he says, are also cause for concern. Even if regulators don't swoop in to audit the algorithms, customers may start to feel uneasy.
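Operationalizing that wariness can be as simple as logging every decision the algorithm makes and flagging values outside the historical range. The sketch below uses an interquartile-range test on invented daily averages; a flagged value is a prompt to investigate, not proof of a problem.

```python
# Simple output monitor: flag recent algorithm outputs that fall outside
# 1.5 * IQR of the historical record. All numbers are illustrative.

def outliers(history, recent):
    """Return recent values outside the historical interquartile fence."""
    s = sorted(history)
    q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return [v for v in recent if v < lo or v > hi]

# e.g. daily average premium quoted by a pricing algorithm
history = [102, 98, 105, 99, 101, 97, 103, 100, 104, 96, 101, 99]
recent  = [100, 148, 101]
print(outliers(history, recent))  # [148]: something changed; investigate
```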

As Harvard's Luca puts it, "Almost every type of algorithm someone puts in place will have an ethical dimension to it. CIOs need to have those uncomfortable conversations."
