The best CTOs of 2010

A recession didn't stop these technology leaders from upping the ante on their technology or using it to survive tough times


Therein, Burke saw an opportunity in January 2010 to build and deploy a prepaid electricity billing and messaging system that offers consumers a lower cost than traditional prepaid electricity plans. Three months later, the new prepaid system became the first of its kind to market in Texas.

While building a new system is daunting for many companies, Burke had previously constructed Ambit Energy's billing systems from the ground up, making them smart-meter-ready before smart meters were distributed. In addition to supporting the prepaid business, the system was developed to process gigabits of usage data while simultaneously linking all customer information, enabling customers to manage their electricity over the Internet quickly and easily. The client-facing Web platform offers call center support screens and an SMS system with real-time notification of usage balances; it also integrates with real-time processing for cash payment centers to better serve prepaid customers.
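In outline, the prepaid mechanics are straightforward: metered usage draws down a prepaid dollar balance, and a low balance triggers a real-time SMS alert. The short Python sketch below illustrates only that flow; it is purely hypothetical -- the rates, threshold, and send_sms stub are invented for the example and are not Ambit Energy's actual code.

# Hypothetical sketch of a prepaid electricity balance tracker.
# Meter readings, rates, threshold, and the send_sms stub are illustrative only.

LOW_BALANCE_THRESHOLD = 10.00  # dollars; assumed alert level


def send_sms(phone, message):
    # Stand-in for a real SMS gateway call (for example, an HTTP API).
    print(f"SMS to {phone}: {message}")


class PrepaidAccount:
    def __init__(self, phone, balance, rate_per_kwh):
        self.phone = phone
        self.balance = balance          # prepaid dollars remaining
        self.rate_per_kwh = rate_per_kwh

    def apply_usage(self, kwh):
        """Deduct the cost of a smart-meter usage interval and alert if low."""
        self.balance -= kwh * self.rate_per_kwh
        if self.balance <= LOW_BALANCE_THRESHOLD:
            send_sms(self.phone,
                     f"Prepaid balance low: ${self.balance:.2f} remaining.")


account = PrepaidAccount("555-0100", balance=25.00, rate_per_kwh=0.12)
for reading_kwh in [40, 35, 60]:        # simulated interval reads
    account.apply_usage(reading_kwh)
    print(f"Balance: ${account.balance:.2f}")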

Bringing SSDs into the core of the data center

Chad Burney

Assistant Vice President for IT, COCC

Chad Burney spearheaded a project to implement SSD (solid-state disk) technology in the production data center at COCC, a financial processing firm. This project had the potential to greatly improve COCC's performance, but it could also damage the company's reputation if SSD's earlier performance issues were not addressed. Cost was also a factor: SSD continues to be regarded as too expensive to implement in the data center.

Burney proved the reliability of the SSD installation in the face of industry skepticism and developed a cost/performance model that predicted a positive return on investment in just three months. The key to Burney's innovation was his ability to see how SSD technology could eliminate the need for new computer hardware and the accompanying enterprise database software fees.

Before the SSD installation, COCC had limited its storage to 25 databases per production server in order to maintain service-level agreements. Due to record customer growth, COCC's SLA model would have required an additional server and more storage to be purchased at a cost of $106,800, plus $150,000 in enterprise database licensing fees. Burney recognized that the enormous increase in processing speed from installing SSD technology would eliminate the need to purchase the extra server and storage. The savings of $256,800 more than offset the $212,000 that he proposed spending for SSD.
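In rough terms, the arithmetic behind that model is simple to restate. The tally below uses the figures reported above and is only an illustration of the comparison, not COCC's actual cost/performance model.

# Cost figures as reported for COCC's SSD decision.
server_and_storage = 106_800      # cost of the additional server and storage
database_licensing = 150_000      # enterprise database licensing fees avoided
ssd_investment = 212_000          # proposed spend on SSD

avoided_costs = server_and_storage + database_licensing    # $256,800
net_benefit = avoided_costs - ssd_investment               # $44,800

print(f"Avoided costs: ${avoided_costs:,}")
print(f"Net benefit over SSD spend: ${net_benefit:,}")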

In August 2009, Burney's team migrated 80 percent of its production databases from Tier 1 Fibre Channel storage arrays to the new RamSan 620 SSD technology produced by Texas Memory Systems. The SSD technology not only generated the savings as predicted, it also reduced power consumption and footprint requirements by 80 percent, processed nightly production 85 percent faster, and accelerated transaction processing speed by 90 percent. The improvements enabled COCC to eliminate plans for additional hardware and software license purchases for the next two years.

Driving a hotel group to financial and energy efficiency

Tom Conophy

CIO, InterContinental Hotels Group

Tom Conophy became CIO of IHG, which manages more than 4,800 hotels in 100 countries, in 2006; since then, he has replaced many of the company's costly legacy systems with more leading-edge approaches. For example, Conophy made a considerable investment to upgrade IHG's call center technology to use cloud computing globally in support of "any agent, any call" routing.

Today, Conophy is leading an effort called Green Engage, which helps hotels understand their energy consumption and implement best practices to reduce energy usage and IHG's carbon footprint. After a successful pilot, Green Engage is now being rolled out across the organization.

To manage the many initiatives and ensure they deliver on efficiency and innovation goals, Conophy set up a cross-functional team charged with ensuring IHG has a defined plan for building enterprise-level modular, reusable software services that support numerous consumers, functions, and best practices. This team makes the hard technology decisions that set up IHG for long-term technical, business, and financial success.

Immersive 3-D improves training and operations

Phiroz P. Darukhanavala

CTO, BP

Phiroz "Daru" Darukhanavala heads a team at energy firm BP whose mission is to introduce external technology innovation to solve business problems that defy traditional IT solutions. Daru engages in at least one "game changing" technology introduction each year in which value is expected to exceed $50 million. In the past year, the game-changer focus has been 3-D virtual environments, used for training, collaboration, events, marketing, and operations.

An example of the technology's use is 3-D immersive training developed and deployed to 1,200 Arco AM/PM minimarket sites; research showed that trainees learned safety practices, food-handling standards, and baking steps with significantly less training time, greater retention of material, and improved consistency in baking products versus a control group. Likewise, another effort used 3-D technology to create a more efficient and effective way to plan and conduct corrosion inspections in Alaska operations.

When deciding to pursue the 3-D initiative, Darukhanavala recognized the significance of three converging developments. First, he saw that technology advances had made high-end computer graphics available on ordinary desktops and that the bandwidth necessary for the rich media was plentiful. Second, he realized that existing 3-D data from CAD, laser scanning, and photogrammetry tools could be used. Third, he saw that new suppliers and products had sprung up, creating many new 3-D business applications and an extensive ecosystem of suppliers and knowledge experts in 3-D immersive technologies.

Standardizing while making a major merger work

Scott Dillon

Head of Technology Infrastructure Services, Wells Fargo & Co.

The 2009 merger between Wells Fargo and Wachovia -- one of the largest in financial services history -- presented significant challenges in integrating the legacy infrastructures. Scott Dillon took on that effort, and the resulting infrastructure encompasses more than 60 petabytes of storage, includes more than 1 million square feet of data center space, and exceeds 200 MIPS in production. At the same time, it minimizes risk to production environments, maintains high availability and security for customers, and provides quicker time to market and increased efficiencies in the data centers.

To drive efficiencies, Dillon applied the approach of stabilize, standardize, and optimize. His team has successfully used standardized service offerings and all three kinds of virtualization (server, storage, and network), with more than 10,000 virtual devices currently in place. Doing so saved as much as $250 million by eliminating the need to build a new data center, while increasing computing power and reducing energy consumption. Under Dillon's leadership, Wells Fargo is headed toward a common infrastructure with common technologies in place.

One of the early challenges Dillon faced in the integration was simply ensuring that infrastructure was recognized as a key contributor to a successful merger. In many mergers, companies neglect to conduct thorough evaluations of the newly combined companies' individual backbones; some opt for a patchwork approach, which can drive up cost and degrade overall performance. Instead, Dillon kept infrastructure an integral part of the merger, ensuring it stays 6 to 12 months ahead of expected growth while constantly re-evaluating what is needed for upcoming integration activities.

He guided his team to evaluate, transition, and leverage the best technologies from both companies, which has resulted in a well-integrated infrastructure, ready to support future growth. At the management level, Dillon has assembled a leadership team that is an exact 50/50 split between the legacy companies, to create a unified technology group in the aftermath of the merger.

Switching to open source for lower costs, increased flexibility

Mark Friedgan

CIO, Enova Financial

Over the past 18 months Enova Financial CIO Mark Friedgan has moved much of the company's technology from proprietary systems to open source ones. For example, he replaced a call center platform without significantly changing the user experience, so the company didn't have to retrain the call center staff. The switch in workstations from Windows to Linux also let Friedgan reuse his existing PC hardware, deploying a single boot image despite the use of several types of PCs. Furthermore, the switch to Linux lets Friedgan's team update and change workstations in real time over the network, only rarely requiring a reboot.
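That live-update capability is the sort of thing a small script can convey. The sketch below is a hypothetical illustration of pushing package updates to a fleet of Linux workstations over SSH; the host names, the Debian-style apt-get commands, and passwordless sudo/SSH access are all assumptions for the example, not Enova's actual tooling.

# Hypothetical sketch: push package updates to Linux workstations over SSH
# without requiring a reboot. Host list and commands are illustrative only.
import subprocess

WORKSTATIONS = ["ws-001", "ws-002", "ws-003"]   # assumed host names


def update_host(host):
    """Run a non-interactive package upgrade on one workstation."""
    cmd = ["ssh", host, "sudo apt-get update && sudo apt-get -y upgrade"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    status = "ok" if result.returncode == 0 else "failed"
    print(f"{host}: upgrade {status}")


for ws in WORKSTATIONS:
    update_host(ws)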

Enova now also uses an open source software PBX, which eliminates per-seat licensing fees. Plus, Enova can now use features such as least-cost routing, voicemail, and statistical tracking that would cost extra on a traditional PBX. And because of the PBX's open source nature, Enova has been able to write its own applications to interface with it and provide new functionality such as call recording and automated dialing.
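To illustrate what that kind of integration can look like, the sketch below originates an automated outbound call through a manager-style interface of the sort exposed by popular open source PBXes such as Asterisk. The article does not name Enova's PBX, so the product, host, credentials, channel, and dial plan values here are all assumptions for the example.

# Hypothetical sketch: originate an automated outbound call through an
# Asterisk-style manager interface (AMI). Host, credentials, channel, and
# dial plan details are invented for illustration.
import socket


def ami_send(sock, action_lines):
    """Send one AMI action (a block of Key: Value lines) and return the raw reply."""
    sock.sendall(("\r\n".join(action_lines) + "\r\n\r\n").encode())
    return sock.recv(4096).decode(errors="replace")


with socket.create_connection(("pbx.example.com", 5038), timeout=10) as s:
    print(ami_send(s, ["Action: Login",
                       "Username: dialer",
                       "Secret: secret"]))
    print(ami_send(s, ["Action: Originate",
                       "Channel: SIP/outbound/15551230100",
                       "Context: autodial",
                       "Exten: s",
                       "Priority: 1",
                       "CallerID: Dialer <1000>"]))

Call recording would typically hang off the same interface or the PBX's dial plan, but the specifics vary by platform.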

The key to this project was choosing technologies that both satisfied the business needs of the users and prevented vendor dependence while keeping maintenance and deployment easy.

Parallelizing NFS file sharing

Garth Gibson

CTO, Panasas

Garth Gibson has been instrumental in the instigation, incubation, and adoption of Parallel NFS (pNFS) into version 4.1 of NFS, an IETF industry standard for file sharing. NFS v4.1 was offered to the IETF by the Network File System Working Group in late 2008, then approved and published as RFCs 5661 through 5664 in January 2010.

NFS 4.1 introduces into the NFS standard mechanisms for parallel access, enabling a cluster of servers (exporting file, object, or block services) to satisfy client data requests in parallel without store-and-forward copying through an NFS metadata server. Known as Parallel NFS, or pNFS, this parallel access enables an NFS service to scale single-system performance to meet the needs of large collections of high-performance clients.

Gibson has been a driving force behind pNFS since the idea was born in 2003, out of a conversation among Gibson, Gary Grider of Los Alamos National Laboratory, and Lee Ward of Sandia National Laboratories. As a grad student at the University of California at Berkeley, Gibson did the groundwork research and cowrote the seminal 1988 paper on RAID.

With pNFS now incorporated into the NFS standard, Gibson is focused on gaining widespread adoption, which depends on the availability of client code in popular client operating systems. Gibson and his Panasas team continue to lead the development of a reference Linux implementation and its adoption into the Linux kernel; pNFS is expected to be deployed in Linux distributions and offered by multiple vendors by 2011. Getting pNFS into the NFS standard required a lengthy process involving a community of storage technology leaders, including Panasas, IBM, EMC, Network Appliance, Sun Microsystems, and the University of Michigan's Center for Information Technology Integration (CITI).
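For administrators curious whether a given Linux client is ready for that future, the NFS version negotiated for each mount is visible in the options the kernel records in /proc/mounts. The sketch below reads them; on kernels of this era the option may appear as minorversion=1 rather than vers=4.1, so treat the check as illustrative.

# Sketch: list mounted NFS filesystems and flag those that negotiated
# version 4.1 (and hence can use pNFS where the server supports it).

def nfs_mounts():
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype, options = line.split()[:4]
            if fstype.startswith("nfs"):
                yield mountpoint, options


for mountpoint, options in nfs_mounts():
    version = "4.1" if "vers=4.1" in options else "other"
    print(f"{mountpoint}: NFS version {version} ({options})")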

Restoring a company's faith in technology -- and in IT

Kris Herrin

CTO, Heartland Payment Systems

Kris Herrin began transforming IT at Heartland Payment Systems from a startup-style operation into a mature, ITIL-oriented service organization during his tenure as CSO, when he drove the response to the criminal intrusion into Heartland's card processing environment.

When Herrin took over as CTO in August 2009, he laid out three core principles for the IT service delivery and operations teams: security, reliability, and excellent service delivery. As fate would have it, within two weeks of Herrin's taking on the CTO role, Heartland experienced a core network switch hardware failure that cascaded into the main data center and brought the major revenue-generating systems offline.

Herrin set out a bold goal for his teams to rally behind: He announced that in November he would personally pull the plug on a core switch to simulate the catastrophic failure. The project aimed to ensure the security and reliability of the company's revenue-generating processing platforms and to validate IT's ability to deliver excellent information technology services. On November 17, two months after announcing the mission, Herrin did as promised and pulled the plug on the key switch.

This time, there was no disaster, because the IT team had executed on the effort Herrin set up just three months prior: the analysis, design, and implementation of a new active/passive real-time processing environment, spanning from the network layer through the many critical applications, designed to ensure card processing availability would meet the stringent needs of the business. The dramatic procedure helped restore the morale of the IT service teams, who had been demoralized by years of unmanaged growth, a major security breach in March 2009, and the switch failure in August 2009. It also illustrated to both the IT teams and the corporation the importance of the work IT does every day to plan and execute initiatives essential to the company's ongoing operations.
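The active/passive principle at the heart of such an environment reduces to a simple control loop: continuously check the health of the active path and promote the passive path when the active one fails repeatedly. The Python sketch below illustrates that loop only; the endpoints, thresholds, and promotion step are assumptions for the example, not details of Heartland's implementation.

# Hypothetical sketch of an active/passive failover control loop.
# Endpoints and the promotion hook are invented for illustration.
import time
import urllib.request

ACTIVE = "http://active.example.internal/health"
PASSIVE = "http://passive.example.internal/health"
CHECK_INTERVAL = 5                 # seconds between health checks
FAILURES_BEFORE_FAILOVER = 3


def healthy(url):
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False


def promote_passive():
    # Stand-in for the real promotion step (e.g., updating routing or VIPs).
    print("Active site unhealthy; promoting passive site to active.")


failures = 0
while True:
    if healthy(ACTIVE):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURES_BEFORE_FAILOVER and healthy(PASSIVE):
            promote_passive()
            break
    time.sleep(CHECK_INTERVAL)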

Just one year to put an IT infrastructure in place -- and cut costs in half

Dennis Hodges

CIO, Inteva Products
