Network World - Who needs steel when you've got storage? Not Pittsburgh transplant Greg Ganger, that's for sure.
Ganger, a Carnegie Mellon University lab director, has grand plans for creating highly automated and cost-effective ways to manage large-scale storage infrastructures. The backdrop for his research will be Carnegie Mellon's new Data Center Observatory (DCO), the latest evidence that high-tech is becoming as important to Pittsburgh as steel once was.
Run by the university's Parallel Data Laboratory (PDL), a renowned storage research center, the DCO will serve another role. Various campus constituencies will use DCO computing and storage resources for computationally intensive projects. After all, says the affable Ganger, to figure out how to build large-scale self-managing, self-healing and self-tuning storage systems that work for complex, sprawling infrastructures, you need firsthand access to such an environment - complete with live production data.
Bringing in some university computing operations and creating a full-blown data center made sense, Ganger says. "If the computational clusters were elsewhere, then . . . a lot of the real problems weren't going to show up," he explains.
In taking on responsibility for that user environment, Ganger dons a new hat - that of IT go-to guy. He adds it to an impressive existing array, including researcher, communicator, industry liaison, adviser, mentor and professor (he teaches classes on distributed systems, operating systems and storage systems). His first task as IT guy has been to lobby university research groups to forgo running their own compute clusters in favor of tapping into the DCO's newfangled shared infrastructure. These groups work on scientific visualization, earthquake simulation, nanotechnology research and other big data-mining projects.
Ganger and the DCO team are supporting those first users on generic servers and storage systems filling 12 IT racks and sitting in their own enclosure along with the necessary power and cooling systems (from American Power Conversion). Over the next three years, the team will add three enclosures to the DCO, bringing the number of IT racks to 40. Ganger expects then to be supporting more than a petabyte of storage and 4,000 GHz of aggregate processing power for the university constituents. The IT systems will consume 774 kilowatts of power, which Ganger equates to the consumption of 750 average-sized homes.
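As a quick sanity check on that comparison, the arithmetic can be sketched as follows. The per-home figure is an assumption on our part (the article does not state what an "average-sized home" draws), but roughly 1 kW of continuous draw is a common ballpark:

```python
# Sketch: check the 774 kW vs. 750-homes comparison from the article.
# Assumption (not from the article): an average home draws about 1 kW
# of continuous power, i.e. roughly 8,760 kWh per year.
dco_power_kw = 774        # projected draw of the full 40-rack DCO
equivalent_homes = 750    # the article's stated equivalent

kw_per_home = dco_power_kw / equivalent_homes
print(f"{kw_per_home:.2f} kW per home")  # about 1.03 kW, consistent with ~1 kW/home
```

At roughly 1 kW per home, the article's figure is internally consistent: the full build-out would draw about as much power as a small residential neighborhood.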