FlightAware's business is soaring. In the year since its launch, the company, which tracks private and commercial air traffic in the United States, has been serving up about half a million requests a day, with demand doubling every few weeks.

It's a huge load for the small Houston company, and one that required its founders to think creatively when building out the infrastructure to support its rapid growth. One innovative move: Last summer, FlightAware became an early adopter of new dual-core x86-based servers.

"All the TV networks turn to FlightAware to track flights whenever there are aviation incidents, which poses a problem for us, because it's a phenomenal amount of load," says Daniel Baker, FlightAware's CEO.

By moving its two PostgreSQL databases from Intel Pentium 4 systems onto 64-bit-capable, dual-core Opteron-based servers, FlightAware can handle huge spikes in traffic without increasing its number of servers.

"So while our load doubles every few weeks, our performance stays about the same," Baker says. "I consider the fact we haven't had a decrease in performance a win."

Dual-core processors are the first wave in an industry move toward multicore chip designs as a way around the heat and power problems that come with faster-running processors. Rather than pumping up clock speed, these chips squeeze multiple processing engines onto a single piece of silicon, enabling more work to be done at lower clock speeds, with less heat output and lower power demands. These chips are also multithreaded, meaning they can handle multiple application instructions simultaneously.

While IBM has had a dual-core processor since 2001, an industrywide shift is only now beginning. Sun and HP introduced dual-core Unix processors in 2004, and last year Intel and Advanced Micro Devices (AMD) moved x86 servers into the multicore arena with dual-core Opteron and Xeon processors.
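Multicore hardware only pays off when software spreads independent work across the cores. A minimal, hypothetical sketch of that structure (the function names and workload are invented for illustration; note that in CPython, CPU-bound work needs a `ProcessPoolExecutor` to get true parallelism past the global interpreter lock, while a thread pool is shown here only to keep the sketch self-contained):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical request-handling workload: each "request" is
# independent, so a pool of workers can spread requests across cores.
def handle_request(flight_id: int) -> str:
    # Stand-in for a database lookup or map render.
    return f"flight-{flight_id}: tracked"

def serve(flight_ids, workers=4):
    # One worker per hardware thread is a common starting point on a
    # dual-core, dual-socket box (four cores total).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves input order in its results.
        return list(pool.map(handle_request, flight_ids))

results = serve(range(3))
print(results)  # ['flight-0: tracked', 'flight-1: tracked', 'flight-2: tracked']
```

A single-threaded application, by contrast, would occupy one core and leave the others idle, which is why unoptimized software "can't take full advantage of the platform."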
Sun began shipping an eight-core UltraSPARC server at the end of last year, and start-ups such as Azul Systems are designing their own multicore systems. Intel and AMD both say they will have quad-core processors shipping by early next year.

Analysts say initial interest in multicore servers centers on the fact that they pack more power into smaller, and fewer, packages, which means easier management, less cabling, lower power demands and reduced heat output. According to IDC, nearly one-quarter of the $12.5 billion spent on servers in the third quarter of last year went to dual-core systems. In the fourth quarter, spending on AMD- and Intel-based dual-core systems more than doubled compared with the previous quarter, IDC says.

While adoption is steady, as with any transition there are growing pains. One of the biggest issues has been how "per CPU" software will be licensed now that the definition of a CPU is muddied by multiple processing units fitting into a single CPU socket.

Independent software vendors have made progress during the past year, with plans either to charge per socket, as Microsoft and VMware do, or to charge a small premium for multicore systems. Oracle, for example, has menulike pricing for the different multicore platforms, counting each x86 core as half a processor for licensing purposes and each core on Sun's eight-core UltraSPARC T1 chip as a quarter of a processor.

Nevertheless, most early adopters are running open source or custom-built software on these multicore servers, making licensing a non-issue, at least for now. Ironing out the licensing tangle so that IT buyers can better understand the costs of multicore servers should lead to more widespread adoption, analysts say. There is movement in that direction.
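The per-core factors described for Oracle's pricing make the arithmetic concrete. A hedged sketch (the factor table and function name are illustrative stand-ins, not Oracle's actual price list; the convention of rounding fractional totals up to a whole license is an assumption):

```python
import math

# Per-core licensing factors as described above: each x86 core counts
# as half a processor, each core on Sun's eight-core UltraSPARC T1 as
# a quarter. (Illustrative values only.)
CORE_FACTOR = {"x86": 0.5, "ultrasparc_t1": 0.25}

def licenses_needed(platform: str, cores: int) -> int:
    # Assumed convention: fractional totals round up to the next
    # whole processor license.
    return math.ceil(cores * CORE_FACTOR[platform])

print(licenses_needed("x86", 4))            # two dual-core x86 sockets -> 2
print(licenses_needed("ultrasparc_t1", 8))  # one eight-core T1 chip  -> 2
```

Under these factors, a fully loaded eight-core T1 system licenses like a two-processor box, which is the kind of clarity analysts say buyers need before adoption widens.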
HP and Novell, for example, late last year announced a hardware-software bundle that lets customers buy SuSE Linux licenses based on the number of servers, regardless of whether they are single- or dual-core and regardless of how many virtualized images might be running on the physical machine.

Another issue is that while some applications, such as those written in Java, are designed to take advantage of multithreaded environments, others aren't, meaning they can't fully exploit the new architecture.

"But today you won't be losing a step with dual-core, just like [the x86] 64-bit processors will run 32-bit applications fine and dandy," says Charles King, principal analyst with Pund-IT. "It's not going to cut down performance; it's just you won't be able to take full advantage of the platform until optimized operating systems and applications are available."

Matthias Schorer, chief architect at Fiducia IT in Munich, says updates to Java and Solaris made his company's Java-based application run even better on Sun's new eight-core Sun Fire T2000 servers, code-named Niagara. The company provides infrastructure services to about 900 banks in Germany, supporting some 100,000 workstations and 20,000 automated teller machines. It runs more than 800 single-core UltraSPARC-based systems but plans to make the transition to the T2000.

Seeing double

Dual-core and multicore processors offer more processing power in energy-efficient packages. Things to consider when deploying them:

• Independent software vendor impact: Progress has been made, but ISVs are still feeling their way when it comes to licensing on multicore systems. Make sure you understand what the costs will be.

• Application applicability: Not all applications and operating systems are yet tuned for the new multicore platforms.
Figure out which applications will see the biggest performance boost, such as those with compute-intensive, number-crunching workloads, and start your migration with those.

• Single point of failure: Consolidating multiple workloads on a single physical system can mean trouble if there is a glitch. Provide redundancy to avoid problems.

• Bothersome bottlenecks: With more processing engines working in a single socket, the transfer of data among memory, I/O and other CPUs can get bogged down. Make sure you know how the multicore chip is designed to handle this.

"You have to use the right Java virtual machine. We saw double the throughput" compared with the less-optimized version of Java, Schorer says. "Sun has put a lot of effort into optimizing Java for Solaris 10 to run smoothly on Niagara."

In addition to the performance increase, Schorer likes the multicore design of the T2000 for its ability to save space and cut heat output and power demands. His current UltraSPARC servers draw about 1.3 kilowatts each, compared with 0.35 kilowatts for the Niagara servers, he says.

"That's a big thing given the fact that we could replace four [UltraSPARC servers] with one Niagara," he says.

Stephen Smith, manager of automation and systems integration at Starz Entertainment Group in Englewood, Colo., liked the performance boost his digital-encoding application got with dual-core Opteron-based servers from HP. By shifting from single-core Xeon servers to the dual-core boxes, Smith cut the number of servers he needed by four, saving on hardware. He also cut costs on cabling, drives and other items tied to individual servers.

"Every machine that needs to be able to talk to the storage needs two Fibre Channel HBAs, space on the blades of the Fibre switch, the actual switch itself and the ports going into the storage. Plus all of the software licenses that go on top of that," he says.
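Schorer's consolidation math is easy to check with the figures he quotes. A quick sketch (the wattages are the approximate numbers cited above; the variable names are ours):

```python
# Approximate power draw per server, in kilowatts, as quoted above.
ULTRASPARC_KW = 1.3
NIAGARA_KW = 0.35

# Replacing four single-core UltraSPARC servers with one Niagara box.
old_draw = 4 * ULTRASPARC_KW   # 5.2 kW for the four old servers
new_draw = 1 * NIAGARA_KW      # 0.35 kW for the one replacement
savings = old_draw - new_draw

print(f"{savings:.2f} kW saved ({savings / old_draw:.0%} reduction)")
# 4.85 kW saved (93% reduction)
```

On those numbers, the four-to-one consolidation cuts power draw by more than nine-tenths, before counting the space and cooling that go with it.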
"We reduced our costs by more than a quarter by going with the AMD-powered systems."

A downside is that consolidating multiple servers onto fewer boxes can create a single point of failure, early adopters say.

"Absolutely, you get easier management with dual-core servers, but that has downsides, too," FlightAware's Baker says. "If one of them goes down, you've lost that much more of your capacity."

As a result, FlightAware is taking its move into the dual-core world slowly, running only its databases on that platform. The front-end servers, which include Web servers and a server that generates the maps charting an airplane's progress toward its destination, run on single-core Opteron systems.

"So if they do fail, it's less of a percentage of our total capacity," he says.