Best of Interop 2009 Winners Announced: Any Surprise Winners?

Best of Interop 2009 Winners

In addition to these eight category winners, two special awards were also presented:

Best of Interop – Overall: VMware - VMware vSphere 4

With over 160 products vying for recognition, deciding the top winner for BOI is always a challenge. Interestingly enough, the choice for Best of Interop this year became a knock-down, drag-out contest between two companies whose products actually fit hand in glove. Cisco's new UCS and the newest iteration of VMware's venerable virtualization platform, vSphere 4, were top-flight contenders to the very end. Though it was almost a chicken-and-egg argument, vSphere 4 emerged as the winner because of its outstanding innovation, industry-wide impact and substantial improvement over previous generations.

Cloud Computing & Virtualization: VMware - VMware vSphere 4

Category Judges:

Charles Babcock – InformationWeek

Steven Schuchart – Current Analysis, Inc.

The adoption of server virtualization is growing by leaps and bounds every day, and VMware has really stepped up in delivering a next-generation platform offering advanced features and capabilities that will propel x86-based server virtualization well into the next decade. The new and beefier vSphere 4 can manage up to 1,280 virtual machines on 32 servers - an average of 40 VMs per server - although there's no reason vSphere can't manage more than 40 per server if the customer wishes to move in that direction.

Each server in a vSphere cluster or private cloud may have up to 64 cores, and each VMware host under vSphere 4 can support up to 32 terabytes of RAM. But the statistical dimensions of vSphere 4 are not as important as the way it changes how we think about the data center. VMware's new vSphere 4 gives enterprises new levels of reliability and new features that competitors in this market space cannot currently match.

For example, the innovative Fault Tolerance feature of vSphere 4 allows enterprises to designate two identical blades within a chassis as a fault-tolerant pair. When one blade fails, the second blade has an identical copy of the first, all the way down to memory and internal IO operations. This allows for instant failover, a feature that in the past has only been found on specialty servers designed for full fault tolerance. VMware Fault Tolerance can also automatically bring another identical blade into sync, re-creating a fault-tolerant pair if such a blade is available.
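
To make that failover behavior concrete, here is a minimal toy model in Python of how a fault-tolerant pair could be tracked; it is a sketch of the concept only, and the class and method names are invented rather than taken from VMware's implementation or APIs.

```python
# Toy model of a fault-tolerant blade pair: a primary and a lockstep
# secondary, with instant promotion on failure and re-pairing from a spare.
# Class and method names are illustrative, not VMware's implementation.

class Blade:
    def __init__(self, name):
        self.name = name

class FTPair:
    def __init__(self, primary, secondary, spares=None):
        self.primary = primary
        self.secondary = secondary      # kept in lockstep with the primary
        self.spares = list(spares or [])

    def handle_failure(self, failed):
        """Promote the survivor, then rebuild the pair from a spare if one exists."""
        if failed is self.primary:
            # Instant failover: the secondary already holds identical state.
            self.primary = self.secondary
        self.secondary = self.spares.pop(0) if self.spares else None
        return self.primary, self.secondary

pair = FTPair(Blade("blade-1"), Blade("blade-2"), spares=[Blade("blade-3")])
new_primary, new_secondary = pair.handle_failure(pair.primary)
print(new_primary.name, new_secondary.name)   # blade-2 blade-3
```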

Then there's vShield Zones, which addresses the security vulnerability of virtual machines by giving them a zone definition of the security measures that must accompany their operation. This definition then follows the virtual machine when it is moved by vMotion from one server to another. VMware Host Profiles in vSphere 4 make it much easier to determine which combination of components is a balanced one for the workload under consideration. And for added resource conservation, VMware Distributed Power Management tracks which servers are underutilized and could be shut down if their virtual machines were migrated to another server.
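
That power-management idea boils down to a simple consolidation loop: find hosts running below a utilization floor, check whether another host can absorb their load, then migrate and power down. Below is a hypothetical sketch of that logic in Python; the thresholds and host data are invented, and this is not VMware's actual algorithm or API.

```python
# Hypothetical DPM-style consolidation sketch: if a host is underutilized
# and another host has headroom for its load, plan a migrate-and-power-off.
UNDERUTILIZED = 0.20   # below 20% utilization, a host is a power-down candidate
HEADROOM      = 0.80   # never fill a target host past 80%

utilization = {"esx-01": 0.15, "esx-02": 0.55, "esx-03": 0.40}

def plan_power_down(utilization):
    """Return (host_to_power_off, migration_target) pairs."""
    plan = []
    for host, util in utilization.items():
        if util >= UNDERUTILIZED:
            continue
        for target, t_util in utilization.items():
            if target != host and t_util + util <= HEADROOM:
                plan.append((host, target))   # migrate VMs, then power off host
                break
    return plan

print(plan_power_down(utilization))   # [('esx-01', 'esx-02')]
```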

Under vSphere 4, the fully virtualized data center is really beginning to take shape. vSphere 4 is a big step forward in server virtualization, bringing new features and value along with general upgrades that make it truly Best of Interop. – Charles Babcock

Collaboration & UC: Cisco - WebEx Node on Cisco ASR 1000 Series Routers

Category Judges:

Nick Hoover – InformationWeek

Brad Shimmin – Current Analysis, Inc.

While cloud computing and software as a service continue to gain traction in the enterprise, bandwidth and performance remain ongoing concerns for IT pros. Cisco's WebEx Node on ASR 1000 Series routers is a blade that runs WebEx software as an on-premises extension of Cisco's hosted service, greatly improving performance while decreasing bandwidth requirements.

Without a WebEx Node, all of a company's WebEx sessions connect over the Internet via disparate streams, potentially using up large amounts of bandwidth, especially with WebEx's new video and voice capabilities.

The WebEx Node acts as a point of presence at the edge router, meaning that internal meetings are hosted and switched on site at the closest available node to consolidate required bandwidth so that, in the case of a company with a huge meeting, there's only one stream instead of hundreds. Since the WebEx Node is embedded in the ASR, network admins can also continue to maintain control over network policies via ASR features like deep packet inspection.

One success story: according to Cisco, one WebEx Node customer needed to hold monthly meetings with thousands of employees but didn't want to purchase all the bandwidth required for once-a-month meetings, and instead rented microwave links that didn't provide the best user experience. The WebEx Node has saved enough bandwidth that the company has turned on other pieces of WebEx, like video, that it was hesitant to bring on board before. Cisco estimates companies using the WebEx Node can decrease WebEx bandwidth use by up to 90% and WAN costs by up to 67%.
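
The bandwidth math behind that consolidation is easy to sketch. The per-attendee stream rate and head count below are invented for illustration; only the idea of one consolidated stream versus many comes from the description above.

```python
# Why one consolidated stream saves WAN bandwidth: without an on-site node,
# every internal attendee pulls a separate stream over the Internet; with a
# node, a single stream is fanned out locally. Figures are assumptions.
attendees       = 300
per_stream_mbps = 1.5        # assumed audio/video rate per attendee stream

without_node = attendees * per_stream_mbps   # hundreds of parallel WAN streams
with_node    = 1 * per_stream_mbps           # one stream to the node on site

savings = 1 - with_node / without_node
print(f"WAN load: {without_node:.0f} Mbps -> {with_node:.1f} Mbps ({savings:.1%} saved)")
```

Real-world savings are lower because meetings with external participants still traverse the Web, which is in line with Cisco's "up to 90%" estimate.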

With a 12-core processor, 4 Gbytes of memory and a 256-Gbyte hard drive, a single WebEx Node blade can accelerate the performance of up to 500 WebEx sessions. When the node reaches capacity, it simply overflows to the Web automatically. Meetings that include both internal and external participants traverse the Web via encrypted SSL traffic.

Look for more from Cisco on this: the company hopes to partner with third parties to build something similar for other services. – Nick Hoover

Data Center & Storage: Cisco - Unified Computing System

Category Judges:

Steven Hill – InformationWeek

Ray Lucchesi – Silverton Consulting

Through the early part of 2009 there was plenty of speculation about whether networking giant Cisco would make a move into the general computing hardware space. Well, the waiting is finally over: on April 16th Cisco announced its first computing platform, the Cisco Unified Computing System (UCS). Based on a blade server core, the UCS offers all of the efficiencies inherent in blade server technology - but ups the ante by adopting some pretty interesting twists on conventional server/network relationships.

It's obvious that Cisco was targeting the high-density virtualization market when designing the system. The UCS chassis and blades take a more streamlined approach than others in the industry by reducing the number of components in each chassis without a huge compromise in processor density at the rack level. At present each 6U chassis can support up to eight B200 M1 half-width blades or four B250 M1 full-width blade servers. Both blade designs have two sockets for the new Intel Xeon 5500-series (Nehalem) quad-core processors, with the key difference between the two being RAM capacity. The B200 can hold a substantial 12 DIMMs (96GB) of DDR3 memory, but the B250 offers an eye-popping 48 DIMMs - or up to 384GB of memory per full-width blade. This allows configurations ranging from 64 cores and 768GB per chassis to only 32 cores but 1,536GB per chassis for those apps that could benefit from a little more memory.
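
Those per-chassis figures follow directly from the blade counts and DIMM capacities quoted above; the 8GB-per-DIMM value used below is derived from the 12-DIMM/96GB B200 configuration. A quick sanity check in Python:

```python
# Per-chassis capacity for the two UCS blade options, from the figures above:
# 8 half-width B200 M1 or 4 full-width B250 M1 blades per 6U chassis,
# each blade with two quad-core Xeon 5500 sockets.
CORES_PER_BLADE = 2 * 4            # two sockets x four cores

configs = {
    "B200 M1 (half-width)": {"blades": 8, "dimms": 12, "gb_per_dimm": 8},
    "B250 M1 (full-width)": {"blades": 4, "dimms": 48, "gb_per_dimm": 8},
}

for name, c in configs.items():
    cores  = c["blades"] * CORES_PER_BLADE
    memory = c["blades"] * c["dimms"] * c["gb_per_dimm"]
    print(f"{name}: {cores} cores, {memory} GB per chassis")

# B200 M1 (half-width): 64 cores, 768 GB per chassis
# B250 M1 (full-width): 32 cores, 1536 GB per chassis
```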

With memory no longer an issue, the next big trick was supplying those blades with enough IO, so each half-width blade is designed to support 20Gbps of redundant IO throughput, while the full-width blade has 40Gbps of bandwidth. At present Cisco offers converged-fabric blade adapters from both Emulex and QLogic, as well as its own M81KR Virtual Interface mezzanine IO card based on its Unified Fabric design. All three cards are dual-port 10Gb adapters capable of offering numerous virtual interfaces that can be configured for either Ethernet or Fibre Channel traffic.

To manage all this bandwidth, the backplane of the chassis has room for dual UCS 2104XP Fabric Extenders that provide 80Gbps of throughput, aggregate all IO traffic and provide the interface for managing IO and blade server configuration. Breaking from conventional wisdom, Cisco has no management modules in the chassis and instead utilizes external Cisco UCS 6100-series Fabric Interconnects to aggregate the IO from, and provide role-based management for, all devices within the UCS chassis, as well as to provide outside connectivity for FC and Ethernet. This allows each chassis and blade to remain stateless and supports the dynamic transfer of identities such as MAC addresses, World Wide Names and IP addresses to components within the system when they require modification or replacement.
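
Because the chassis and blades stay stateless, identities effectively live in the management layer and are simply rebound when hardware changes. The sketch below is a hypothetical illustration of that idea in Python; the data structure, field names and values are invented and do not represent Cisco's UCS Manager objects or APIs.

```python
# Hypothetical sketch of stateless identity assignment: the MAC address,
# World Wide Name and IP belong to a profile held by the management layer,
# not to the blade, so a replacement blade inherits them unchanged.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceProfile:
    name: str
    mac: str
    wwn: str
    ip: str
    blade: Optional[str] = None   # physical slot currently bound to this identity

def replace_blade(profile: ServiceProfile, new_slot: str) -> ServiceProfile:
    """Rebind the same identity to a replacement blade; nothing else changes."""
    profile.blade = new_slot
    return profile

profile = ServiceProfile("web-frontend", "00:25:B5:00:00:01",
                         "20:00:00:25:B5:00:00:01", "10.0.0.11",
                         blade="chassis-1/slot-3")
print(replace_blade(profile, "chassis-1/slot-5"))
```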

It’s clear that Cisco really examined the server virtualization challenge carefully when they designed the UCS. Everywhere you look, you can see how they focused on removing virtualization bottlenecks and enabling flexible device management at every level - plus they’ve left plenty of room for growth. That’s why the UCS is our choice for the BOI Data Center and Storage Category winner for 2009. – Steven Hill

Infrastructure: Juniper Networks - SRX650 Services Gateway

Category Judges:

Mike Fratto – InformationWeek

William Terrill – Current Analysis, Inc.

Juniper SRX650: Branch Office Swiss Army Knife - Multi-service branch boxes that combine features like switch ports, routing, firewall, VPN and so on into a single system are nothing new. But Juniper's SRX650 packs a lot of horsepower and features, with room for expansion, into a single unit. The chassis is priced at $16,000 and includes routing, switching, firewall, IPsec VPN and content filtering, all running on JunOS and managed through an on-board GUI or Juniper's Network and Security Manager.

The modular SRX650 offers a handful of common interface options, including T1/E1 modules and 16- or 24-port Ethernet modules with or without PoE. Juniper plans on adding DS3, OC3 and OC12 modules in the future. The switch fabric can push 120Gbps per switch/route engine (SRE), and dual hot-swappable SREs in active/active failover are planned for the future. Each SRE is a high-capacity computing system featuring a 12-core processor, 2GB of RAM, hardware cryptographic acceleration and hardware-based UTM signature matching.

The hardware performance is impressive: 7Gbps firewall throughput with NAT, 1.5Gbps VPN and 900Mbps AV. Two SRX650s can be installed in active/active failover mode, maximizing your investment by utilizing both units. Other security features like IPS, Web filtering, anti-virus and anti-spam require an additional software license. Dual 645W power supplies can drive 32 PoE ports at 15.4 watts per port, or more ports if each uses fewer watts.
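
The PoE budget is straightforward arithmetic: 32 ports drawing the full 15.4W IEEE 802.3af allowance come to just under 500W, which fits within a 645W supply, and lower per-port draw frees room for more ports. A quick illustrative calculation, assuming the full supply rating is available to PoE (real hardware delivers somewhat less):

```python
# PoE power-budget sketch using the SRX650 figures quoted above.
SUPPLY_WATTS   = 645        # per power supply; assumed fully available to PoE
FULL_POE_WATTS = 15.4       # IEEE 802.3af maximum per port

print(32 * FULL_POE_WATTS)            # 492.8 W for 32 full-power ports
print(int(SUPPLY_WATTS // 7.5))       # ~86 ports if each device draws only 7.5 W
                                      # (actual count is also capped by module ports)
```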

The SRX650 comes with a wealth of software features - firewall, VPN, IDP/IPS, content filtering, Unified Access Control, routing and switching - at a relatively low cost. That, coupled with Juniper's plans to add more high-availability features, new processing cards for application acceleration and integration with Juniper's Network and Security Manager (NSM), makes the SRX650 an excellent candidate for a branch office appliance, and is why it was selected for the Best of Interop Infrastructure award. – Mike Fratto

Network Management: ScienceLogic - EM7 G3

Category Judges:

Andrew Conry-Murray – InformationWeek

Bruce Boardman – Syracuse University

ScienceLogic's EM7 G3 is all about scalability. With a target audience of large enterprises and service providers, the network and business service monitoring system provides a package of features designed for large-scale deployments, including service monitoring for private and public clouds.

EM7 G3 takes a mostly agentless approach to monitoring. Customers deploy collector appliances at key points on the network. These appliances can be configured to monitor a variety of devices and software, from network hardware to applications, operating systems and servers, both physical and virtual. Collectors gather event data from logs and SNMP traps, performance statistics and configuration data, including hardware and software configurations. To speed discovery, the G3 uses fingerprints to identify device attributes, including which ports are open and what applications may be running on the device.

The company claims a single collector appliance can monitor 200 to 500 devices and accept updates on 250 items per device per minute. Collectors process much of the information themselves rather than sending raw data back to the central database, and they can be configured to send only changes back to the database. The database normalizes and stores information from the collectors.
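
Those per-collector numbers imply a substantial update rate, which is why processing data at the collector and shipping only changes matters at this scale. A quick back-of-the-envelope calculation using only the figures quoted above:

```python
# Back-of-the-envelope collector load from the claimed figures:
# 200-500 devices per collector, 250 monitored items per device per minute.
ITEMS_PER_DEVICE_PER_MIN = 250

for devices in (200, 500):
    per_min = devices * ITEMS_PER_DEVICE_PER_MIN
    print(f"{devices} devices: {per_min:,} updates/min (~{per_min / 60:,.0f}/sec)")

# 200 devices: 50,000 updates/min (~833/sec)
# 500 devices: 125,000 updates/min (~2,083/sec)
```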
