While Exchange 2007 introduced a plethora of reliability and scalability features, Exchange 2010 helps to clean up what was really a confusing set of options. E-mail managers looking for guidance on building distributed Exchange networks will be pleased to see what has been pushed into Exchange 2010. (We weren't able to test most of these features, so our analysis is based on using the management GUI and reading the reviewers' guide provided by Microsoft.)
The biggest change for administrators in Exchange 2010 is an extended but simplified capability to distribute different user mailboxes across different servers, and keep those servers highly available. The binding of message stores to physical servers has been loosened considerably, and network managers should now easily be able to replicate a message store — up to 16 copies are supported — across multiple servers, with automatic failover capability between servers.
We built two different mailbox servers (the "role-based" server system introduced in Exchange 2007 is essentially unchanged in 2010) and then created a "database availability group", a new concept in Exchange 2010 that replicates a message store database between two servers. Then, we disconnected one of the servers from the network and tested that the other copy of the database was still available. Compared with the more confusing set of options for local and remote replication in Exchange 2007, this was much easier to set up and had an easier recovery path.
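For readers who manage Exchange from the command line, a setup like the one we built can be sketched in the Exchange Management Shell roughly as follows. This is an illustrative sketch, not the exact commands we ran; all server, database, and witness names here are hypothetical:

```powershell
# Create a database availability group (DAG1 and FS1 are illustrative names)
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1

# Add the two mailbox servers to the group
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2

# Replicate an existing database (hosted on MBX1) to the second server
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX2
```

Once the second copy is seeded and healthy, pulling the plug on either server should leave the surviving copy serving the database, which matches what we saw in our test.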
Of course, having two copies of the message store doesn't help if you don't build other resiliency, such as multiple client access servers, into your deployment, so this new feature isn't going to be the last word in simplified reliability. Clustering can be an important part of a high-availability solution as well, and Exchange 2010 goes a long way toward simplifying it. In Exchange 2007, Windows Server clustering was managed entirely separately from Exchange itself, which required additional expertise and a different skill set from what the standard e-mail manager holds. Exchange 2010 doesn't remove that dependency entirely (based on the documentation we received), but it does move cluster management directly into the Exchange management system. We've observed that some e-mail managers are reluctant to use clustering because they don't understand it and don't know how to manage and control it; by moving this important part of a high-reliability system directly into Exchange, Microsoft makes it more likely that managers will be able to use clustering successfully.
A nice improvement on the scalability front is the ability to move a mailbox between databases without shutting it down. In Exchange 2007, moving mailboxes from one message store database to another could require a significant amount of downtime. In Exchange 2010, you can move a mailbox between message store databases while it is in active use. This lets the e-mail manager balance the load across servers and disk subsystems without making mailboxes unavailable. Having this feature will also let e-mail managers resist the temptation to build many small message databases rather than a few larger ones, because there's no longer any need to predict how big each mailbox and message store database will grow for load-balancing purposes.
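In the Exchange Management Shell, an online mailbox move is driven by a move request. The sketch below assumes hypothetical mailbox and database names; the cmdlets are the Exchange 2010 move-request cmdlets, but treat the details as illustrative:

```powershell
# Start an online move of a single mailbox to another database;
# the mailbox stays available to its user while the move runs
New-MoveRequest -Identity "jsmith@example.com" -TargetDatabase DB2

# Check on the move's progress
Get-MoveRequestStatistics -Identity "jsmith@example.com"
```

Because the move runs in the background, an administrator can rebalance mailboxes across databases during business hours rather than scheduling a maintenance window.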
Exchange 2010 is also internally more resilient to failures, with the ability to automatically route around and retransmit messages lost by a malfunctioning transport hub.
As with Exchange 2007, Microsoft is requiring 64-bit hardware and operating systems for Exchange 2010. Our beta copy was 64-bit only; no 32-bit version was made available. This more efficient use of hardware is coupled with better use of the I/O subsystem. Microsoft claims that Exchange 2010 not only does fewer I/O operations for the same workload, but also smoothes out the workload so that Exchange 2010 will behave better on lower-speed SATA drives. In the documentation, Microsoft claims these new features will allow larger Exchange deployments to use less-expensive hardware.
However, be careful not to under-specify the hardware: Exchange 2010 may use less disk, but it uses more memory and leans on modern multi-core CPUs to do the job. Our test system ran under VMware, and even with two CPU cores and 2GB of memory, Exchange 2010 was perceptibly slower than Exchange 2007 in the same environment. Quick advice: buy more memory!
Overall, these many reliability and scalability features add up to a significant shift in thinking on how to build large and reliable mail systems. Rather than focusing on very expensive SANs and ultra-huge servers, the combination of clustering, replication, and low-cost disk support means that reliability and scalability can be based on replicating small, inexpensive servers both within a data center and between data centers. E-mail managers thinking of deploying Exchange 2010 should step back and closely evaluate these new grid-style architectural approaches, and make sure their Exchange teams have adequate time to re-think and re-evaluate commonly held beliefs about how to build large Exchange networks.