MOF Does Not Replace ITIL
Microsoft believes that ITIL is the leading body of knowledge on best practices. For that reason, it uses ITIL as the foundation for MOF. Instead of replacing ITIL, MOF complements it and is similar to ITIL in several ways:
MOF (incorporating MSF) spans the entire IT life cycle.
Both MOF and ITIL are based on best practices for IT management, drawing on the expertise of practitioners worldwide.
The MOF body of knowledge is applicable across the business community, from small businesses to large enterprises. MOF is also not limited to those running the Microsoft platform in a homogeneous environment.
As is the case with ITIL, MOF has expanded to be more than just a documentation set. In fact, MOF is now thoroughly intertwined with several System Center components: Service Manager, Configuration Manager, and Operations Manager!
In addition, Microsoft and its partners provide a variety of resources to support MOF principles and guidance, including self-assessments, IT management tools that incorporate MOF terminology and features, training programs and certification, and consulting services.
COBIT: A Framework for IT Governance and Control
Control Objectives for Information and related Technology (COBIT) is an IT governance framework and toolset developed by ISACA, the Information Systems Audit and Control Association. COBIT enables managers to bridge the gap between control requirements, technical issues, and business risks. It emphasizes regulatory compliance and helps organizations increase the value they obtain from IT. COBIT was first released in 1996 and is now at version 4.1, with COBIT 5 set for release in late 2011. Service Manager, which is the focal point in System Center for IT compliance, implements IT governance and compliance through the IT GRC Process management pack, discussed in Chapter 13.
Total Quality Management: TQM
The goal of Total Quality Management (TQM) is to continuously improve the quality of products and processes. It functions on the premise that the quality of the products and processes is the responsibility of everyone involved with the creation or consumption of the products or services offered by the organization. TQM capitalizes on the involvement of management, workforce, suppliers, and even customers, to meet or exceed customer expectations.
Six Sigma
Six Sigma is a business management strategy, originally developed by Motorola in 1986, that seeks to identify and remove the causes of defects and errors in manufacturing and business processes. It grew out of Motorola’s drive to reduce defects by minimizing variation in processes through metrics measurement. Applications of the Six Sigma project execution methodology have since expanded to incorporate practices common in TQM and Supply Chain Management, including customer satisfaction and developing closer supplier relationships.
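To make that “metrics measurement” concrete, here is a minimal sketch of the two core Six Sigma calculations: defects per million opportunities (DPMO) and the corresponding sigma level. The 1.5-sigma shift is the conventional long-term adjustment used in Six Sigma practice; the sample figures are illustrative only.

from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    # Defects per million opportunities: the basic Six Sigma defect metric.
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    # Convert DPMO to a sigma level using the conventional 1.5-sigma shift.
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# The classic "Six Sigma" target of 3.4 defects per million opportunities
# corresponds to a sigma level of about 6.0.
print(round(sigma_level(3.4), 1))  # 6.0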
CMMI
Capability Maturity Model Integration (CMMI) is a process improvement approach providing organizations with the essential elements of effective processes. It can be used to guide process improvement—across a project, a division, or an entire organization—thus helping to integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes. Here are some benefits you can realize from CMMI:
Linking your organization’s activities to your business objectives
Increasing your visibility into your organization’s activities, helping to ensure that your service or product meets the customer’s expectations
Learning from new areas of best practice, such as measurement and risk
Business Process Management
Business Process Management (BPM) is a management approach focused on aligning all aspects of an organization with the wants and needs of clients. It is a holistic management approach, promoting business effectiveness and efficiency while striving for innovation, flexibility, and integration with technology. BPM attempts to improve processes continuously and can be considered a process-optimization process. It is argued that BPM enables organizations to be more efficient, more effective, and more capable of change than a functionally focused, traditional hierarchical management approach. BPM can help organizations gain higher customer satisfaction, product quality, delivery speed, and time-to-market speed.
Service Management Mastery: ISO 20000
You can think of ITIL and ITSM as providing a framework for IT to rethink the ways in which it contributes to and aligns with the business. ISO 20000, the first international standard for ITSM, institutionalizes these processes. ISO 20000 helps companies align IT services with business strategy, creates a formal framework for continual service improvement, and provides benchmarks for comparison with best practices.
Published in December 2005, ISO 20000 was developed to reflect the best practice guidance contained within ITIL. The standard also supports other ITSM frameworks and approaches, including MOF, CMMI, and Six Sigma. ISO 20000 consists of two major areas:
Part 1 promotes adopting an integrated process approach to effectively deliver managed services that meet business and customer requirements.
Part 2 is a “code of practice” describing the best practices for service management within the scope of ISO 20000-1.
These two areas—what to do and how to do it—have similarities to the approach taken by the other standards, including MOF.
ISO 20000 goes beyond ITIL, MOF, Six Sigma, and other frameworks in providing organizational or corporate certification for organizations that effectively adopt and implement the ISO 20000 code of practice.
Optimizing Your Infrastructure
According to Microsoft, analysts estimate that more than 70% of the typical IT budget is spent on infrastructure—managing servers, operating systems, storage, and networking. Add to that the challenge of refreshing and managing desktop and mobile devices, and there’s not much left over for anything else. Microsoft describes an Infrastructure Optimization (IO) Model that categorizes the state of one’s IT infrastructure, describing the impacts on cost, security risks, and the ability to respond to changes. Using the model shown in Figure 1.7, you can identify where your organization is and where you want to be:
Basic: Reactionary, with much time spent fighting fires
Standardized: Gaining control
Rationalized: Enabling the business
Dynamic: Being a strategic asset
FIGURE 1.7 The Infrastructure Optimization Model.
Although most organizations sit somewhere between the basic and standardized levels of this model, typically one would prefer to be a strategic asset rather than fighting fires. Once you know where you are in the model, you can use best practices from ITIL and guidance from MOF to develop a plan for progressing to a higher level. The IO Model describes the technologies and steps organizations can take to move forward, whereas MOF explains the people and processes required to improve that infrastructure. Similar to ITSM, the IO Model is a combination of people, processes, and technology.
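As a purely hypothetical illustration of using the model as a yardstick, the following sketch maps a single capability score to the four levels. A real IO assessment scores multiple capability areas (identity, desktop management, security, and so on) separately; the thresholds here are invented.

# Hypothetical only: map a 0-100 self-assessment score to an IO Model level.
IO_LEVELS = [
    (25, "Basic"),         # reactionary, fighting fires
    (50, "Standardized"),  # gaining control
    (75, "Rationalized"),  # enabling the business
    (100, "Dynamic"),      # strategic asset
]

def io_level(score: float) -> str:
    for ceiling, level in IO_LEVELS:
        if score <= ceiling:
            return level
    raise ValueError("score must be between 0 and 100")

print(io_level(40))  # Standardized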
You can find more information about Infrastructure Optimization at http://www.microsoft.com/technet/infrastructure.
About the IO Model - Not all IT shops will want or need to be dynamic. Some will choose, for all the right business reasons, to be less than dynamic! The IO Model includes a three-part goal:
Communicate that there are levels
Target the desired levels
Provide reference on how to get to the desired levels
Realize that infrastructure optimization can be by application or by function, rather than a single ranking for the entire IT department.
Factors in an IT organization’s adoption of the IO Model include cost, capability, whether IT fits into the business model as a cost center or as an asset, and the organization’s commitment to move from being reactive to proactive.
From Fighting Fires to Gaining Control
At the basic level, your infrastructure is hard to control and expensive to manage. Processes are manual, IT policies and standards are either nonexistent or not enforced, and you don’t have the tools and resources (or time and energy) to determine the overall health of your applications and IT services. Not only are your desktop and server management costs out of control, but you are also in reactive mode when it comes to security threats and user support. In addition, you tend to use manual rather than automated methods for applying software deployments and patches.
Does this sound familiar? If you can gain control of your environment, you may be more effective at work! Here are some steps to consider:
Develop standards, policies, and controls.
Alleviate security risks by developing a security approach throughout your IT organization.
Adopt best practices, such as those found in ITIL, and operational guidance found in the MOF.
Build IT to become a strategic asset.
If you can achieve operational nirvana, this will go a long way toward your job satisfaction and IT becoming a constructive part of your business.
From Gaining Control to Enabling the Business
A standardized infrastructure introduces control by using standards and policies to manage desktops and servers. These standards control how you introduce machines into your network. For example, you could use directory services to manage resources, security policies, and access to resources. Shops in a standardized state realize the value of basic standards and some policies but still tend to be reactive. Although you now have a managed IT infrastructure and are inventorying your hardware and software assets and starting to manage licenses, your patches, software deployments, and desktop services are not yet automated. Security-wise, the perimeter is now under control, although internal security may still be a bit loose. Service management becomes a recognized concept and your organization is taking steps to implement it.
To move from a standardized state to the rationalized level, you need to gain more control over your infrastructure and implement proactive policies and procedures. You might also begin to look at implementing service management. At this stage, IT can also move more toward becoming a business asset and ally, rather than a burden.
From Enabling the Business to Becoming a Strategic Asset
At the rationalized level, you have achieved firm control of desktop and service management costs. Processes and policies are in place and beginning to play a large role in supporting and expanding the business. Security is now proactive, and you are responding to threats and challenges in a rapid and controlled manner.
Using technologies such as lite-touch and zero-touch operating system deployment helps you to minimize costs, deployment time, and technical challenges for system rollouts. Because your inventory is now under control, you have minimized the number of images to manage, and desktop management is now largely automated. You also are purchasing only the software licenses and new computers the business requires, giving you a handle on costs. Security is now proactive with policies and control in place for desktops, servers, firewalls, and extranets. You have implemented service management in several areas and are taking steps to implement it more broadly across IT.
Mission Accomplished: IT as a Strategic Asset
At the dynamic level, your infrastructure is helping run the business efficiently and stay ahead of competitors. Your costs are now fully controlled. You have also achieved integration between users and data, desktops and servers, and the different departments and functions throughout your organization.
Your IT processes are automated and often incorporated into the technology itself, allowing IT to be aligned and managed according to business needs. New technology investments are able to yield specific, rapid, and measurable business benefits. Measurement is good—it helps you justify the next round of investments!
Using self-provisioning software and quarantine-like systems to ensure patch management and compliance with security policies allows you to automate your processes, which in turn improves reliability, lowers costs, and increases your service levels. Service management is implemented for all critical services with SLAs and operational reviews.
According to IDC Research (October 2006), very few organizations achieve the dynamic level of the Infrastructure Optimization Model, due to the lack of a single toolset from a single vendor that meets all requirements. By executing on its DSI vision, Microsoft aims to change this. To read more about this study, visit http://download.microsoft.com/download/a/4/4/a4474b0c-57d8-41a2-afe6-32037fa93ea6/IDC_windesktop_IO_whitepaper.pdf.
Microsoft Infrastructure Optimization Helps Reduce Costs - The April 21, 2009, issue of BizTech magazine includes an article by Russell Smith about Microsoft’s Infrastructure Optimization Model. Russell makes the following points:
Although dynamic or fully automated systems that are strategic assets to a company sometimes seem like a far-off dream, infrastructure optimization models and products can help get you closer to making IT a valuable business asset.
Microsoft’s Infrastructure Optimization is based on Gartner’s Infrastructure Maturity Model and provides a simple structure to evaluate the efficiency of core IT services, business productivity, and application platforms.
Though the ultimate goal is to make IT a business enabler across all three areas, you will need to concentrate on standardizing core services: moving your organization from a basic infrastructure (in which most IT tasks are carried out manually) to a managed infrastructure with some automation and knowledge capture.
A 2006 IDC study of 141 enterprises with 1,000 to 20,000 users found that PC standardization and security management could save up to $430 per user annually; standardizing systems management servers could save another $46 per user.
For additional information and the complete article, see http://www.biztechmagazine.com/article.asp?item_id=569.
Overview of Microsoft System Center
At the Microsoft Management Summit (MMS) in 2003, Microsoft announced System Center, envisioned as a future solution for providing customers with complete application and system management for enterprises of all sizes. (See http://www.microsoft.com/presspass/press/2003/mar03/03-18mssystemcenterpr.mspx for the original press release.) The first phase was anticipated to include Microsoft Operations Manager (MOM) 2004—later released as MOM 2005—and Systems Management Server (SMS) 2003.
What Is System Center? - System Center is an umbrella or brand name for Microsoft’s systems management family of products, and as such has new products and components added over time. System Center is not a single integrated product; it represents a means to integrate system management tools and technologies to help you with systems operations, troubleshooting, and planning.
Unlike Microsoft Office (another Microsoft product family), System Center is released in “waves”; the components are not released simultaneously. The first wave initially included SMS 2003, MOM 2005, and System Center Data Protection Manager 2006; 2006 additions included System Center Reporting Manager 2006 and System Center Capacity Planner 2006.
The second wave included Operations Manager 2007, Configuration Manager 2007, System Center Essentials 2007, Virtual Machine Manager 2007, and new releases of Data Protection Manager and Capacity Planner. Next came Virtual Machine Manager 2008, Operations Manager 2007 R2, Configuration Manager 2007 R2 and R3, DPM 2010, System Center Essentials 2010, and Service Manager 2010. Think of these as rounding out the second wave.
Microsoft has also widened the System Center product suite with recent acquisitions of Opalis and AVIcode. Organizations licensed for Microsoft System Center Server Management Suite Enterprise (SMSE) or Microsoft System Center Server Management Suite Datacenter (SMSD) may obtain Opalis and AVIcode as part of that license. AVIcode 5.7 is also available without charge to companies with a Core Infrastructure Server Enterprise or Core Infrastructure Server Datacenter license with Software Assurance.
A third wave includes the “v.Next” versions of the System Center products: Operations Manager 2012, Configuration Manager 2012, and Virtual Machine Manager 2012. The wave also includes a new version of Opalis, rebranded as System Center Orchestrator, and Service Manager 2012. System Center Advisor, previously code-named Atlanta, promises to offer a cloud-based configuration-monitoring service for Microsoft SQL Server and Windows Server deployments. Expect the list of monitored products to grow over time.
System Center builds on Microsoft’s DSI, introduced in the “Dynamic Systems Initiative” section, which is designed to deliver simplicity, automation, and flexibility in the data center across the IT environment. Microsoft System Center products share the following DSI-based characteristics:
Ease of use and deployment
Based on industry and customer knowledge
Scalability (both up to the largest enterprises and down to the smallest organizations)
Figure 1.8 illustrates the relationship between the System Center components and MOF.
FIGURE 1.8 MOF with System Center applications.
Reporting and Trend Analysis
The data gathered by the System Center products is collected in self-maintaining data warehouses, making numerous reports available. Using the SQL Server Reporting Services (SRS) engine, you can export reports to a Report Server file share; SRS’s Web Archive (MHTML) format retains links. You can also schedule and email reports, enabling users to open these reports without accessing the product console.
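As a minimal sketch of pulling a rendered report without the console, the following uses SRS’s URL access syntax (rs:Command and rs:Format are standard URL access parameters). The server name, report path, and credentials are hypothetical, and the requests-ntlm package is assumed for Windows authentication.

# Minimal sketch: render a report via SRS URL access and save it locally.
# The server, report path, and credentials below are hypothetical.
from urllib.parse import quote
import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests-ntlm

SERVER = "http://reports.example.com/ReportServer"
REPORT = "/SystemCenter/AlertsPerDay"

# rs:Format=MHTML produces the Web Archive format mentioned above.
url = f"{SERVER}?{quote(REPORT)}&rs:Command=Render&rs:Format=MHTML"
response = requests.get(url, auth=HttpNtlmAuth(r"DOMAIN\svc_reports", "password"))
response.raise_for_status()

with open("AlertsPerDay.mhtml", "wb") as f:
    f.write(response.content)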
The Service Manager data warehouse is installed in a management group separate from the other Service Manager components, leading to speculation that this will ultimately be a unified data warehouse used by all products in the System Center suite.
Operations Management
The design pillars of Operations Manager 2012, currently in development, include a holistic view of application health, reduced TCO, and decreased time to value for partners:
Taking a holistic view of application health means OpsMgr 2012 will monitor an application not only from inside the infrastructure up to the application itself but also from the end-user perspective. Microsoft’s acquisition of AVIcode will help in this endeavor, pinpointing problems down to the specific line of code in the application (the sketch following this list illustrates the general idea).
TCO is reduced through a simplified infrastructure with elimination of the root management server (RMS), reliable monitoring, increased scale and performance, and operational continuity.
To achieve decreased time to value for partners, Microsoft is looking at module extensibility, downloading dependencies required by management packs, adding templates, and additional dashboards.
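The following is a conceptual illustration only of line-level failure diagnostics, using Python’s standard traceback module. It shows the kind of detail an application-monitoring agent can surface, not how AVIcode itself is implemented.

# Conceptual illustration: report where a failure occurred, down to the
# file, line number, and source line. Not AVIcode's implementation.
import traceback

def monitored(func):
    # Wrap a function so any failure is reported with its exact location.
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            frame = traceback.extract_tb(exc.__traceback__)[-1]
            print(f"ALERT: {exc!r} at {frame.filename}:{frame.lineno} -> {frame.line}")
            raise
    return wrapper

@monitored
def lookup_price(catalog, sku):
    return catalog[sku]  # fails for an unknown SKU

try:
    lookup_price({}, "unknown-sku")
except KeyError:
    pass  # the alert above already pinpointed the failing line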
Operations Manager 2012 also adds extensively to the network monitoring capabilities available in OpsMgr 2007 R2 by incorporating EMC Smarts technology.
In 2010, Gartner Group placed Operations Manager in its Magic Quadrant for IT Event Correlation and Analysis.
Configuration and Change Management
System Center Configuration Manager is Microsoft’s systems management solution for change and configuration management. In 2009, Gartner Group placed Configuration Manager in its Magic Quadrant for software change and configuration management for distributed platforms.
With ConfigMgr 2012, Microsoft’s first foray into systems management gets a new look and feel, replacing its MMC console with the standard System Center UI Framework (an Outlook-style user interface) similar to other products in the System Center suite. In addition, Microsoft redesigned software distribution and the site server hierarchy, making Configuration Manager easier to implement and use. The 2012 release also targets management at the user, not the device: delivering the right application in the right way to the right user under the right conditions. This enables users to be productive anywhere and anytime, while maintaining IT control and balancing it against the need for end-user empowerment.
Service Management
System Center Service Manager implements a single point of contact for all service requests, knowledge, and workflow. Service Manager 2010 incorporates processes such as incident, problem, and change management.
Service Manager fills a gap in Operations Manager: What occurs when OpsMgr detects a condition that requires human intervention and tracking for resolution? Until Service Manager, the answer was to create a ticket or incident in one’s help desk application. Now, within the System Center framework, OpsMgr can hand off incident management to Service Manager.
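Service Manager surfaces this handoff through its Operations Manager alert connector. The sketch below is purely illustrative of the alert-to-incident mapping idea; none of its names correspond to the actual Service Manager SDK.

# Purely illustrative: turn a monitoring alert into a trackable incident.
# These class and field names are hypothetical, not the Service Manager SDK.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Incident:
    title: str
    source: str
    severity: str
    status: str = "Active"
    created: datetime = field(default_factory=datetime.utcnow)

def incident_from_alert(alert_name: str, severity: str) -> Incident:
    # Record enough context for a human to track the issue to resolution.
    return Incident(title=alert_name, source="Operations Manager", severity=severity)

print(incident_from_alert("SQL Server service stopped", "High"))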
Protecting Data
System Center’s Data Protection Manager (DPM) is a disk-based backup solution for continuous data protection, supporting Microsoft server workloads such as SQL Server, Exchange, SharePoint, virtualization hosts, and file servers, as well as Windows desktops and laptops. DPM provides byte-level backup as changes occur, utilizing Microsoft’s Virtual Disk Service and Shadow Copy technologies.
DPM 2010 adds centrally managed protection policies for roaming laptops. It also provides native site-to-site replication for disaster recovery to another DPM server or an off-site cloud provider, and includes centrally managed system state and bare metal recovery.
To support virtual machines, DPM performs host-based backups of virtual machines using a single agent on the host. To support branch office and low-bandwidth scenarios, DPM’s de-duplication and block-level filter technologies move only changed data during full backups. Additional cloud capabilities are planned for DPM 2012.
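To illustrate the block-level idea, here is a simplified sketch that compares fixed-size block hashes between two versions of a file to decide what must be sent. DPM’s actual filter driver tracks changed blocks at the volume level, so this is conceptual only, and the 64KB block size is an arbitrary choice.

# Conceptual sketch of block-level change detection: only blocks whose
# hashes differ need to cross the wire. Not DPM's actual filter driver.
import hashlib

BLOCK_SIZE = 64 * 1024  # arbitrary illustrative block size

def block_hashes(path):
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).digest())
    return hashes

def changed_blocks(old_hashes, new_hashes):
    # Indexes of blocks that must be sent to the backup server.
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]

# Example: changed = changed_blocks(block_hashes("old.vhd"), block_hashes("new.vhd"))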
Virtual Machine Management
System Center Virtual Machine Manager (VMM) is Microsoft’s management platform for heterogeneous virtualization infrastructures. VMM provides centralized management of virtual machines across several popular platforms, specifically Windows Server 2008 and 2008 R2 Hyper-V, VMware ESX 3.x, and Citrix XenServer. VMM enables increased utilization of physical servers, centralized management of a virtual infrastructure, delegation of administration in distributed environments, and rapid provisioning of new virtual machines by system administrators and users via a self-service portal.
VMM 2012 will have the ability to build both Hyper-V hosts and host clusters as it evolves from a virtualization management solution into a private cloud management and provisioning product. This provisioning will involve deploying services using service templates, in addition to simply configuring storage and networking.
Concero, a code-named self-service portal built on Silverlight, will allow IT managers to more easily deploy and manage applications in cloud infrastructures. Concero enables administrators to manage multiple private and public clouds while provisioning virtual machines and services to individual business units. Using Concero with VMM 2012, data center administrators will be able to provision not only virtual machine OS deployments but also, by leveraging App-V, deploy and manage at the application level, minimizing the number of virtual hard disk (VHD) templates to maintain.