Chapter 1: Introducing Opalis Integration Server 6.3

  • New Operator Console—The existing Opalis Operator Console will be completely discarded and replaced with a Silverlight-based web console. This is no surprise to those familiar with the existing version: the current console is based on Java, requires numerous individual downloads, and has an onerous installation, so change is clearly needed. By using Microsoft’s Silverlight technology, the new console can ship with the core product and include a proper installer.

  • A Uniform Installation—As SCO will be a full Microsoft product, it will have an installer similar to the other System Center tools. Although it is too soon to know exactly what shape that will take, installing the product will not require manually copying files into deeply nested directories.

  • Standardization—One by-product of being a full Microsoft product is that SCO will meet the high standards of excellence the company requires. As an example, OIS 6.3 is not supported on non-English operating system versions and likewise is supported only with English versions of SQL Server. This is anticipated to change with SCO. Microsoft’s significant engineering and testing resources will help make SCO a better, more standard product than OIS.

SCO 2012 Similarities

There are several areas not expected to change between OIS and SCO 2012.

  • Look and feel—The overall look of the SCO UI is not expected to change much from that of OIS. Given the tight deadlines and the many other changes that will be made, the UI will probably look similar to OIS. That is important for companies and users who invest time and resources into learning OIS now, as these skills should port over to SCO without much difficulty.

  • Export compatibility and upgradability—Microsoft has publicly announced that policies built in OIS, along with their data, will be usable in SCO. There was a great deal of concern that SCO might be so radically different as to be incompatible, negating the investments customers have made in process automation. This will not be the case. Although it is not clear how much of the product’s internals will change, the database structures are expected to be upgraded significantly. Either way, there will be a path from OIS to SCO, and it is not anticipated to be particularly difficult.


Several questions about the features of SCO remain unanswered:

  • Integration Pack compatibility—Given that there are 30 IPs available today, it isn’t clear how many of those will work on SCO without modification. The anticipation is that if existing IPs are not fully compatible, Microsoft will upgrade the most popular ones as quickly as possible. An automation platform is only as useful as the products it can automate!

  • Documentation detail improvement—Documentation is probably the most important deliverable for any software product. Whether for installation, troubleshooting, or general usage, you expect all the information to be available quickly and intuitively. The release of version 6.3 added OIS documentation to the TechNet Library for System Center. This is likely just a first step in getting all documentation online; the current IP documentation includes only content for the System Center IPs. Content specifics aside, continued improvement in documentation detail is expected.

  • Quick Integration Kit (QIK) enhancement—The best way to extend the reach and power of OIS is by taking advantage of the SDK (QIK). Although current functionality is sufficient, enhancements could make it even better. Currently, there is no improvement roadmap or guidance on what changes might be necessary to enable QIK-based IPs to work in the upcoming version of the product. For the sake of all existing QIK-based IPs (some of which were created and shipped by Microsoft), expectations are that some framework enhancements are planned.

  • Multi-tenant and remote Action Server support—The ability to support multiple customers or geographies with a centralized OIS deployment has been a longstanding customer request. Although techniques exist to implement this architecture today, the only support available is documented guidance and suggestions; there is no wizard or walkthrough to expand a current single-tenant OIS deployment into a multi-tenant or cross-geography implementation. Although how this functionality will actually be introduced into SCO is still unknown, the capability is highly anticipated.

Understanding IT Process Automation

People who are new to OIS are often also new to ITPA. You might have also seen terms such as Runbook Automation (RBA) and Data Center Automation (DCA) used to describe OIS or tools offering similar services. The next sections discuss what these terms mean and what differentiates them.

A Brief History of IT Process Automation

Fundamentally, ITPA is any automated process operating within the context of a data center. ITPA focuses on activities in a data center, those that are within the purview of the IT department, similar to how a Business Process Automation (BPA) or Business Process Management (BPM) tool focuses on processes from the perspective of the business. In fact, OIS, with its rich UI, is often mistaken for a BPA or BPM tool. The tools look similar initially, but there are two major differences:

  • OIS and other ITPA tools focus on processes that provide the underlying infrastructure to meet the business’s needs (such as server provisioning, incident management, or data refresh).

  • BPA or BPM tools focus on processes that serve the direct needs of business units or other front office tasks (typically streamlining tasks performed by employees to maximize performance).

As these terms are not standardized, you might encounter situations where the two overlap, but generally the delineation is the divide between the front office and back office.

The Origins of ITPA

Sometime in the early 2000s, Opalis Software began enhancing its job scheduling engine and adding functionality to start jobs not only based on a schedule but also as a reaction to other events. The addition of an event-driven job scheduler marked the beginnings of the shift to RBA and ITPA.

The capability to trigger a simple job scheduling task sequence as a reaction to an external event meant automatic remediation of the situation causing the event might be possible. In the nascent stages of this approach, things were fairly simplistic. As an example, the application might monitor the Windows Event Log for specific messages and upon finding a matching message take some rudimentary corrective action like restarting a service or application, clearing log files, and so on.
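The early event-driven pattern described above can be sketched as a simple matching loop: watch a log for messages that fit known error signatures and map each match to a rudimentary corrective action. This is an illustrative Python sketch, not OpalisRobot code; the signatures and action names are hypothetical:

```python
import re

# Hypothetical map of error signatures to rudimentary corrective actions.
REMEDIATIONS = {
    re.compile(r"Service '(\w+)' terminated unexpectedly"): "restart_service",
    re.compile(r"Log file .* is full"): "clear_log_files",
}

def plan_remediations(log_entries):
    """Return the corrective action for each entry matching a known signature."""
    actions = []
    for entry in log_entries:
        for pattern, action in REMEDIATIONS.items():
            if pattern.search(entry):
                actions.append((entry, action))
                break  # first matching signature wins
    return actions

entries = [
    "Service 'Spooler' terminated unexpectedly",
    "User logged on",
    "Log file C:\\app\\trace.log is full",
]
print(plan_remediations(entries))
```

A real event-driven scheduler of this era would subscribe to the Windows Event Log rather than scan a list, but the core logic (match a signature, trigger a canned response) is the same.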

This approach met with early success and was expanded to add additional sources of information. SNMP messages were an obvious choice because they were ubiquitous, and a listener could be added for a number of software packages with relative simplicity.

RBA was the first term used to describe this approach. Runbooks were still common at the time: a runbook is the set of steps used to remediate problems in a data center. At one time, these were literally a bound set of instructions on how to address any conceivable error that might be encountered, and they typically sat on a shelf in the computer room. RBA, then, was concerned with taking common errors and creating an automated set of tasks that would respond to them and either fully remediate those errors or at least take most of the steps toward remediation. From this model, a new space would emerge several years later, in which half a dozen small startups competed and ultimately were consumed by large software companies.

The final step in the creation of RBA from Opalis Software was the inclusion of monitors for specific monitoring applications such as Microsoft Operations Manager, NetIQ, HP OpenView Operations, and others. This meant that users would not need to rely on SNMP or errors as the lowest common denominator and could instead get their information directly from the monitoring tool. These add-ons were first seen in OpalisRobot, the forerunner of OIS.

As the idea of RBA grew, more companies adopted it. Each new installation brought new challenges and saw the creation of new automated runbooks. Around this time, ITIL began to take a firm hold in the United States, having already been embraced even more strongly in the UK and Europe. The ripples caused by this sudden adoption of ITIL had a dramatic impact on RBA.

ITIL Gives Rise to ITPA

ITIL introduced a structured approach to data center management, one seen as a natural evolution of the industry. ITIL is a framework of best practices on how to handle situations within an IT organization. It provides an organizational framework, roles, and (in the latest versions) prescriptive advice on how to handle issues. ITIL separated incidents from problems and mandated tools to capture incidents, changes, assets, and even the relationships between them. These tools would increase the adoption of automation and give rise to ITPA.

Until this point, Opalis Software was largely alone in the space of runbook automation and focused most of its sales and engineering efforts on the runbook. In an ITIL world, Opalis Software was dedicated to the idea of incident resolution, but without much consideration given to incident management or the lifecycle of an event beyond the event monitoring system. The adoption of ITIL convinced the company to focus on the full lifecycle of an event rather than its simple resolution, although resolution clearly remained the critical aspect. Organizations began to want the automation to reach into their service desks and create a trouble ticket for the event, rather than simply solving it. The reach of automation extended into other silos as well.

Configuration management databases (CMDBs) were also beginning to show their value and take hold. If you had a rich repository of data such as a CMDB or even an asset manager, your automation could interrogate the repository and adjust the runbook as needed. It could also update the CMDB with information based on actions taken. Consider the value of having your system automatically check your CMDB when an alert is captured, to determine whether the affected asset was in a scheduled maintenance window:

  • If the system were having maintenance performed, the alert could be safely resolved.

  • If the system were not in maintenance but the outcome of the automated runbook would result in an outage, the CMDB and the event monitor could be updated to reflect this in a manner both human operators and systems could view.
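The maintenance-window check above can be sketched as a small decision function. This is an illustrative Python sketch under assumed data shapes (the CMDB record layout and the action names are hypothetical), not an actual OIS policy:

```python
from datetime import datetime

def handle_alert(alert, cmdb_record, causes_outage, now):
    """Route an alert based on the affected asset's maintenance windows."""
    # If the asset is inside a scheduled maintenance window, the alert
    # is expected and can be safely resolved.
    in_maintenance = any(start <= now <= end
                         for start, end in cmdb_record["maintenance_windows"])
    if in_maintenance:
        return ["resolve_alert"]
    actions = ["open_incident"]
    # An automated fix that would cause an outage must be recorded where
    # both human operators and other systems can see it.
    if causes_outage:
        actions += ["update_cmdb", "update_event_monitor"]
    return actions

window = (datetime(2011, 5, 1, 2, 0), datetime(2011, 5, 1, 4, 0))
record = {"maintenance_windows": [window]}
print(handle_alert({"asset": "web01"}, record, causes_outage=True,
                   now=datetime(2011, 5, 1, 3, 0)))  # inside the window
```

In a real deployment the CMDB lookup and the event-monitor update would each be activities in the runbook; the point is that the repository drives the branch, not the alert alone.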

Reaching Across the Data Center

Today these scenarios are the heart of what OIS does for customers. The terms ITPA and RBA (and, to a lesser extent, DCA) are now synonymous. Likewise, the idea of an automation platform that can integrate with any software component has become the norm. Events and incidents live in an ecosystem of tools, and to automate a process effectively, you must include all those tools in the process. It is not unusual for a single automated process to span service desks, event monitors, change management systems, configuration tools, virtualization tools, and more. The notion of providing a broad set of integration tools usable without coding or scripting is one pioneered by Opalis Software.

Process Is King

When discussing automation, companies are often asked whether they need a documented process to use OIS. Although not required, a documented process greatly speeds the work of automation. You do, however, need to have a process. Companies often have processes, sometimes fairly elaborate ones, that are not properly documented; these processes live in the minds of the teams responsible for them, which is known as tribal knowledge. Whether a process exists in a formal document, on a whiteboard, on a bar napkin, or only in someone’s head, an effort can be made to automate it. In the past, Opalis Software offered automation workshops to help companies capture these processes so they could be automated. However, if a company has not matured to the stage where it has proper processes, be they formal or informal, no amount of automation will bring value.

Suppose you have a process. What does that process look like? How did you come up with it? What considerations were given? Keep in mind when considering these questions that process is king. To illustrate this, consider one of your own processes: did you design it with your tools in mind, regardless of whether those tools could service every step?

Processes should be governed by their goals, not limited by the tools you own. If a tool does not provide a facility for a desirable step in the process, that step normally becomes a manual one. This is how most organizations operate when designing processes: the process should provide step X at stage Y, and if software package Z does not offer step X, the gap is handled manually, or occasionally the process is altered or deprecated.

You should not need to alter processes because of shortcomings in your tools or lack of connectivity between them. OIS is a great example of letting the software you have work together in a way that was previously difficult, if not impossible, without an ITPA tool.

Old Processes and Unwanted Artifacts

Having existing processes as you enter the world of OIS is the most important prerequisite to effective IT process automation. However, you should also consider the age of your processes. If a process is more than two years old, revisit the logic behind each step. (This is actually a good idea for any process.) Such a review limits the number of compromises and exceptions carried into the new automated process; often the reasons for a compromise are no longer valid, or the compromise can be overcome using current tools.
