Historically, service-level agreements were developed between technical organizations. For example, an ISP and an IT group would agree upon service levels that the service provider would guarantee for its Internet service. Over time, the IT group might also ask for specific performance metrics for the connection, such as packet loss or jitter, based on the applications its customers were using. Although the metrics were altered somewhat, the conversation was largely technical, and the agreements were written in technical terms that both parties understood.

However, as technology becomes more integrated with business, the perspectives must also change. Instead of being strictly an agreement between technical organizations, the SLA becomes an agreement between lines of business and service providers. This presents a new challenge: writing an SLA that bridges the language and cultural differences between the two groups. For example, a business group is mainly interested in the time it takes to complete an order, while an IT organization focuses on maintaining the systems and networks that enable the order processing. The challenge is translating the technical metrics of the underlying technology components that IT understands into the business-function metrics that the line-of-business manager wants measured in the SLA. Unfortunately, SLAs have continued to be written in technical terms that business managers do not understand, or worse, think they understand.

Today, most business applications are multi-tiered. Typically, a single transaction passes through multiple components, from the desktop through the network and into multiple servers. The relationships between the components, and the impact of each component on overall application performance, are complex and challenging to understand.
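The point about multi-tiered transactions can be sketched numerically. In the toy model below, the tier names and timings are invented for illustration; the idea is simply that a transaction traverses every tier, so the response time the business user sees is the sum of the per-tier contributions, and a slowdown in one component shifts the business-visible metric by a different amount than the same slowdown elsewhere.

```python
# Minimal sketch: how per-tier latencies roll up into the response time
# a business user actually experiences. Tier names and timings are
# illustrative assumptions, not measurements from any real system.

tiers_ms = {
    "desktop": 40,      # client rendering and input handling
    "network": 25,      # round-trip transit time
    "web_server": 60,
    "app_server": 120,
    "database": 200,
}

# A single transaction traverses every tier, so the end-to-end time
# is (at minimum) the sum of the per-tier contributions.
end_to_end_ms = sum(tiers_ms.values())
print(f"End-to-end transaction time: {end_to_end_ms} ms")

# A 50% slowdown in the database tier moves the business-visible metric
# far more than the same percentage slowdown in the network tier.
slow_db = end_to_end_ms + tiers_ms["database"] * 0.5
slow_net = end_to_end_ms + tiers_ms["network"] * 0.5
print(f"With slow database: {slow_db:.0f} ms; with slow network: {slow_net:.0f} ms")
```

This is why the relationships between components matter: the business metric cannot be understood, or guaranteed, one silo at a time.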
But understanding those relationships is required to measure the business effect of changes to any component.

To take a real-world example, a technical support organization focuses on metrics such as time on hold, time to answer, and customer satisfaction. Each of these metrics is affected by the supporting technical infrastructure. Time on hold is affected by the number of available support staff, the sophistication of the call director, and the length of time it takes to address the calls ahead of those on hold. Time to answer inquiries is affected by the amount of typing the staff must do, the ease of finding existing answers in the support knowledge-base system, and the performance of that system. The SLA metrics need to account for these key elements by measuring the response time of the overall system, the availability of that system, and even the design of the system to speed transactions (such as auto-filling customer information whenever possible). This broader approach focuses on application performance metrics, not component performance.

The IT group then has the responsibility to correlate these metrics to the system architecture and to measurements of the individual components. Redundancy, load balancing, network topology, and other architectural characteristics all affect those metrics. But those individual measures are only components of the overall vital statistics: the business metrics. In practice, the IT staff responsible for the silos of technology typically communicate with each other at the component level; to serve its business customers, however, IT needs to communicate with business people from the overarching business perspective. The service provider and consumer must work together to determine the best translation of business requirements into service levels.
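The correlation of component measurements to a business-level metric can be illustrated with availability, where architectural choices such as redundancy show up directly in the rollup. The component names and availability figures below are assumptions chosen for illustration, not figures from the text; the rollup rules themselves are standard reliability arithmetic.

```python
# Illustrative sketch: rolling component-level availability figures up
# into the single business-level availability number an SLA would report.
# Component names and availabilities are assumed values for illustration.

def serial(*avails):
    """Components in series: all must be up, so availabilities multiply."""
    result = 1.0
    for a in avails:
        result *= a
    return result

def redundant(a, n):
    """n redundant copies of a component: down only if all n are down."""
    return 1.0 - (1.0 - a) ** n

network = 0.999
app_server = redundant(0.99, 2)   # two load-balanced app servers
database = 0.995

business_availability = serial(network, app_server, database)
print(f"Business-level availability: {business_availability:.4%}")
```

Note how redundancy changes the picture: a single 99% app server would drag the business metric below 99%, while the load-balanced pair contributes 99.99%, leaving the database as the dominant term. This is the kind of translation, from architectural characteristics to a business-level number, that the SLA conversation requires.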
Then the provider must determine the mechanisms for maintaining those service levels, measuring them, and proactively engineering compliance. Using these more comprehensive business measurements is certainly more complex than monitoring individual elements, but it provides significantly more benefit to the consumer.
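Measuring compliance against a business-level service target can itself be sketched simply. The target below (95% of transactions completing within 2 seconds over a reporting window) and the sample data are hypothetical, chosen only to show the shape of the measurement.

```python
# Minimal sketch of measuring compliance with a hypothetical SLA target:
# 95% of transactions complete within 2 seconds, assessed per reporting
# window. The response-time samples are invented for illustration.

def fraction_within(samples_ms, threshold_ms):
    """Fraction of transactions completing within the threshold."""
    within = sum(1 for s in samples_ms if s <= threshold_ms)
    return within / len(samples_ms)

# Simulated end-to-end response times for one reporting window.
window_samples_ms = [450, 520, 610, 1800, 2400, 700, 530, 490, 95, 3100]

target_fraction = 0.95   # 95% of transactions...
threshold_ms = 2000      # ...within 2 seconds

compliance = fraction_within(window_samples_ms, threshold_ms)
status = "PASS" if compliance >= target_fraction else "MISS"
print(f"{compliance:.0%} of transactions met the target ({status})")
```

A measurement like this reports what the business actually experiences; drilling down from a missed window to the responsible component is then the IT group's correlation task described above.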