In a previous column, I noted some reasons why SNMP should be re-engineered around XML and event-based bus architectures. That column drew many responses, most of them critical of the need for such a drastic change. Critics cited the additional overhead required in an embedded device agent and the burden of requiring users to learn a new programming language when the current command-line interface works just fine.

Change often is difficult to embrace but must happen so the technology industry can better serve users and applications. The original focus of management was on fault detection and recovery. Failure rates were high and diagnostic tools immature. The situation today is drastically different. Hardware now is designed with fewer components, lower power consumption and heat generation, intelligent embedded microprocessors and better manufacturing quality. The result is almost an order of magnitude fewer failures.

The latest techniques that use autonomic technology soon will let all communications hardware meet a 99.99999% uptime level. With redundant components and back-up power, that reliability level will approach 100%. Hardware fault management will no longer be the primary concern of management systems.

Software is another issue, but current developer tools, testing techniques and reusable component technology are producing more reliable code that is less prone to development errors. This, combined with the ability of operating systems to dynamically reload software components after fault identification, will raise the reliability of software to a level equal to that of hardware.

Overhead issues relating to XML exist, but they will be addressed in hardware from a network perspective and in agent software through faster microprocessors and additional memory within the hardware.
This is the evolution of embedded systems: smarter, simpler and more communicative.

The real focus of management has shifted to monitoring, performance measurement, configuration and provisioning. These will all be XML-based applications that must interact directly with corporate policy and business applications. Although I still believe the World Wide Web Consortium (W3C) should be the standards body focused on transforming systems and network management into XML-based applications, the IETF has taken the lead because of its experience. The making of standards is a slow-but-sure process. Various standards committees are in place, and new committees are being created to address the transition. Many papers and committee reports are available on Web sites such as www.ietf.org, www.oasis-open.org and www.w3.org.

The IETF was slow to don the mantle of transition but now is rushing headlong into XML network management. Fortunately, the IETF is coordinating its efforts with the Organization for the Advancement of Structured Information Standards (OASIS) and the W3C.

The driving force in the systems management world is vendors, not standards bodies. Dynamic provisioning, end-to-end service-level agreement monitoring, real-time performance measurement and policy-based configuration discovery are all capabilities required within the on-demand/utility IT model. Companies such as IBM and HP already have XML-based Web services management applications in place in support of their versions of the on-demand/utility model.

Resistance will be futile; the transition will occur rapidly in the systems management world and more slowly in the network management world. This is not all bad, because the IETF will not have to reinvent the wheel and might even adopt the same management XML schemas as the systems world, thereby unifying systems and network management under one application, development and operations structure.
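To make the shift concrete, here is a sketch of what an XML-based management request can look like on the wire, in the style of the IETF's NETCONF work (its XML configuration protocol). This example asks a device for its running configuration; the message-id value is arbitrary, and a real deployment would carry this inside a secure transport session.

```xml
<!-- Illustrative NETCONF-style RPC: retrieve the device's running configuration -->
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>
```

Contrast this with an SNMP GetRequest: the XML form is self-describing, can be validated against a schema, and can be produced or consumed by the same tooling a business application already uses, which is precisely why it lends itself to policy and provisioning integration.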